repo_name | pr_number | pr_title | pr_description | author | date_created | date_merged | filepath | before_content | after_content | pr_author | previous_commit | pr_commit | comment | comment_author | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ceph/ceph-ansible | 7,326 | common: stop using ceph/daemon entrypoint | major refactor in order to make ceph-ansible use the `ceph/ceph` container image instead of `ceph/daemon` | null | 2022-10-07 12:52:53+00:00 | 2023-05-31 21:07:13+00:00 | roles/ceph-nfs/templates/ceph-nfs.service.j2 | [Unit]
Description=NFS-Ganesha file server
Documentation=http://github.com/nfs-ganesha/nfs-ganesha/wiki
{% if container_binary == 'docker' %}
After=docker.service network-online.target local-fs.target time-sync.target
Requires=docker.service
{% else %}
After=network-online.target local-fs.target time-sync.target
{% endif %}
Wants=network-online.target local-fs.target time-sync.target
[Service]
EnvironmentFile=-/etc/environment
{% if container_binary == 'podman' %}
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/{{ container_binary }} rm --storage ceph-nfs-%i
ExecStartPre=-/usr/bin/mkdir -p /var/log/ceph /var/log/ganesha
{% endif %}
ExecStartPre=-/usr/bin/{{ container_binary }} rm ceph-nfs-%i
ExecStartPre={{ '/bin/mkdir' if ansible_facts['os_family'] == 'Debian' else '/usr/bin/mkdir' }} -p /etc/ceph /etc/ganesha /var/lib/nfs/ganesha /var/log/ganesha
ExecStart=/usr/bin/{{ container_binary }} run --rm --net=host \
{% if container_binary == 'podman' %}
-d --log-driver journald --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
{% endif %}
--pids-limit={{ 0 if container_binary == 'podman' else -1 }} \
-v /etc/ceph:/etc/ceph:z \
-v /var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}/keyring:/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}/keyring:z \
-v /var/lib/ceph/radosgw/{{ cluster }}-nfs.{{ ansible_facts['hostname'] }}/keyring:/etc/ceph/keyring:z \
-v /etc/ganesha:/etc/ganesha:z \
-v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
-v /var/run/ceph:/var/run/ceph:z \
-v /var/log/ceph:/var/log/ceph:z \
-v /var/log/ganesha:/var/log/ganesha:z \
-v /etc/localtime:/etc/localtime:ro \
{{ ceph_nfs_docker_extra_env }} \
--entrypoint=/usr/bin/ganesha.nfsd \
--name=ceph-nfs-{{ ceph_nfs_service_suffix | default(ansible_facts['hostname']) }} \
{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} \
-F -L STDOUT
{% if container_binary == 'podman' %}
ExecStop=-/usr/bin/sh -c "/usr/bin/{{ container_binary }} rm -f `cat /%t/%n-cid`"
{% else %}
ExecStopPost=-/usr/bin/{{ container_binary }} stop ceph-nfs-%i
{% endif %}
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
{% if container_binary == 'podman' %}
Type=forking
PIDFile=/%t/%n-pid
{% endif %}
[Install]
WantedBy=multi-user.target
| [Unit]
Description=NFS-Ganesha file server
Documentation=http://github.com/nfs-ganesha/nfs-ganesha/wiki
{% if container_binary == 'docker' %}
After=docker.service network-online.target local-fs.target time-sync.target
Requires=docker.service
{% else %}
After=network-online.target local-fs.target time-sync.target
{% endif %}
Wants=network-online.target local-fs.target time-sync.target
[Service]
EnvironmentFile=-/etc/environment
{% if container_binary == 'podman' %}
ExecStartPre=-/usr/bin/rm -f /%t/%n-pid /%t/%n-cid
ExecStartPre=-/usr/bin/{{ container_binary }} rm --storage ceph-nfs-%i
ExecStartPre=-/usr/bin/mkdir -p /var/log/ceph /var/log/ganesha
{% endif %}
ExecStartPre=-/usr/bin/{{ container_binary }} rm ceph-nfs-%i
ExecStartPre={{ '/bin/mkdir' if ansible_facts['os_family'] == 'Debian' else '/usr/bin/mkdir' }} -p /etc/ceph /etc/ganesha /var/lib/nfs/ganesha /var/log/ganesha
ExecStart=/usr/bin/{{ container_binary }} run --rm --net=host \
{% if container_binary == 'podman' %}
-d --log-driver journald --conmon-pidfile /%t/%n-pid --cidfile /%t/%n-cid \
{% endif %}
--pids-limit={{ 0 if container_binary == 'podman' else -1 }} \
-v /etc/ceph:/etc/ceph:z \
-v /var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}/keyring:/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}/keyring:z \
-v /var/lib/ceph/radosgw/{{ cluster }}-nfs.{{ ansible_facts['hostname'] }}/keyring:/etc/ceph/keyring:z \
-v /etc/ganesha:/etc/ganesha:z \
-v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
-v /var/run/ceph:/var/run/ceph:z \
-v /var/log/ceph:/var/log/ceph:z \
-v /var/log/ganesha:/var/log/ganesha:z \
-v /etc/localtime:/etc/localtime:ro \
{{ ceph_nfs_docker_extra_env }} \
--entrypoint=/usr/bin/ganesha.nfsd \
--name=ceph-nfs-{{ ceph_nfs_service_suffix | default(ansible_facts['hostname']) }} \
{{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }} \
-F -L STDOUT
{% if container_binary == 'podman' %}
ExecStop=-/usr/bin/sh -c "/usr/bin/{{ container_binary }} rm -f `cat /%t/%n-cid`"
{% else %}
ExecStopPost=-/usr/bin/{{ container_binary }} stop ceph-nfs-%i
{% endif %}
KillMode=none
Restart=always
RestartSec=10s
TimeoutStartSec=120
TimeoutStopSec=15
{% if container_binary == 'podman' %}
Type=forking
PIDFile=/%t/%n-pid
{% endif %}
[Install]
WantedBy=multi-user.target
| guits | 5cd692dcdc8dd55036b8c283f1bca56f79965c1d | 23a8bbc6c59d78e982f343c06cfb5fe8e86cb757 | hi @gouthampacha sorry for the late answer.
I've no idea about nfsv3 here but this part needs to be reworked indeed. | guits | 13 |
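How this unit template is consumed is not shown in the row above, so here is a minimal, hypothetical sketch of the deploy side: render the Jinja2 file into a systemd template unit and start one instance per host. The task names, destination path and file mode are assumptions, not taken from this PR.

```yaml
# Hypothetical consumer of ceph-nfs.service.j2 (not part of this PR):
# render the template into a systemd template unit and start one instance,
# named after the host, matching the %i usage inside the unit.
- name: generate systemd unit file for ceph-nfs
  template:
    src: ceph-nfs.service.j2
    dest: /etc/systemd/system/ceph-nfs@.service
    owner: root
    group: root
    mode: "0644"

- name: systemd start nfs container
  systemd:
    name: "ceph-nfs@{{ ceph_nfs_service_suffix | default(ansible_facts['hostname']) }}"
    state: started
    enabled: yes
    masked: no
    daemon_reload: yes
```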
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | ```suggestion
- name: refresh /etc/ceph/osd files
``` | guits | 14 |
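The diff in this row adds a refresh of the /etc/ceph/osd JSON files before any device is zapped. Pulled out of its loop context for readability, the core of that step looks roughly like the sketch below; `ceph_volume_simple_scan` is the module the diff itself calls, while `osd_id` stands in for the loop's `item.2` and is an assumption here.

```yaml
# Sketch of the refresh step introduced by this PR: re-run `ceph-volume simple scan`
# so the JSON files under /etc/ceph/osd reference the current device paths
# before shrink-osd.yml zaps anything.
- name: refresh /etc/ceph/osd files (non containerized)
  ceph_volume_simple_scan:
    cluster: "{{ cluster }}"
    force: true
  when: not containerized_deployment | bool

- name: refresh /etc/ceph/osd files (containerized)
  command: >
    {{ container_binary }} exec ceph-osd-{{ osd_id }}
    ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ osd_id }}
  changed_when: false
  when: containerized_deployment | bool
```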
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | When passing a list of osd ids (e.g. `-e osd_to_kill=1,3,5`), if the osds are on the same host, that command would be run multiple times unnecessarily on that host. | guits | 15 |
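The review comment above is about per-host commands being repeated when several of the requested OSDs live on the same node. The after_content in this row addresses it by building a deduplicated host list with `union` and delegating to that list instead of `_osd_hosts`; extracted as a standalone sketch:

```yaml
# Deduplicate the delegation targets: each host appears exactly once in host_list,
# so host-wide commands such as `ceph-volume lvm list` run a single time per node.
- name: set_fact host_list
  set_fact:
    host_list: "{{ host_list | default([]) | union([item.0]) }}"
  loop: "{{ _osd_hosts }}"

- name: get ceph-volume lvm list data
  ceph_volume:
    cluster: "{{ cluster }}"
    action: list
  register: _lvm_list_data
  delegate_to: "{{ item }}"
  loop: "{{ host_list }}"
```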
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
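    # ceph_osd_data_json now maps each OSD id that is absent from the
    # ceph-volume lvm output (_lvm_list) to the metadata recorded by
    # "ceph-volume simple scan"; the umount/zap tasks below rely on it.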
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
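    # pkname_data stores the parent device (lsblk PKNAME) of the data
    # partition, so the whole disk rather than only the partition gets
    # zapped further down.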
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
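    # The zap above handles OSDs only known through "ceph-volume simple scan";
    # the next task zaps LVM-based OSDs through their FSID instead.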
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | ```suggestion
- name: refresh /etc/ceph/osd files
ceph_volume_simple_scan:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
cluster: "{{ cluster }}"
force: true
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
``` | guits | 16 |
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | ```suggestion
``` | guits | 17 |
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | ```suggestion
```
doesn't make sense to keep this if we know it will only run on non-containerized deployments. | guits | 18
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical disk to device path mapping has changed since the
last ceph-volume simple scan (e.g. addition or removal of disks),
a wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | This command won't work in a containerized deployment; you must run it from within a container. | guits | 19
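For illustration only, a minimal sketch of running the refresh from within the OSD container, as the reviewer asks; it mirrors the task that appears in the merged after_content above, and the ceph-osd-{{ item.2 }} container name, container_binary, cluster and _osd_hosts variables are all taken from that playbook rather than defined here:
```yaml
# Sketch (not part of the recorded diff): refresh the /etc/ceph/osd files by
# executing ceph-volume inside the running OSD container instead of on the host.
- name: refresh /etc/ceph/osd files containerized_deployment (sketch)
  command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
  changed_when: false
  delegate_to: "{{ item.0 }}"
  loop: "{{ _osd_hosts }}"
  when: containerized_deployment | bool
```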
ceph/ceph-ansible | 7,226 | [skip ci] Refresh /etc/ceph/osd json files content before zapping the disks | If the physical-disk-to-device-path mapping has changed since the
last ceph-volume simple scan (e.g. after the addition or removal of disks),
the wrong disk could be deleted.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=2071035
Signed-off-by: Teoman ONAY <tonay@redhat.com> | null | 2022-07-04 10:03:03+00:00 | 2022-07-11 07:14:41+00:00 | infrastructure-playbooks/shrink-osd.yml | ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| ---
# This playbook shrinks Ceph OSDs that have been created with ceph-volume.
# It can remove any number of OSD(s) from the cluster and ALL THEIR DATA
#
# Use it like this:
# ansible-playbook shrink-osd.yml -e osd_to_kill=0,2,6
# Prompts for confirmation to shrink, defaults to no and
# doesn't shrink the cluster. yes shrinks the cluster.
#
# ansible-playbook -e ireallymeanit=yes|no shrink-osd.yml
# Overrides the prompt using -e option. Can be used in
# automation scripts to avoid interactive prompt.
- name: gather facts and check the init system
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
become: True
tasks:
- debug: msg="gather facts on all Ceph hosts for following reference"
- name: confirm whether user really meant to remove osd(s) from the cluster
hosts: "{{ groups[mon_group_name][0] }}"
become: true
vars_prompt:
- name: ireallymeanit
prompt: Are you sure you want to shrink the cluster?
default: 'no'
private: no
vars:
mon_group_name: mons
osd_group_name: osds
pre_tasks:
- name: exit playbook, if user did not mean to shrink cluster
fail:
msg: "Exiting shrink-osd playbook, no osd(s) was/were removed..
To shrink the cluster, either say 'yes' on the prompt or
or use `-e ireallymeanit=yes` on the command line when
invoking the playbook"
when: ireallymeanit != 'yes'
- name: exit playbook, if no osd(s) was/were given
fail:
msg: "osd_to_kill must be declared
Exiting shrink-osd playbook, no OSD(s) was/were removed.
On the command line when invoking the playbook, you can use
-e osd_to_kill=0,1,2,3 argument."
when: osd_to_kill is not defined
- name: check the osd ids passed have the correct format
fail:
msg: "The id {{ item }} has wrong format, please pass the number only"
with_items: "{{ osd_to_kill.split(',') }}"
when: not item is regex("^\d+$")
tasks:
- import_role:
name: ceph-defaults
- import_role:
name: ceph-facts
tasks_from: container_binary
post_tasks:
- name: set_fact container_exec_cmd build docker exec command (containerized)
set_fact:
container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ ansible_facts['hostname'] }}"
when: containerized_deployment | bool
- name: exit playbook, if can not connect to the cluster
command: "{{ container_exec_cmd }} timeout 5 ceph --cluster {{ cluster }} health"
register: ceph_health
changed_when: false
until: ceph_health.stdout.find("HEALTH") > -1
retries: 5
delay: 2
- name: find the host(s) where the osd(s) is/are running on
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd find {{ item }}"
changed_when: false
with_items: "{{ osd_to_kill.split(',') }}"
register: find_osd_hosts
- name: set_fact osd_hosts
set_fact:
osd_hosts: "{{ osd_hosts | default([]) + [ [ (item.stdout | from_json).crush_location.host, (item.stdout | from_json).osd_fsid, item.item ] ] }}"
with_items: "{{ find_osd_hosts.results }}"
- name: set_fact _osd_hosts
set_fact:
_osd_hosts: "{{ _osd_hosts | default([]) + [ [ item.0, item.2, item.3 ] ] }}"
with_nested:
- "{{ groups.get(osd_group_name) }}"
- "{{ osd_hosts }}"
when: hostvars[item.0]['ansible_facts']['hostname'] == item.1
- name: set_fact host_list
set_fact:
host_list: "{{ host_list | default([]) | union([item.0]) }}"
loop: "{{ _osd_hosts }}"
- name: get ceph-volume lvm list data
ceph_volume:
cluster: "{{ cluster }}"
action: list
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _lvm_list_data
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
- name: set_fact _lvm_list
set_fact:
_lvm_list: "{{ _lvm_list | default({}) | combine(item.stdout | from_json) }}"
with_items: "{{ _lvm_list_data.results }}"
- name: refresh /etc/ceph/osd files non containerized_deployment
ceph_volume_simple_scan:
cluster: "{{ cluster }}"
force: true
delegate_to: "{{ item }}"
loop: "{{ host_list }}"
when: not containerized_deployment | bool
- name: refresh /etc/ceph/osd files containerized_deployment
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
changed_when: false
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: containerized_deployment | bool
- name: find /etc/ceph/osd files
find:
paths: /etc/ceph/osd
pattern: "{{ item.2 }}-*"
register: ceph_osd_data
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 not in _lvm_list.keys()
- name: slurp ceph osd files content
slurp:
src: "{{ item['files'][0]['path'] }}"
delegate_to: "{{ item.item.0 }}"
register: ceph_osd_files_content
loop: "{{ ceph_osd_data.results }}"
when:
- item.skipped is undefined
- item.matched > 0
- name: set_fact ceph_osd_files_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({ item.item.item.2: item.content | b64decode | from_json}) }}"
with_items: "{{ ceph_osd_files_content.results }}"
when: item.skipped is undefined
- name: mark osd(s) out of the cluster
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: out
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
- name: stop osd(s) service
service:
name: ceph-osd@{{ item.2 }}
state: stopped
enabled: no
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: umount osd lockbox
ansible.posix.mount:
path: "/var/lib/ceph/osd-lockbox/{{ ceph_osd_data_json[item.2]['data']['uuid'] }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- not containerized_deployment | bool
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2]['data']['uuid'] is defined
- name: umount osd data
ansible.posix.mount:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when: not containerized_deployment | bool
- name: get parent device for data partition
command: lsblk --noheadings --output PKNAME --nodeps "{{ ceph_osd_data_json[item.2]['data']['path'] }}"
register: parent_device_data_part
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['data']['path'] is defined
- name: add pkname information in ceph_osd_data_json
set_fact:
ceph_osd_data_json: "{{ ceph_osd_data_json | default({}) | combine({item.item[2]: {'pkname_data': '/dev/' + item.stdout }}, recursive=True) }}"
loop: "{{ parent_device_data_part.results }}"
when: item.skipped is undefined
- name: close dmcrypt close on devices if needed
command: "cryptsetup close {{ ceph_osd_data_json[item.2][item.3]['uuid'] }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block_dmcrypt', 'block.db_dmcrypt', 'block.wal_dmcrypt', 'data', 'journal_dmcrypt' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
until: result is succeeded
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2]['encrypted'] | default(False) | bool
- ceph_osd_data_json[item.2][item.3] is defined
- name: use ceph-volume lvm zap to destroy all partitions
ceph_volume:
cluster: "{{ cluster }}"
action: zap
destroy: true
data: "{{ ceph_osd_data_json[item.2]['pkname_data'] if item.3 == 'data' else ceph_osd_data_json[item.2][item.3]['path'] }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
with_nested:
- "{{ _osd_hosts }}"
- [ 'block', 'block.db', 'block.wal', 'journal', 'data' ]
delegate_to: "{{ item.0 }}"
failed_when: false
register: result
when:
- item.2 not in _lvm_list.keys()
- ceph_osd_data_json[item.2][item.3] is defined
- name: zap osd devices
ceph_volume:
action: "zap"
osd_fsid: "{{ item.1 }}"
environment:
CEPH_VOLUME_DEBUG: "{{ ceph_volume_debug }}"
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ item.0 }}"
loop: "{{ _osd_hosts }}"
when: item.2 in _lvm_list.keys()
- name: ensure osds are marked down
ceph_osd:
ids: "{{ osd_to_kill.split(',') }}"
cluster: "{{ cluster }}"
state: down
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: purge osd(s) from the cluster
ceph_osd:
ids: "{{ item }}"
cluster: "{{ cluster }}"
state: purge
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
run_once: true
with_items: "{{ osd_to_kill.split(',') }}"
- name: remove osd data dir
file:
path: "/var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
state: absent
loop: "{{ _osd_hosts }}"
delegate_to: "{{ item.0 }}"
- name: show ceph health
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} -s"
changed_when: false
- name: show ceph osd tree
command: "{{ container_exec_cmd }} ceph --cluster {{ cluster }} osd tree"
changed_when: false
| asm0deuz | dffe7b47de70b6eeec71a3fa86f8c407adb4dd8e | 64e08f2c0bdea6f4c4ad5862dc8f350c6adbe2cd | ```suggestion
command: "{{ container_binary }} exec ceph-osd-{{ item.2 }} ceph-volume simple scan --force /var/lib/ceph/osd/{{ cluster }}-{{ item.2 }}"
``` | guits | 20 |
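For completeness, a minimal sketch of the bare-metal counterpart of the suggestion above, reproduced from the after_content of this row; the ceph_volume_simple_scan module, the cluster variable and the host_list fact are defined by that playbook, not here:
```yaml
# Sketch based on the merged playbook: rescan OSD data paths on
# non-containerized hosts so the /etc/ceph/osd JSON files reflect the current
# device mapping before any zap is attempted.
- name: refresh /etc/ceph/osd files non containerized_deployment (sketch)
  ceph_volume_simple_scan:
    cluster: "{{ cluster }}"
    force: true
  delegate_to: "{{ item }}"
  loop: "{{ host_list }}"
  when: not containerized_deployment | bool
```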
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for Pacific and later.
Since Ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. The old path still works for Octopus but is broken from Pacific onwards.
This change fixes the issue. For now I have only added the two latest releases to the check.
Also, this only affects non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | ```suggestion
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
``` | guits | 21 |
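For readability, here is the full task that the one-line suggestion above lands in, reproduced from the after_content of this row; nothing here is new beyond consolidating it:
```yaml
# The download task with the updated ceph-mixin dashboards_out path.
- name: download ceph grafana dashboards
  get_url:
    url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
    dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
  with_items: "{{ grafana_dashboard_files }}"
  when:
    - not containerized_deployment | bool
    - not ansible_facts['os_family'] in ['RedHat', 'Suse']
```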
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for Pacific and later.
Since Ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. The old path still works for Octopus but is broken from Pacific onwards.
This change fixes the issue. For now I have only added the two latest releases to the check.
Also, this only affects non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | ```suggestion
``` | guits | 22 |
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for Pacific and later.
Since Ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. The old path still works for Octopus but is broken from Pacific onwards.
This change fixes the issue. For now I have only added the two latest releases to the check.
Also, this only affects non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | The change I proposed would be backwards compatible, as the dashboards for Octopus and earlier still live in the old location. | mitch000001 | 23
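Purely as an illustration of the backwards-compatibility argument above, and not something this PR implements, the path could in principle be selected per release; the dashboard_path variable and the release check below are hypothetical:
```yaml
# Hypothetical sketch: Octopus and earlier keep the dashboards under
# monitoring/grafana/dashboards, while Pacific onwards uses
# monitoring/ceph-mixin/dashboards_out.
- name: set dashboards path per release (hypothetical)
  set_fact:
    dashboard_path: "{{ 'monitoring/grafana/dashboards' if grafana_dashboard_version == 'octopus' else 'monitoring/ceph-mixin/dashboards_out' }}"

- name: download ceph grafana dashboards (hypothetical path selection)
  get_url:
    url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/{{ dashboard_path }}/{{ item }}"
    dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
  with_items: "{{ grafana_dashboard_files }}"
```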
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for Pacific and later.
Since Ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. The old path still works for Octopus but is broken from Pacific onwards.
This change fixes the issue. For now I have only added the two latest releases to the check.
Also, this only affects non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | We could also make this change in master and cherry-pick it into stable-6.0 onwards. Works for me. Is that the direction to go? | mitch000001 | 24
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for Pacific and later.
Since Ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. The old path still works for Octopus but is broken from Pacific onwards.
This change fixes the issue. For now I have only added the two latest releases to the check.
Also, this only affects non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | > I mean, we could also change this like that within master and cherry-pick into stable-6.0 onwards. Works for me. Is that the direction to go?
yes | guits | 25 |
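The before/after cells in the row above differ in a single task; a condensed view of that task, with both URL variants copied verbatim from the row, makes the actual change easier to spot:

```yaml
# Condensed from the row above: only the url line changes in this PR.
- name: download ceph grafana dashboards
  get_url:
    # before (dashboards generated under monitoring/grafana/dashboards, up to octopus):
    # url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
    # after (ceph-mixin output, pacific onwards):
    url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
    dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
  with_items: "{{ grafana_dashboard_files }}"
  when:
    - not containerized_deployment | bool
    - not ansible_facts['os_family'] in ['RedHat', 'Suse']
```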
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for pacific and later.
Since ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. It still works for octopus but is broken from pacific onwards.
This change fixes the issue. Currently I only added the two latest releases to the check.
Also, this is only related to non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
 | mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | > The change I proposed would be backwards compatible as the dashboards for octopus and earlier live in the old location still.
the branch 'main' isn't intended to be used for deploying stable releases. | guits | 26
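The exchange above is about backwards compatibility: as quoted, dashboards for octopus and earlier still live under `monitoring/grafana/dashboards`, while pacific onwards publishes them under `monitoring/ceph-mixin/dashboards_out`. A minimal sketch of a release-aware variant is shown below; it is an illustration only, not what was merged, and the `old_layout_releases` list and the derived `dashboard_path` fact are assumptions:

```yaml
# Hypothetical sketch only: pick the dashboard path from the targeted ceph release.
# The variable names and the release list are assumptions, not part of the PR.
- name: pick dashboard layout for the targeted ceph release
  set_fact:
    dashboard_path: "{{ 'monitoring/grafana/dashboards' if grafana_dashboard_version in old_layout_releases else 'monitoring/ceph-mixin/dashboards_out' }}"
  vars:
    old_layout_releases: ['nautilus', 'octopus']

- name: download ceph grafana dashboards
  get_url:
    url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/{{ dashboard_path }}/{{ item }}"
    dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
  with_items: "{{ grafana_dashboard_files }}"
```

As the reply notes, the maintainers did not want main carrying stable-release logic, so the simpler per-branch fix was kept and a switch like this was not needed there.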
ceph/ceph-ansible | 7,197 | fix(ceph-grafana): make dashboard download work again | This fixes the dashboard download for pacific and later.
Since ceph switched to Prometheus Monitoring Mixins, the path to the generated dashboards has changed. It still works for octopus but is broken from pacific onwards.
This change fixes the issue. Currently I only added the two latest releases to the check.
Also, this is only related to non-containerized deployments. | null | 2022-06-10 15:27:51+00:00 | 2022-06-14 12:36:24+00:00 | roles/ceph-grafana/tasks/configure_grafana.yml | ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/grafana/dashboards/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| ---
- name: install ceph-grafana-dashboards package on RedHat or SUSE
package:
name: ceph-grafana-dashboards
state: "{{ (upgrade_ceph_packages|bool) | ternary('latest','present') }}"
register: result
until: result is succeeded
when:
- not containerized_deployment | bool
- ansible_facts['os_family'] in ['RedHat', 'Suse']
tags: package-install
- name: make sure grafana is down
service:
name: grafana-server
state: stopped
- name: wait for grafana to be stopped
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
state: stopped
- name: make sure grafana configuration directories exist
file:
path: "{{ item }}"
state: directory
recurse: yes
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
with_items:
- "/etc/grafana/dashboards/ceph-dashboard"
- "/etc/grafana/provisioning/datasources"
- "/etc/grafana/provisioning/dashboards"
- "/etc/grafana/provisioning/notifiers"
- name: download ceph grafana dashboards
get_url:
url: "https://raw.githubusercontent.com/ceph/ceph/{{ grafana_dashboard_version }}/monitoring/ceph-mixin/dashboards_out/{{ item }}"
dest: "/etc/grafana/dashboards/ceph-dashboard/{{ item }}"
with_items: "{{ grafana_dashboard_files }}"
when:
- not containerized_deployment | bool
- not ansible_facts['os_family'] in ['RedHat', 'Suse']
- name: write grafana.ini
openstack.config_template.config_template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
config_type: ini
config_overrides: "{{ grafana_conf_overrides }}"
- name: write datasources provisioning config file
template:
src: datasources-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/datasources/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
- name: Write dashboards provisioning config file
template:
src: dashboards-ceph-dashboard.yml.j2
dest: /etc/grafana/provisioning/dashboards/ceph-dashboard.yml
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
when: not containerized_deployment | bool
- name: copy grafana SSL certificate file
copy:
src: "{{ grafana_crt }}"
dest: "/etc/grafana/ceph-dashboard.crt"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0640
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_crt | length > 0
- dashboard_protocol == "https"
- name: copy grafana SSL certificate key
copy:
src: "{{ grafana_key }}"
dest: "/etc/grafana/ceph-dashboard.key"
owner: "{{ grafana_uid }}"
group: "{{ grafana_uid }}"
mode: 0440
remote_src: "{{ dashboard_tls_external | bool }}"
when:
- grafana_key | length > 0
- dashboard_protocol == "https"
- name: generate a Self Signed OpenSSL certificate for dashboard
shell: |
test -f /etc/grafana/ceph-dashboard.key -a -f /etc/grafana/ceph-dashboard.crt || \
(openssl req -new -nodes -x509 -subj '/O=IT/CN=ceph-grafana' -days 3650 -keyout /etc/grafana/ceph-dashboard.key -out /etc/grafana/ceph-dashboard.crt -extensions v3_ca && \
chown {{ grafana_uid }}:{{ grafana_uid }} /etc/grafana/ceph-dashboard.key /etc/grafana/ceph-dashboard.crt)
when:
- dashboard_protocol == "https"
- grafana_key | length == 0 or grafana_crt | length == 0
- name: enable and start grafana
service:
name: grafana-server
state: restarted
enabled: true
- name: wait for grafana to start
wait_for:
host: '{{ grafana_server_addr if ip_version == "ipv4" else grafana_server_addr[1:-1] }}'
port: '{{ grafana_port }}'
| mitch000001 | 8a5fb702f2a3df46834baf6019285463bbfcc4fb | 4edaab5f4c5445cb1fafc5d8824c49717e9f96c8 | Changed. | mitch000001 | 27 |
ceph/ceph-ansible | 7,181 | [skip ci] rbd-mirror: major refactor | - Use config-key store to add cluster peer.
- Support mirroring of multiple pools.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> | null | 2022-05-12 15:23:32+00:00 | 2022-07-29 15:33:26+00:00 | roles/ceph-rbd-mirror/tasks/configure_mirroring.yml | ---
- name: enable mirroring on the pool
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool enable {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode }}"
register: result
changed_when: false
retries: 90
delay: 1
until: result is succeeded
- name: list mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool info {{ ceph_rbd_mirror_pool }}"
changed_when: false
register: mirror_peer
- name: add a mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool peer add {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }}"
changed_when: false
when: ceph_rbd_mirror_remote_user not in mirror_peer.stdout
| ---
- name: cephx tasks
when:
- cephx | bool
block:
- name: get client.bootstrap-rbd-mirror from ceph monitor
ceph_key:
name: client.bootstrap-rbd-mirror
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _bootstrap_rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key(s)
copy:
dest: "/var/lib/ceph/bootstrap-rbd-mirror/{{ cluster }}.keyring"
content: "{{ _bootstrap_rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: create rbd-mirror keyrings
ceph_key:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
user: client.admin
user_key: "/etc/ceph/{{ cluster }}.client.admin.keyring"
caps:
mon: "profile rbd-mirror"
osd: "profile rbd"
dest: "{{ item.dest }}"
secret: "{{ item.secret | default(omit) }}"
import_key: true
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
loop:
- { name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}",
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring" }
- { name: "{{ ceph_rbd_mirror_local_user }}",
dest: "/etc/ceph/{{ cluster }}.{{ ceph_rbd_mirror_local_user }}.keyring",
secret: "{{ ceph_rbd_mirror_local_user_secret | default('') }}" }
- name: get "client.rbd-mirror.{{ ansible_facts['hostname'] }}" from ceph monitor
ceph_key:
name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key
copy:
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring"
content: "{{ _rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: false
- name: start and add the rbd-mirror service instance
service:
name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_facts['hostname'] }}"
state: started
enabled: yes
masked: no
changed_when: false
when:
- not containerized_deployment | bool
- ceph_rbd_mirror_remote_user is defined
- name: set_fact ceph_rbd_mirror_pools
set_fact:
ceph_rbd_mirror_pools:
- name: "{{ ceph_rbd_mirror_pool }}"
when: ceph_rbd_mirror_pools is undefined
- name: create pool if it doesn't exist
ceph_pool:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
pg_num: "{{ item.pg_num | default(omit) }}"
pgp_num: "{{ item.pgp_num | default(omit) }}"
size: "{{ item.size | default(omit) }}"
min_size: "{{ item.min_size | default(omit) }}"
pool_type: "{{ item.type | default('replicated') }}"
rule_name: "{{ item.rule_name | default(omit) }}"
erasure_profile: "{{ item.erasure_profile | default(omit) }}"
pg_autoscale_mode: "{{ item.pg_autoscale_mode | default(omit) }}"
target_size_ratio: "{{ item.target_size_ratio | default(omit) }}"
application: "{{ item.application | default('rbd') }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ ceph_rbd_mirror_pools }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
- name: enable mirroring on the pool
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool enable {{ item.name }} {{ ceph_rbd_mirror_mode }}"
register: result
changed_when: false
retries: 60
delay: 1
until: result is succeeded
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add mirroring peer
when: ceph_rbd_mirror_remote_user is defined
block:
- name: list mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool info {{ item.name }}"
changed_when: false
register: mirror_peer
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: create a temporary file
tempfile:
path: /etc/ceph
state: file
suffix: _ceph-ansible
register: tmp_file
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: write secret to temporary file
copy:
dest: "{{ tmp_file.path }}"
content: "{{ ceph_rbd_mirror_remote_key }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add a mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool peer add {{ item.item.name }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }} --remote-mon-host {{ ceph_rbd_mirror_remote_mon_hosts }} --remote-key-file {{ tmp_file.path }}"
changed_when: false
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ mirror_peer.results }}"
when: ceph_rbd_mirror_remote_user not in item.stdout
- name: rm temporary file
file:
path: "{{ tmp_file.path }}"
state: absent
delegate_to: "{{ groups[mon_group_name][0] }}"
| guits | 3a8daafbe8c9023c6dcd8034adfcc98893e5c303 | b74ff6e22c0d1b95e71384e4d7e2fb2ad556ac39 | add a `ceph_pool` task? (create the pool if it doesn't exist already?) | guits | 28 |
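The review question above ("add a `ceph_pool` task?") is addressed in the after content of this row, which creates every entry of `ceph_rbd_mirror_pools` before enabling mirroring. Reduced to its essentials (parameter names taken from the row above, optional arguments omitted), the idea looks like this:

```yaml
# Minimal sketch of the reviewer's request: make sure the mirrored pool exists.
# Only parameters already used in the after content above appear here.
- name: create pool if it doesn't exist
  ceph_pool:
    name: "{{ item.name }}"
    cluster: "{{ cluster }}"
    application: "{{ item.application | default('rbd') }}"
  loop: "{{ ceph_rbd_mirror_pools }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
```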
ceph/ceph-ansible | 7,181 | [skip ci] rbd-mirror: major refactor | - Use config-key store to add cluster peer.
- Support mirroring of multiple pools.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com> | null | 2022-05-12 15:23:32+00:00 | 2022-07-29 15:33:26+00:00 | roles/ceph-rbd-mirror/tasks/configure_mirroring.yml | ---
- name: enable mirroring on the pool
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool enable {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_mode }}"
register: result
changed_when: false
retries: 90
delay: 1
until: result is succeeded
- name: list mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool info {{ ceph_rbd_mirror_pool }}"
changed_when: false
register: mirror_peer
- name: add a mirroring peer
command: "{{ container_exec_cmd | default('') }} rbd --cluster {{ cluster }} --keyring /etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring --name client.rbd-mirror.{{ ansible_facts['hostname'] }} mirror pool peer add {{ ceph_rbd_mirror_pool }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }}"
changed_when: false
when: ceph_rbd_mirror_remote_user not in mirror_peer.stdout
| ---
- name: cephx tasks
when:
- cephx | bool
block:
- name: get client.bootstrap-rbd-mirror from ceph monitor
ceph_key:
name: client.bootstrap-rbd-mirror
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _bootstrap_rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key(s)
copy:
dest: "/var/lib/ceph/bootstrap-rbd-mirror/{{ cluster }}.keyring"
content: "{{ _bootstrap_rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: create rbd-mirror keyrings
ceph_key:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
user: client.admin
user_key: "/etc/ceph/{{ cluster }}.client.admin.keyring"
caps:
mon: "profile rbd-mirror"
osd: "profile rbd"
dest: "{{ item.dest }}"
secret: "{{ item.secret | default(omit) }}"
import_key: true
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: "{{ no_log_on_ceph_key_tasks }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
loop:
- { name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}",
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring" }
- { name: "{{ ceph_rbd_mirror_local_user }}",
dest: "/etc/ceph/{{ cluster }}.{{ ceph_rbd_mirror_local_user }}.keyring",
secret: "{{ ceph_rbd_mirror_local_user_secret | default('') }}" }
- name: get "client.rbd-mirror.{{ ansible_facts['hostname'] }}" from ceph monitor
ceph_key:
name: "client.rbd-mirror.{{ ansible_facts['hostname'] }}"
cluster: "{{ cluster }}"
output_format: plain
state: info
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
register: _rbd_mirror_key
delegate_to: "{{ groups.get(mon_group_name)[0] }}"
run_once: true
no_log: "{{ no_log_on_ceph_key_tasks }}"
- name: copy ceph key
copy:
dest: "/etc/ceph/{{ cluster }}.client.rbd-mirror.{{ ansible_facts['hostname'] }}.keyring"
content: "{{ _rbd_mirror_key.stdout + '\n' }}"
owner: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
group: "{{ ceph_uid if containerized_deployment | bool else 'ceph' }}"
mode: "{{ ceph_keyring_permissions }}"
no_log: false
- name: start and add the rbd-mirror service instance
service:
name: "ceph-rbd-mirror@rbd-mirror.{{ ansible_facts['hostname'] }}"
state: started
enabled: yes
masked: no
changed_when: false
when:
- not containerized_deployment | bool
- ceph_rbd_mirror_remote_user is defined
- name: set_fact ceph_rbd_mirror_pools
set_fact:
ceph_rbd_mirror_pools:
- name: "{{ ceph_rbd_mirror_pool }}"
when: ceph_rbd_mirror_pools is undefined
- name: create pool if it doesn't exist
ceph_pool:
name: "{{ item.name }}"
cluster: "{{ cluster }}"
pg_num: "{{ item.pg_num | default(omit) }}"
pgp_num: "{{ item.pgp_num | default(omit) }}"
size: "{{ item.size | default(omit) }}"
min_size: "{{ item.min_size | default(omit) }}"
pool_type: "{{ item.type | default('replicated') }}"
rule_name: "{{ item.rule_name | default(omit) }}"
erasure_profile: "{{ item.erasure_profile | default(omit) }}"
pg_autoscale_mode: "{{ item.pg_autoscale_mode | default(omit) }}"
target_size_ratio: "{{ item.target_size_ratio | default(omit) }}"
application: "{{ item.application | default('rbd') }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ ceph_rbd_mirror_pools }}"
environment:
CEPH_CONTAINER_IMAGE: "{{ ceph_docker_registry + '/' + ceph_docker_image + ':' + ceph_docker_image_tag if containerized_deployment | bool else None }}"
CEPH_CONTAINER_BINARY: "{{ container_binary }}"
- name: enable mirroring on the pool
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool enable {{ item.name }} {{ ceph_rbd_mirror_mode }}"
register: result
changed_when: false
retries: 60
delay: 1
until: result is succeeded
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add mirroring peer
when: ceph_rbd_mirror_remote_user is defined
block:
- name: list mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool info {{ item.name }}"
changed_when: false
register: mirror_peer
loop: "{{ ceph_rbd_mirror_pools }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: create a temporary file
tempfile:
path: /etc/ceph
state: file
suffix: _ceph-ansible
register: tmp_file
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: write secret to temporary file
copy:
dest: "{{ tmp_file.path }}"
content: "{{ ceph_rbd_mirror_remote_key }}"
delegate_to: "{{ groups[mon_group_name][0] }}"
- name: add a mirroring peer
command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool peer add {{ item.item.name }} {{ ceph_rbd_mirror_remote_user }}@{{ ceph_rbd_mirror_remote_cluster }} --remote-mon-host {{ ceph_rbd_mirror_remote_mon_hosts }} --remote-key-file {{ tmp_file.path }}"
changed_when: false
delegate_to: "{{ groups[mon_group_name][0] }}"
loop: "{{ mirror_peer.results }}"
when: ceph_rbd_mirror_remote_user not in item.stdout
- name: rm temporary file
file:
path: "{{ tmp_file.path }}"
state: absent
delegate_to: "{{ groups[mon_group_name][0] }}"
| guits | 3a8daafbe8c9023c6dcd8034adfcc98893e5c303 | b74ff6e22c0d1b95e71384e4d7e2fb2ad556ac39 | legacy from testing?
```suggestion
retries: 60
``` | guits | 29 |
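Applying the suggestion above to the task it targets gives the version that ends up in the after content of this row, with the retry count that was presumably left over from testing lowered:

```yaml
- name: enable mirroring on the pool
  command: "{{ rbd_cmd }} --cluster {{ cluster }} mirror pool enable {{ item.name }} {{ ceph_rbd_mirror_mode }}"
  register: result
  changed_when: false
  retries: 60   # lowered from 90 per the review suggestion
  delay: 1
  until: result is succeeded
  loop: "{{ ceph_rbd_mirror_pools }}"
  delegate_to: "{{ groups[mon_group_name][0] }}"
```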
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, info vulns data still appears.
I've reworked the queries that display vulnerabilities to prevent info vulns from displaying in the:
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
I've also fixed the **Vulnerabilities Discovered** listing by doing a correct loop through the regrouped values, because values with the same path but not the same severity do not display well.
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/startScan/views.py | import markdown
from celery import group
from weasyprint import HTML
from datetime import datetime
from django.contrib import messages
from django.db.models import Count
from django.http import HttpResponse, HttpResponseRedirect, JsonResponse
from django.shortcuts import get_object_or_404, render
from django.template.loader import get_template
from django.urls import reverse
from django.utils import timezone
from django_celery_beat.models import (ClockedSchedule, IntervalSchedule, PeriodicTask)
from rolepermissions.decorators import has_permission_decorator
from reNgine.celery import app
from reNgine.common_func import *
from reNgine.definitions import ABORTED_TASK, SUCCESS_TASK
from reNgine.tasks import create_scan_activity, initiate_scan, run_command
from scanEngine.models import EngineType
from startScan.models import *
from targetApp.models import *
def scan_history(request, slug):
host = ScanHistory.objects.filter(domain__project__slug=slug).order_by('-start_scan_date')
context = {'scan_history_active': 'active', "scan_history": host}
return render(request, 'startScan/history.html', context)
def subscan_history(request, slug):
subscans = SubScan.objects.filter(scan_history__domain__project__slug=slug).order_by('-start_scan_date')
context = {'scan_history_active': 'active', "subscans": subscans}
return render(request, 'startScan/subscan_history.html', context)
def detail_scan(request, id, slug):
ctx = {}
# Get scan objects
scan = get_object_or_404(ScanHistory, id=id)
domain_id = scan.domain.id
scan_engines = EngineType.objects.order_by('engine_name').all()
recent_scans = ScanHistory.objects.filter(domain__id=domain_id)
last_scans = (
ScanHistory.objects
.filter(domain__id=domain_id)
.filter(tasks__overlap=['subdomain_discovery'])
.filter(id__lte=id)
.filter(scan_status=2)
)
# Get all kind of objects associated with our ScanHistory object
emails = Email.objects.filter(emails__in=[scan])
employees = Employee.objects.filter(employees__in=[scan])
subdomains = Subdomain.objects.filter(scan_history=scan)
endpoints = EndPoint.objects.filter(scan_history=scan)
vulns = Vulnerability.objects.filter(scan_history=scan)
vulns_tags = VulnerabilityTags.objects.filter(vuln_tags__in=vulns)
ip_addresses = IpAddress.objects.filter(ip_addresses__in=subdomains)
geo_isos = CountryISO.objects.filter(ipaddress__in=ip_addresses)
scan_activity = ScanActivity.objects.filter(scan_of__id=id).order_by('time')
cves = CveId.objects.filter(cve_ids__in=vulns)
cwes = CweId.objects.filter(cwe_ids__in=vulns)
# HTTP statuses
http_statuses = (
subdomains
.exclude(http_status=0)
.values('http_status')
.annotate(Count('http_status'))
)
# CVEs / CWes
common_cves = (
cves
.annotate(nused=Count('cve_ids'))
.order_by('-nused')
.values('name', 'nused')
[:10]
)
common_cwes = (
cwes
.annotate(nused=Count('cwe_ids'))
.order_by('-nused')
.values('name', 'nused')
[:10]
)
# Tags
common_tags = (
vulns_tags
.annotate(nused=Count('vuln_tags'))
.order_by('-nused')
.values('name', 'nused')
[:7]
)
# Countries
asset_countries = (
geo_isos
.annotate(count=Count('iso'))
.order_by('-count')
)
# Subdomains
subdomain_count = (
subdomains
.values('name')
.distinct()
.count()
)
alive_count = (
subdomains
.values('name')
.distinct()
.filter(http_status__exact=200)
.count()
)
important_count = (
subdomains
.values('name')
.distinct()
.filter(is_important=True)
.count()
)
# Endpoints
endpoint_count = (
endpoints
.values('http_url')
.distinct()
.count()
)
endpoint_alive_count = (
endpoints
.filter(http_status__exact=200) # TODO: use is_alive() func as it's more precise
.values('http_url')
.distinct()
.count()
)
# Vulnerabilities
common_vulns = (
vulns
.exclude(severity=0)
.values('name', 'severity')
.annotate(count=Count('name'))
.order_by('-count')
[:10]
)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
total_count = vulns.count()
total_count_ignore_info = vulns.exclude(severity=0).count()
# Emails
exposed_count = emails.exclude(password__isnull=True).count()
# Build render context
ctx = {
'scan_history_id': id,
'history': scan,
'scan_activity': scan_activity,
'subdomain_count': subdomain_count,
'alive_count': alive_count,
'important_count': important_count,
'endpoint_count': endpoint_count,
'endpoint_alive_count': endpoint_alive_count,
'info_count': info_count,
'low_count': low_count,
'medium_count': medium_count,
'high_count': high_count,
'critical_count': critical_count,
'unknown_count': unknown_count,
'total_vulnerability_count': total_count,
'total_vul_ignore_info_count': total_count_ignore_info,
'vulnerability_list': vulns.order_by('-severity').all(),
'scan_history_active': 'active',
'scan_engines': scan_engines,
'exposed_count': exposed_count,
'email_count': emails.count(),
'employees_count': employees.count(),
'most_recent_scans': recent_scans.order_by('-start_scan_date')[:1],
'http_status_breakdown': http_statuses,
'most_common_cve': common_cves,
'most_common_cwe': common_cwes,
'most_common_tags': common_tags,
'most_common_vulnerability': common_vulns,
'asset_countries': asset_countries,
}
# Find number of matched GF patterns
if scan.used_gf_patterns:
count_gf = {}
for gf in scan.used_gf_patterns.split(','):
count_gf[gf] = (
endpoints
.filter(matched_gf_patterns__icontains=gf)
.count()
)
ctx['matched_gf_count'] = count_gf
# Find last scan for this domain
if last_scans.count() > 1:
last_scan = last_scans.order_by('-start_scan_date')[1]
ctx['last_scan'] = last_scan
return render(request, 'startScan/detail_scan.html', ctx)
def all_subdomains(request, slug):
subdomains = Subdomain.objects.filter(target_domain__project__slug=slug)
scan_engines = EngineType.objects.order_by('engine_name').all()
alive_subdomains = subdomains.filter(http_status__exact=200) # TODO: replace this with is_alive() function
important_subdomains = (
subdomains
.filter(is_important=True)
.values('name')
.distinct()
.count()
)
context = {
'scan_history_id': id,
'scan_history_active': 'active',
'scan_engines': scan_engines,
'subdomain_count': subdomains.values('name').distinct().count(),
'alive_count': alive_subdomains.values('name').distinct().count(),
'important_count': important_subdomains
}
return render(request, 'startScan/subdomains.html', context)
def detail_vuln_scan(request, slug, id=None):
if id:
history = get_object_or_404(ScanHistory, id=id)
history.filter(domain__project__slug=slug)
context = {'scan_history_id': id, 'history': history}
else:
context = {'vuln_scan_active': 'true'}
return render(request, 'startScan/vulnerabilities.html', context)
def all_endpoints(request, slug):
context = {
'scan_history_active': 'active'
}
return render(request, 'startScan/endpoints.html', context)
def start_scan_ui(request, slug, domain_id):
domain = get_object_or_404(Domain, id=domain_id)
if request.method == "POST":
# Get imported and out-of-scope subdomains
subdomains_in = request.POST['importSubdomainTextArea'].split()
subdomains_in = [s.rstrip() for s in subdomains_in if s]
subdomains_out = request.POST['outOfScopeSubdomainTextarea'].split()
subdomains_out = [s.rstrip() for s in subdomains_out if s]
paths = request.POST['filterPath'].split()
filterPath = [s.rstrip() for s in paths if s]
if len(filterPath) > 0:
filterPath = filterPath[0]
else:
filterPath = ''
# Get engine type
engine_id = request.POST['scan_mode']
# Create ScanHistory object
scan_history_id = create_scan_object(domain_id, engine_id)
scan = ScanHistory.objects.get(pk=scan_history_id)
# Start the celery task
kwargs = {
'scan_history_id': scan.id,
'domain_id': domain.id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out,
'url_filter': filterPath
}
initiate_scan.apply_async(kwargs=kwargs)
scan.save()
# Send start notif
messages.add_message(
request,
messages.INFO,
f'Scan Started for {domain.name}')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
# GET request
engine = EngineType.objects.order_by('engine_name')
custom_engine_count = (
EngineType.objects
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'domain': domain,
'engines': engine,
'custom_engine_count': custom_engine_count}
return render(request, 'startScan/start_scan_ui.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def start_multiple_scan(request, slug):
# domain = get_object_or_404(Domain, id=host_id)
if request.method == "POST":
if request.POST.get('scan_mode', 0):
# if scan mode is available, then start the scan
# get engine type
engine_id = request.POST['scan_mode']
list_of_domains = request.POST['list_of_domain_id']
grouped_scans = []
for domain_id in list_of_domains.split(","):
# Start the celery task
scan_history_id = create_scan_object(domain_id, engine_id)
# domain = get_object_or_404(Domain, id=domain_id)
kwargs = {
'scan_history_id': scan_history_id,
'domain_id': domain_id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
# TODO: Add this to multiple scan view
# 'imported_subdomains': subdomains_in,
# 'out_of_scope_subdomains': subdomains_out
}
_scan_task = initiate_scan.si(**kwargs)
grouped_scans.append(_scan_task)
celery_group = group(grouped_scans)
celery_group.apply_async()
# Send start notif
messages.add_message(
request,
messages.INFO,
'Scan Started for multiple targets')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
else:
# this else condition will have post request from the scan page
# containing all the targets id
list_of_domain_name = []
list_of_domain_id = []
for key, value in request.POST.items():
if key != "list_target_table_length" and key != "csrfmiddlewaretoken":
domain = get_object_or_404(Domain, id=value)
list_of_domain_name.append(domain.name)
list_of_domain_id.append(value)
domain_ids = ",".join(list_of_domain_id)
# GET request
engines = EngineType.objects
custom_engine_count = (
engines
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'engines': engines,
'domain_list': list_of_domain_name,
'domain_ids': domain_ids,
'custom_engine_count': custom_engine_count
}
return render(request, 'startScan/start_multiple_scan_ui.html', context)
def export_subdomains(request, scan_id):
subdomain_list = Subdomain.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for domain in subdomain_list:
response_body += response_body + domain.name + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="subdomains_{domain_name}_{scan_start_date_str}.txt"'
)
return response
def export_endpoints(request, scan_id):
endpoint_list = EndPoint.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for endpoint in endpoint_list:
response_body += endpoint.http_url + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="endpoints_{domain_name}_{scan_start_date_str}.txt"'
)
return response
def export_urls(request, scan_id):
urls_list = Subdomain.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for url in urls_list:
if url.http_url:
response_body += response_body + url.http_url + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="urls_{domain_name}_{scan_start_date_str}.txt"'
)
return response
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scan(request, id):
obj = get_object_or_404(ScanHistory, id=id)
if request.method == "POST":
delete_dir = obj.results_dir
run_command('rm -rf ' + delete_dir)
obj.delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Scan history successfully deleted!'
)
else:
messageData = {'status': 'false'}
messages.add_message(
request,
messages.INFO,
'Oops! something went wrong!'
)
return JsonResponse(messageData)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def stop_scan(request, id):
if request.method == "POST":
scan = get_object_or_404(ScanHistory, id=id)
scan.scan_status = ABORTED_TASK
scan.save()
try:
for task_id in scan.celery_ids:
app.control.revoke(task_id, terminate=True, signal='SIGKILL')
tasks = (
ScanActivity.objects
.filter(scan_of=scan)
.filter(status=RUNNING_TASK)
.order_by('-pk')
)
for task in tasks:
task.status = ABORTED_TASK
task.time = timezone.now()
task.save()
create_scan_activity(scan.id, "Scan aborted", SUCCESS_TASK)
response = {'status': True}
messages.add_message(
request,
messages.INFO,
'Scan successfully stopped!'
)
except Exception as e:
logger.error(e)
response = {'status': False}
messages.add_message(
request,
messages.ERROR,
f'Scan failed to stop ! Error: {str(e)}'
)
return JsonResponse(response)
return scan_history(request)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def schedule_scan(request, host_id, slug):
domain = Domain.objects.get(id=host_id)
if request.method == "POST":
scheduled_mode = request.POST['scheduled_mode']
engine_type = int(request.POST['scan_mode'])
# Get imported and out-of-scope subdomains
subdomains_in = request.POST['importSubdomainTextArea'].split()
subdomains_in = [s.rstrip() for s in subdomains_in if s]
subdomains_out = request.POST['outOfScopeSubdomainTextarea'].split()
subdomains_out = [s.rstrip() for s in subdomains_out if s]
# Get engine type
engine = get_object_or_404(EngineType, id=engine_type)
timestr = str(datetime.strftime(timezone.now(), '%Y_%m_%d_%H_%M_%S'))
task_name = f'{engine.engine_name} for {domain.name}: {timestr}'
if scheduled_mode == 'periodic':
frequency_value = int(request.POST['frequency'])
frequency_type = request.POST['frequency_type']
if frequency_type == 'minutes':
period = IntervalSchedule.MINUTES
elif frequency_type == 'hours':
period = IntervalSchedule.HOURS
elif frequency_type == 'days':
period = IntervalSchedule.DAYS
elif frequency_type == 'weeks':
period = IntervalSchedule.DAYS
frequency_value *= 7
elif frequency_type == 'months':
period = IntervalSchedule.DAYS
frequency_value *= 30
schedule, _ = IntervalSchedule.objects.get_or_create(
every=frequency_value,
period=period)
kwargs = {
'domain_id': host_id,
'engine_id': engine.id,
'scan_history_id': 1,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out
}
PeriodicTask.objects.create(interval=schedule,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=json.dumps(kwargs))
elif scheduled_mode == 'clocked':
schedule_time = request.POST['scheduled_time']
clock, _ = ClockedSchedule.objects.get_or_create(
clocked_time=schedule_time)
kwargs = {
'scan_history_id': 0,
'domain_id': host_id,
'engine_id': engine.id,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out
}
PeriodicTask.objects.create(clocked=clock,
one_off=True,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=json.dumps(kwargs))
messages.add_message(
request,
messages.INFO,
f'Scan Scheduled for {domain.name}'
)
return HttpResponseRedirect(reverse('scheduled_scan_view', kwargs={'slug': slug}))
# GET request
engines = EngineType.objects
custom_engine_count = (
engines
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'domain': domain,
'engines': engines,
'custom_engine_count': custom_engine_count}
return render(request, 'startScan/schedule_scan_ui.html', context)
def scheduled_scan_view(request, slug):
scheduled_tasks = (
PeriodicTask.objects
.all()
.exclude(name='celery.backend_cleanup')
)
context = {
'scheduled_scan_active': 'active',
'scheduled_tasks': scheduled_tasks,
}
return render(request, 'startScan/schedule_scan_list.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scheduled_task(request, id):
task_object = get_object_or_404(PeriodicTask, id=id)
if request.method == "POST":
task_object.delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Scheduled Scan successfully deleted!')
else:
messageData = {'status': 'false'}
messages.add_message(
request,
messages.INFO,
'Oops! something went wrong!')
return JsonResponse(messageData)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def change_scheduled_task_status(request, id):
if request.method == 'POST':
task = PeriodicTask.objects.get(id=id)
task.enabled = not task.enabled
task.save()
return HttpResponse('')
def change_vuln_status(request, id):
if request.method == 'POST':
vuln = Vulnerability.objects.get(id=id)
vuln.open_status = not vuln.open_status
vuln.save()
return HttpResponse('')
def create_scan_object(host_id, engine_id):
'''
create task with pending status so that celery task will execute when
threads are free
'''
# get current time
current_scan_time = timezone.now()
# fetch engine and domain object
engine = EngineType.objects.get(pk=engine_id)
domain = Domain.objects.get(pk=host_id)
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.domain = domain
scan.scan_type = engine
scan.start_scan_date = current_scan_time
scan.save()
# save last scan date for domain model
domain.start_scan_date = current_scan_time
domain.save()
return scan.id
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def delete_all_scan_results(request):
if request.method == 'POST':
ScanHistory.objects.all().delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'All Scan History successfully deleted!')
return JsonResponse(messageData)
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def delete_all_screenshots(request):
if request.method == 'POST':
run_command('rm -rf /usr/src/scan_results/*')
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Screenshots successfully deleted!')
return JsonResponse(messageData)
def visualise(request, id):
scan = ScanHistory.objects.get(id=id)
context = {
'scan_id': id,
'scan_history': scan,
}
return render(request, 'startScan/visualise.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def start_organization_scan(request, id, slug):
organization = get_object_or_404(Organization, id=id)
if request.method == "POST":
engine_id = request.POST['scan_mode']
# Start Celery task for each organization's domains
for domain in organization.get_domains():
scan_history_id = create_scan_object(domain.id, engine_id)
scan = ScanHistory.objects.get(pk=scan_history_id)
kwargs = {
'scan_history_id': scan.id,
'domain_id': domain.id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
# TODO: Add this to multiple scan view
# 'imported_subdomains': subdomains_in,
# 'out_of_scope_subdomains': subdomains_out
}
initiate_scan.apply_async(kwargs=kwargs)
scan.save()
# Send start notif
ndomains = len(organization.get_domains())
messages.add_message(
request,
messages.INFO,
f'Scan Started for {ndomains} domains in organization {organization.name}')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
# GET request
engine = EngineType.objects.order_by('engine_name')
custom_engine_count = EngineType.objects.filter(default_engine=False).count()
domain_list = organization.get_domains()
context = {
'organization_data_active': 'true',
'list_organization_li': 'active',
'organization': organization,
'engines': engine,
'domain_list': domain_list,
'custom_engine_count': custom_engine_count}
return render(request, 'organization/start_scan.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def schedule_organization_scan(request, slug, id):
    organization = Organization.objects.get(id=id)
if request.method == "POST":
engine_type = int(request.POST['scan_mode'])
engine = get_object_or_404(EngineType, id=engine_type)
scheduled_mode = request.POST['scheduled_mode']
for domain in organization.get_domains():
timestr = str(datetime.strftime(timezone.now(), '%Y_%m_%d_%H_%M_%S'))
task_name = f'{engine.engine_name} for {domain.name}: {timestr}'
            # Periodic task
if scheduled_mode == 'periodic':
frequency_value = int(request.POST['frequency'])
frequency_type = request.POST['frequency_type']
if frequency_type == 'minutes':
period = IntervalSchedule.MINUTES
elif frequency_type == 'hours':
period = IntervalSchedule.HOURS
elif frequency_type == 'days':
period = IntervalSchedule.DAYS
elif frequency_type == 'weeks':
period = IntervalSchedule.DAYS
frequency_value *= 7
elif frequency_type == 'months':
period = IntervalSchedule.DAYS
frequency_value *= 30
schedule, _ = IntervalSchedule.objects.get_or_create(
every=frequency_value,
period=period
)
_kwargs = json.dumps({
'domain_id': domain.id,
'engine_id': engine.id,
'scan_history_id': 0,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': None
})
PeriodicTask.objects.create(
interval=schedule,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=_kwargs
)
# Clocked task
elif scheduled_mode == 'clocked':
schedule_time = request.POST['scheduled_time']
clock, _ = ClockedSchedule.objects.get_or_create(
clocked_time=schedule_time
)
_kwargs = json.dumps({
'domain_id': domain.id,
'engine_id': engine.id,
'scan_history_id': 0,
'scan_type': LIVE_SCAN,
'imported_subdomains': None
})
PeriodicTask.objects.create(clocked=clock,
one_off=True,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=_kwargs
)
# Send start notif
ndomains = len(organization.get_domains())
messages.add_message(
request,
messages.INFO,
f'Scan started for {ndomains} domains in organization {organization.name}'
)
return HttpResponseRedirect(reverse('scheduled_scan_view', kwargs={'slug': slug, 'id': id}))
# GET request
engine = EngineType.objects
custom_engine_count = EngineType.objects.filter(default_engine=False).count()
context = {
'scan_history_active': 'active',
'organization': organization,
'domain_list': organization.get_domains(),
'engines': engine,
'custom_engine_count': custom_engine_count
}
return render(request, 'organization/schedule_scan_ui.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scans(request, slug):
if request.method == "POST":
for key, value in request.POST.items():
if key == 'scan_history_table_length' or key == 'csrfmiddlewaretoken':
continue
scan = get_object_or_404(ScanHistory, id=value)
delete_dir = scan.results_dir
run_command('rm -rf ' + delete_dir)
scan.delete()
messages.add_message(
request,
messages.INFO,
'All Scans deleted!')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
@has_permission_decorator(PERM_MODIFY_SCAN_REPORT, redirect_url=FOUR_OH_FOUR_URL)
def customize_report(request, id):
scan = ScanHistory.objects.get(id=id)
context = {
'scan_id': id,
'scan_history': scan,
}
return render(request, 'startScan/customize_report.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_REPORT, redirect_url=FOUR_OH_FOUR_URL)
def create_report(request, id):
primary_color = '#FFB74D'
secondary_color = '#212121'
# get report type
report_type = request.GET['report_type'] if 'report_type' in request.GET else 'full'
is_ignore_info_vuln = True if 'ignore_info_vuln' in request.GET else False
if report_type == 'recon':
show_recon = True
show_vuln = False
report_name = 'Reconnaissance Report'
elif report_type == 'vulnerability':
show_recon = False
show_vuln = True
report_name = 'Vulnerability Report'
else:
# default
show_recon = True
show_vuln = True
report_name = 'Full Scan Report'
scan = ScanHistory.objects.get(id=id)
vulns = (
Vulnerability.objects
.filter(scan_history=scan)
.order_by('-severity')
) if not is_ignore_info_vuln else (
Vulnerability.objects
.filter(scan_history=scan)
.exclude(severity=0)
.order_by('-severity')
)
unique_vulns = (
Vulnerability.objects
.filter(scan_history=scan)
.values("name", "severity")
.annotate(count=Count('name'))
.order_by('-severity', '-count')
) if not is_ignore_info_vuln else (
Vulnerability.objects
.filter(scan_history=scan)
.exclude(severity=0)
.values("name", "severity")
.annotate(count=Count('name'))
.order_by('-severity', '-count')
)
subdomains = (
Subdomain.objects
.filter(scan_history=scan)
.order_by('-content_length')
)
subdomain_alive_count = (
Subdomain.objects
.filter(scan_history__id=id)
.values('name')
.distinct()
.filter(http_status__exact=200)
.count()
)
interesting_subdomains = get_interesting_subdomains(scan_history=id)
ip_addresses = (
IpAddress.objects
.filter(ip_addresses__in=subdomains)
.distinct()
)
data = {
'scan_object': scan,
'unique_vulnerabilities': unique_vulns,
'all_vulnerabilities': vulns,
'subdomain_alive_count': subdomain_alive_count,
'interesting_subdomains': interesting_subdomains,
'subdomains': subdomains,
'ip_addresses': ip_addresses,
'show_recon': show_recon,
'show_vuln': show_vuln,
'report_name': report_name,
}
# Get report related config
vuln_report_query = VulnerabilityReportSetting.objects.all()
if vuln_report_query.exists():
report = vuln_report_query[0]
data['company_name'] = report.company_name
data['company_address'] = report.company_address
data['company_email'] = report.company_email
data['company_website'] = report.company_website
data['show_rengine_banner'] = report.show_rengine_banner
data['show_footer'] = report.show_footer
data['footer_text'] = report.footer_text
data['show_executive_summary'] = report.show_executive_summary
# Replace executive_summary_description with template syntax
description = report.executive_summary_description
description = description.replace('{scan_date}', scan.start_scan_date.strftime('%d %B, %Y'))
description = description.replace('{company_name}', report.company_name)
description = description.replace('{target_name}', scan.domain.name)
description = description.replace('{subdomain_count}', str(subdomains.count()))
description = description.replace('{vulnerability_count}', str(vulns.count()))
description = description.replace('{critical_count}', str(vulns.filter(severity=4).count()))
description = description.replace('{high_count}', str(vulns.filter(severity=3).count()))
description = description.replace('{medium_count}', str(vulns.filter(severity=2).count()))
description = description.replace('{low_count}', str(vulns.filter(severity=1).count()))
description = description.replace('{info_count}', str(vulns.filter(severity=0).count()))
description = description.replace('{unknown_count}', str(vulns.filter(severity=-1).count()))
if scan.domain.description:
description = description.replace('{target_description}', scan.domain.description)
# Convert to Markdown
data['executive_summary_description'] = markdown.markdown(description)
primary_color = report.primary_color
secondary_color = report.secondary_color
data['primary_color'] = primary_color
data['secondary_color'] = secondary_color
template = get_template('report/template.html')
html = template.render(data)
pdf = HTML(string=html).write_pdf()
if 'download' in request.GET:
response = HttpResponse(pdf, content_type='application/octet-stream')
else:
response = HttpResponse(pdf, content_type='application/pdf')
return response
| import markdown
from celery import group
from weasyprint import HTML
from datetime import datetime
from django.contrib import messages
from django.db.models import Count
from django.http import HttpResponse, HttpResponseRedirect, JsonResponse
from django.shortcuts import get_object_or_404, render
from django.template.loader import get_template
from django.urls import reverse
from django.utils import timezone
from django_celery_beat.models import (ClockedSchedule, IntervalSchedule, PeriodicTask)
from rolepermissions.decorators import has_permission_decorator
from reNgine.celery import app
from reNgine.common_func import *
from reNgine.definitions import ABORTED_TASK, SUCCESS_TASK
from reNgine.tasks import create_scan_activity, initiate_scan, run_command
from scanEngine.models import EngineType
from startScan.models import *
from targetApp.models import *
def scan_history(request, slug):
host = ScanHistory.objects.filter(domain__project__slug=slug).order_by('-start_scan_date')
context = {'scan_history_active': 'active', "scan_history": host}
return render(request, 'startScan/history.html', context)
def subscan_history(request, slug):
subscans = SubScan.objects.filter(scan_history__domain__project__slug=slug).order_by('-start_scan_date')
context = {'scan_history_active': 'active', "subscans": subscans}
return render(request, 'startScan/subscan_history.html', context)
def detail_scan(request, id, slug):
ctx = {}
# Get scan objects
scan = get_object_or_404(ScanHistory, id=id)
domain_id = scan.domain.id
scan_engines = EngineType.objects.order_by('engine_name').all()
recent_scans = ScanHistory.objects.filter(domain__id=domain_id)
last_scans = (
ScanHistory.objects
.filter(domain__id=domain_id)
.filter(tasks__overlap=['subdomain_discovery'])
.filter(id__lte=id)
.filter(scan_status=2)
)
# Get all kind of objects associated with our ScanHistory object
emails = Email.objects.filter(emails__in=[scan])
employees = Employee.objects.filter(employees__in=[scan])
subdomains = Subdomain.objects.filter(scan_history=scan)
endpoints = EndPoint.objects.filter(scan_history=scan)
vulns = Vulnerability.objects.filter(scan_history=scan)
vulns_tags = VulnerabilityTags.objects.filter(vuln_tags__in=vulns)
ip_addresses = IpAddress.objects.filter(ip_addresses__in=subdomains)
geo_isos = CountryISO.objects.filter(ipaddress__in=ip_addresses)
scan_activity = ScanActivity.objects.filter(scan_of__id=id).order_by('time')
cves = CveId.objects.filter(cve_ids__in=vulns)
cwes = CweId.objects.filter(cwe_ids__in=vulns)
# HTTP statuses
http_statuses = (
subdomains
.exclude(http_status=0)
.values('http_status')
.annotate(Count('http_status'))
)
# CVEs / CWes
common_cves = (
cves
.annotate(nused=Count('cve_ids'))
.order_by('-nused')
.values('name', 'nused')
[:10]
)
common_cwes = (
cwes
.annotate(nused=Count('cwe_ids'))
.order_by('-nused')
.values('name', 'nused')
[:10]
)
# Tags
common_tags = (
vulns_tags
.annotate(nused=Count('vuln_tags'))
.order_by('-nused')
.values('name', 'nused')
[:7]
)
# Countries
asset_countries = (
geo_isos
.annotate(count=Count('iso'))
.order_by('-count')
)
# Subdomains
subdomain_count = (
subdomains
.values('name')
.distinct()
.count()
)
alive_count = (
subdomains
.values('name')
.distinct()
.filter(http_status__exact=200)
.count()
)
important_count = (
subdomains
.values('name')
.distinct()
.filter(is_important=True)
.count()
)
# Endpoints
endpoint_count = (
endpoints
.values('http_url')
.distinct()
.count()
)
endpoint_alive_count = (
endpoints
.filter(http_status__exact=200) # TODO: use is_alive() func as it's more precise
.values('http_url')
.distinct()
.count()
)
# Vulnerabilities
common_vulns = (
vulns
.exclude(severity=0)
.values('name', 'severity')
.annotate(count=Count('name'))
.order_by('-count')
[:10]
)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
total_count = vulns.count()
total_count_ignore_info = vulns.exclude(severity=0).count()
# Emails
exposed_count = emails.exclude(password__isnull=True).count()
# Build render context
ctx = {
'scan_history_id': id,
'history': scan,
'scan_activity': scan_activity,
'subdomain_count': subdomain_count,
'alive_count': alive_count,
'important_count': important_count,
'endpoint_count': endpoint_count,
'endpoint_alive_count': endpoint_alive_count,
'info_count': info_count,
'low_count': low_count,
'medium_count': medium_count,
'high_count': high_count,
'critical_count': critical_count,
'unknown_count': unknown_count,
'total_vulnerability_count': total_count,
'total_vul_ignore_info_count': total_count_ignore_info,
'vulnerability_list': vulns.order_by('-severity').all(),
'scan_history_active': 'active',
'scan_engines': scan_engines,
'exposed_count': exposed_count,
'email_count': emails.count(),
'employees_count': employees.count(),
'most_recent_scans': recent_scans.order_by('-start_scan_date')[:1],
'http_status_breakdown': http_statuses,
'most_common_cve': common_cves,
'most_common_cwe': common_cwes,
'most_common_tags': common_tags,
'most_common_vulnerability': common_vulns,
'asset_countries': asset_countries,
}
# Find number of matched GF patterns
if scan.used_gf_patterns:
count_gf = {}
for gf in scan.used_gf_patterns.split(','):
count_gf[gf] = (
endpoints
.filter(matched_gf_patterns__icontains=gf)
.count()
)
ctx['matched_gf_count'] = count_gf
# Find last scan for this domain
if last_scans.count() > 1:
last_scan = last_scans.order_by('-start_scan_date')[1]
ctx['last_scan'] = last_scan
return render(request, 'startScan/detail_scan.html', ctx)
def all_subdomains(request, slug):
subdomains = Subdomain.objects.filter(target_domain__project__slug=slug)
scan_engines = EngineType.objects.order_by('engine_name').all()
alive_subdomains = subdomains.filter(http_status__exact=200) # TODO: replace this with is_alive() function
important_subdomains = (
subdomains
.filter(is_important=True)
.values('name')
.distinct()
.count()
)
context = {
'scan_history_id': id,
'scan_history_active': 'active',
'scan_engines': scan_engines,
'subdomain_count': subdomains.values('name').distinct().count(),
'alive_count': alive_subdomains.values('name').distinct().count(),
'important_count': important_subdomains
}
return render(request, 'startScan/subdomains.html', context)
def detail_vuln_scan(request, slug, id=None):
if id:
        history = get_object_or_404(ScanHistory, id=id, domain__project__slug=slug)
context = {'scan_history_id': id, 'history': history}
else:
context = {'vuln_scan_active': 'true'}
return render(request, 'startScan/vulnerabilities.html', context)
def all_endpoints(request, slug):
context = {
'scan_history_active': 'active'
}
return render(request, 'startScan/endpoints.html', context)
def start_scan_ui(request, slug, domain_id):
domain = get_object_or_404(Domain, id=domain_id)
if request.method == "POST":
# Get imported and out-of-scope subdomains
subdomains_in = request.POST['importSubdomainTextArea'].split()
subdomains_in = [s.rstrip() for s in subdomains_in if s]
subdomains_out = request.POST['outOfScopeSubdomainTextarea'].split()
subdomains_out = [s.rstrip() for s in subdomains_out if s]
paths = request.POST['filterPath'].split()
filterPath = [s.rstrip() for s in paths if s]
if len(filterPath) > 0:
filterPath = filterPath[0]
else:
filterPath = ''
# Get engine type
engine_id = request.POST['scan_mode']
# Create ScanHistory object
scan_history_id = create_scan_object(domain_id, engine_id)
scan = ScanHistory.objects.get(pk=scan_history_id)
# Start the celery task
kwargs = {
'scan_history_id': scan.id,
'domain_id': domain.id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out,
'url_filter': filterPath
}
initiate_scan.apply_async(kwargs=kwargs)
scan.save()
# Send start notif
messages.add_message(
request,
messages.INFO,
f'Scan Started for {domain.name}')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
# GET request
engine = EngineType.objects.order_by('engine_name')
custom_engine_count = (
EngineType.objects
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'domain': domain,
'engines': engine,
'custom_engine_count': custom_engine_count}
return render(request, 'startScan/start_scan_ui.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def start_multiple_scan(request, slug):
# domain = get_object_or_404(Domain, id=host_id)
if request.method == "POST":
if request.POST.get('scan_mode', 0):
# if scan mode is available, then start the scan
# get engine type
engine_id = request.POST['scan_mode']
list_of_domains = request.POST['list_of_domain_id']
grouped_scans = []
for domain_id in list_of_domains.split(","):
# Start the celery task
scan_history_id = create_scan_object(domain_id, engine_id)
# domain = get_object_or_404(Domain, id=domain_id)
kwargs = {
'scan_history_id': scan_history_id,
'domain_id': domain_id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
# TODO: Add this to multiple scan view
# 'imported_subdomains': subdomains_in,
# 'out_of_scope_subdomains': subdomains_out
}
_scan_task = initiate_scan.si(**kwargs)
grouped_scans.append(_scan_task)
celery_group = group(grouped_scans)
celery_group.apply_async()
# Send start notif
messages.add_message(
request,
messages.INFO,
'Scan Started for multiple targets')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
else:
# this else condition will have post request from the scan page
# containing all the targets id
list_of_domain_name = []
list_of_domain_id = []
for key, value in request.POST.items():
if key != "list_target_table_length" and key != "csrfmiddlewaretoken":
domain = get_object_or_404(Domain, id=value)
list_of_domain_name.append(domain.name)
list_of_domain_id.append(value)
domain_ids = ",".join(list_of_domain_id)
# GET request
engines = EngineType.objects
custom_engine_count = (
engines
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'engines': engines,
'domain_list': list_of_domain_name,
'domain_ids': domain_ids,
'custom_engine_count': custom_engine_count
}
return render(request, 'startScan/start_multiple_scan_ui.html', context)
def export_subdomains(request, scan_id):
subdomain_list = Subdomain.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for domain in subdomain_list:
        response_body += domain.name + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="subdomains_{domain_name}_{scan_start_date_str}.txt"'
)
return response
def export_endpoints(request, scan_id):
endpoint_list = EndPoint.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for endpoint in endpoint_list:
response_body += endpoint.http_url + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="endpoints_{domain_name}_{scan_start_date_str}.txt"'
)
return response
def export_urls(request, scan_id):
urls_list = Subdomain.objects.filter(scan_history__id=scan_id)
scan = ScanHistory.objects.get(id=scan_id)
response_body = ""
for url in urls_list:
if url.http_url:
            response_body += url.http_url + "\n"
scan_start_date_str = str(scan.start_scan_date.date())
domain_name = scan.domain.name
response = HttpResponse(response_body, content_type='text/plain')
response['Content-Disposition'] = (
f'attachment; filename="urls_{domain_name}_{scan_start_date_str}.txt"'
)
return response
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scan(request, id):
obj = get_object_or_404(ScanHistory, id=id)
if request.method == "POST":
delete_dir = obj.results_dir
run_command('rm -rf ' + delete_dir)
obj.delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Scan history successfully deleted!'
)
else:
messageData = {'status': 'false'}
messages.add_message(
request,
messages.INFO,
'Oops! something went wrong!'
)
return JsonResponse(messageData)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def stop_scan(request, id):
if request.method == "POST":
scan = get_object_or_404(ScanHistory, id=id)
scan.scan_status = ABORTED_TASK
scan.save()
try:
for task_id in scan.celery_ids:
app.control.revoke(task_id, terminate=True, signal='SIGKILL')
tasks = (
ScanActivity.objects
.filter(scan_of=scan)
.filter(status=RUNNING_TASK)
.order_by('-pk')
)
for task in tasks:
task.status = ABORTED_TASK
task.time = timezone.now()
task.save()
create_scan_activity(scan.id, "Scan aborted", SUCCESS_TASK)
response = {'status': True}
messages.add_message(
request,
messages.INFO,
'Scan successfully stopped!'
)
except Exception as e:
logger.error(e)
response = {'status': False}
messages.add_message(
request,
messages.ERROR,
f'Scan failed to stop ! Error: {str(e)}'
)
return JsonResponse(response)
return scan_history(request)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def schedule_scan(request, host_id, slug):
domain = Domain.objects.get(id=host_id)
if request.method == "POST":
scheduled_mode = request.POST['scheduled_mode']
engine_type = int(request.POST['scan_mode'])
# Get imported and out-of-scope subdomains
subdomains_in = request.POST['importSubdomainTextArea'].split()
subdomains_in = [s.rstrip() for s in subdomains_in if s]
subdomains_out = request.POST['outOfScopeSubdomainTextarea'].split()
subdomains_out = [s.rstrip() for s in subdomains_out if s]
# Get engine type
engine = get_object_or_404(EngineType, id=engine_type)
timestr = str(datetime.strftime(timezone.now(), '%Y_%m_%d_%H_%M_%S'))
task_name = f'{engine.engine_name} for {domain.name}: {timestr}'
if scheduled_mode == 'periodic':
frequency_value = int(request.POST['frequency'])
frequency_type = request.POST['frequency_type']
if frequency_type == 'minutes':
period = IntervalSchedule.MINUTES
elif frequency_type == 'hours':
period = IntervalSchedule.HOURS
elif frequency_type == 'days':
period = IntervalSchedule.DAYS
elif frequency_type == 'weeks':
period = IntervalSchedule.DAYS
frequency_value *= 7
elif frequency_type == 'months':
period = IntervalSchedule.DAYS
frequency_value *= 30
schedule, _ = IntervalSchedule.objects.get_or_create(
every=frequency_value,
period=period)
kwargs = {
'domain_id': host_id,
'engine_id': engine.id,
'scan_history_id': 1,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out
}
PeriodicTask.objects.create(interval=schedule,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=json.dumps(kwargs))
elif scheduled_mode == 'clocked':
schedule_time = request.POST['scheduled_time']
clock, _ = ClockedSchedule.objects.get_or_create(
clocked_time=schedule_time)
kwargs = {
'scan_history_id': 0,
'domain_id': host_id,
'engine_id': engine.id,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': subdomains_in,
'out_of_scope_subdomains': subdomains_out
}
PeriodicTask.objects.create(clocked=clock,
one_off=True,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=json.dumps(kwargs))
messages.add_message(
request,
messages.INFO,
f'Scan Scheduled for {domain.name}'
)
return HttpResponseRedirect(reverse('scheduled_scan_view', kwargs={'slug': slug}))
# GET request
engines = EngineType.objects
custom_engine_count = (
engines
.filter(default_engine=False)
.count()
)
context = {
'scan_history_active': 'active',
'domain': domain,
'engines': engines,
'custom_engine_count': custom_engine_count}
return render(request, 'startScan/schedule_scan_ui.html', context)
def scheduled_scan_view(request, slug):
scheduled_tasks = (
PeriodicTask.objects
.all()
.exclude(name='celery.backend_cleanup')
)
context = {
'scheduled_scan_active': 'active',
'scheduled_tasks': scheduled_tasks,
}
return render(request, 'startScan/schedule_scan_list.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scheduled_task(request, id):
task_object = get_object_or_404(PeriodicTask, id=id)
if request.method == "POST":
task_object.delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Scheduled Scan successfully deleted!')
else:
messageData = {'status': 'false'}
messages.add_message(
request,
messages.INFO,
'Oops! something went wrong!')
return JsonResponse(messageData)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def change_scheduled_task_status(request, id):
if request.method == 'POST':
task = PeriodicTask.objects.get(id=id)
task.enabled = not task.enabled
task.save()
return HttpResponse('')
def change_vuln_status(request, id):
if request.method == 'POST':
vuln = Vulnerability.objects.get(id=id)
vuln.open_status = not vuln.open_status
vuln.save()
return HttpResponse('')
def create_scan_object(host_id, engine_id):
'''
    Create a ScanHistory record with pending status so that the celery task
    will execute the scan when a worker is free.
'''
# get current time
current_scan_time = timezone.now()
# fetch engine and domain object
engine = EngineType.objects.get(pk=engine_id)
domain = Domain.objects.get(pk=host_id)
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.domain = domain
scan.scan_type = engine
scan.start_scan_date = current_scan_time
scan.save()
# save last scan date for domain model
domain.start_scan_date = current_scan_time
domain.save()
return scan.id
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def delete_all_scan_results(request):
if request.method == 'POST':
ScanHistory.objects.all().delete()
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'All Scan History successfully deleted!')
return JsonResponse(messageData)
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def delete_all_screenshots(request):
if request.method == 'POST':
run_command('rm -rf /usr/src/scan_results/*')
messageData = {'status': 'true'}
messages.add_message(
request,
messages.INFO,
'Screenshots successfully deleted!')
return JsonResponse(messageData)
def visualise(request, id):
scan = ScanHistory.objects.get(id=id)
context = {
'scan_id': id,
'scan_history': scan,
}
return render(request, 'startScan/visualise.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def start_organization_scan(request, id, slug):
organization = get_object_or_404(Organization, id=id)
if request.method == "POST":
engine_id = request.POST['scan_mode']
# Start Celery task for each organization's domains
for domain in organization.get_domains():
scan_history_id = create_scan_object(domain.id, engine_id)
scan = ScanHistory.objects.get(pk=scan_history_id)
kwargs = {
'scan_history_id': scan.id,
'domain_id': domain.id,
'engine_id': engine_id,
'scan_type': LIVE_SCAN,
'results_dir': '/usr/src/scan_results',
# TODO: Add this to multiple scan view
# 'imported_subdomains': subdomains_in,
# 'out_of_scope_subdomains': subdomains_out
}
initiate_scan.apply_async(kwargs=kwargs)
scan.save()
# Send start notif
ndomains = len(organization.get_domains())
messages.add_message(
request,
messages.INFO,
f'Scan Started for {ndomains} domains in organization {organization.name}')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
# GET request
engine = EngineType.objects.order_by('engine_name')
custom_engine_count = EngineType.objects.filter(default_engine=False).count()
domain_list = organization.get_domains()
context = {
'organization_data_active': 'true',
'list_organization_li': 'active',
'organization': organization,
'engines': engine,
'domain_list': domain_list,
'custom_engine_count': custom_engine_count}
return render(request, 'organization/start_scan.html', context)
@has_permission_decorator(PERM_INITATE_SCANS_SUBSCANS, redirect_url=FOUR_OH_FOUR_URL)
def schedule_organization_scan(request, slug, id):
    organization = Organization.objects.get(id=id)
if request.method == "POST":
engine_type = int(request.POST['scan_mode'])
engine = get_object_or_404(EngineType, id=engine_type)
scheduled_mode = request.POST['scheduled_mode']
for domain in organization.get_domains():
timestr = str(datetime.strftime(timezone.now(), '%Y_%m_%d_%H_%M_%S'))
task_name = f'{engine.engine_name} for {domain.name}: {timestr}'
            # Periodic task
if scheduled_mode == 'periodic':
frequency_value = int(request.POST['frequency'])
frequency_type = request.POST['frequency_type']
if frequency_type == 'minutes':
period = IntervalSchedule.MINUTES
elif frequency_type == 'hours':
period = IntervalSchedule.HOURS
elif frequency_type == 'days':
period = IntervalSchedule.DAYS
elif frequency_type == 'weeks':
period = IntervalSchedule.DAYS
frequency_value *= 7
elif frequency_type == 'months':
period = IntervalSchedule.DAYS
frequency_value *= 30
schedule, _ = IntervalSchedule.objects.get_or_create(
every=frequency_value,
period=period
)
_kwargs = json.dumps({
'domain_id': domain.id,
'engine_id': engine.id,
'scan_history_id': 0,
'scan_type': SCHEDULED_SCAN,
'imported_subdomains': None
})
PeriodicTask.objects.create(
interval=schedule,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=_kwargs
)
# Clocked task
elif scheduled_mode == 'clocked':
schedule_time = request.POST['scheduled_time']
clock, _ = ClockedSchedule.objects.get_or_create(
clocked_time=schedule_time
)
_kwargs = json.dumps({
'domain_id': domain.id,
'engine_id': engine.id,
'scan_history_id': 0,
'scan_type': LIVE_SCAN,
'imported_subdomains': None
})
PeriodicTask.objects.create(clocked=clock,
one_off=True,
name=task_name,
task='reNgine.tasks.initiate_scan',
kwargs=_kwargs
)
# Send start notif
ndomains = len(organization.get_domains())
messages.add_message(
request,
messages.INFO,
f'Scan started for {ndomains} domains in organization {organization.name}'
)
return HttpResponseRedirect(reverse('scheduled_scan_view', kwargs={'slug': slug, 'id': id}))
# GET request
engine = EngineType.objects
custom_engine_count = EngineType.objects.filter(default_engine=False).count()
context = {
'scan_history_active': 'active',
'organization': organization,
'domain_list': organization.get_domains(),
'engines': engine,
'custom_engine_count': custom_engine_count
}
return render(request, 'organization/schedule_scan_ui.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_RESULTS, redirect_url=FOUR_OH_FOUR_URL)
def delete_scans(request, slug):
if request.method == "POST":
for key, value in request.POST.items():
if key == 'scan_history_table_length' or key == 'csrfmiddlewaretoken':
continue
scan = get_object_or_404(ScanHistory, id=value)
delete_dir = scan.results_dir
run_command('rm -rf ' + delete_dir)
scan.delete()
messages.add_message(
request,
messages.INFO,
'All Scans deleted!')
return HttpResponseRedirect(reverse('scan_history', kwargs={'slug': slug}))
@has_permission_decorator(PERM_MODIFY_SCAN_REPORT, redirect_url=FOUR_OH_FOUR_URL)
def customize_report(request, id):
scan = ScanHistory.objects.get(id=id)
context = {
'scan_id': id,
'scan_history': scan,
}
return render(request, 'startScan/customize_report.html', context)
@has_permission_decorator(PERM_MODIFY_SCAN_REPORT, redirect_url=FOUR_OH_FOUR_URL)
def create_report(request, id):
primary_color = '#FFB74D'
secondary_color = '#212121'
# get report type
report_type = request.GET['report_type'] if 'report_type' in request.GET else 'full'
is_ignore_info_vuln = True if 'ignore_info_vuln' in request.GET else False
if report_type == 'recon':
show_recon = True
show_vuln = False
report_name = 'Reconnaissance Report'
elif report_type == 'vulnerability':
show_recon = False
show_vuln = True
report_name = 'Vulnerability Report'
else:
# default
show_recon = True
show_vuln = True
report_name = 'Full Scan Report'
scan = ScanHistory.objects.get(id=id)
vulns = (
Vulnerability.objects
.filter(scan_history=scan)
.order_by('-severity')
) if not is_ignore_info_vuln else (
Vulnerability.objects
.filter(scan_history=scan)
.exclude(severity=0)
.order_by('-severity')
)
unique_vulns = (
Vulnerability.objects
.filter(scan_history=scan)
.values("name", "severity")
.annotate(count=Count('name'))
.order_by('-severity', '-count')
) if not is_ignore_info_vuln else (
Vulnerability.objects
.filter(scan_history=scan)
.exclude(severity=0)
.values("name", "severity")
.annotate(count=Count('name'))
.order_by('-severity', '-count')
)
subdomains = (
Subdomain.objects
.filter(scan_history=scan)
.order_by('-content_length')
)
subdomain_alive_count = (
Subdomain.objects
.filter(scan_history__id=id)
.values('name')
.distinct()
.filter(http_status__exact=200)
.count()
)
interesting_subdomains = get_interesting_subdomains(scan_history=id)
ip_addresses = (
IpAddress.objects
.filter(ip_addresses__in=subdomains)
.distinct()
)
data = {
'scan_object': scan,
'unique_vulnerabilities': unique_vulns,
'all_vulnerabilities': vulns,
'all_vulnerabilities_count': vulns.count(),
'subdomain_alive_count': subdomain_alive_count,
'interesting_subdomains': interesting_subdomains,
'subdomains': subdomains,
'ip_addresses': ip_addresses,
'show_recon': show_recon,
'show_vuln': show_vuln,
'report_name': report_name,
'is_ignore_info_vuln': is_ignore_info_vuln,
}
# Get report related config
vuln_report_query = VulnerabilityReportSetting.objects.all()
if vuln_report_query.exists():
report = vuln_report_query[0]
data['company_name'] = report.company_name
data['company_address'] = report.company_address
data['company_email'] = report.company_email
data['company_website'] = report.company_website
data['show_rengine_banner'] = report.show_rengine_banner
data['show_footer'] = report.show_footer
data['footer_text'] = report.footer_text
data['show_executive_summary'] = report.show_executive_summary
# Replace executive_summary_description with template syntax
description = report.executive_summary_description
description = description.replace('{scan_date}', scan.start_scan_date.strftime('%d %B, %Y'))
description = description.replace('{company_name}', report.company_name)
description = description.replace('{target_name}', scan.domain.name)
description = description.replace('{subdomain_count}', str(subdomains.count()))
description = description.replace('{vulnerability_count}', str(vulns.count()))
description = description.replace('{critical_count}', str(vulns.filter(severity=4).count()))
description = description.replace('{high_count}', str(vulns.filter(severity=3).count()))
description = description.replace('{medium_count}', str(vulns.filter(severity=2).count()))
description = description.replace('{low_count}', str(vulns.filter(severity=1).count()))
description = description.replace('{info_count}', str(vulns.filter(severity=0).count()))
description = description.replace('{unknown_count}', str(vulns.filter(severity=-1).count()))
if scan.domain.description:
description = description.replace('{target_description}', scan.domain.description)
# Convert to Markdown
data['executive_summary_description'] = markdown.markdown(description)
primary_color = report.primary_color
secondary_color = report.secondary_color
data['primary_color'] = primary_color
data['secondary_color'] = secondary_color
template = get_template('report/template.html')
html = template.render(data)
pdf = HTML(string=html).write_pdf()
if 'download' in request.GET:
response = HttpResponse(pdf, content_type='application/octet-stream')
else:
response = HttpResponse(pdf, content_type='application/pdf')
return response
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | In Django templates you can use `|length`
you can use `{{all_vulnerabilities|length}}` | yogeshojha | 0
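A minimal sketch of the reviewer's suggestion, assuming the report template still receives the `all_vulnerabilities` queryset in its context (the `all_vulnerabilities_count` key is the value added in `create_report` above):

```django
{# Count computed in the template with the built-in |length filter #}
<p>Total findings: {{ all_vulnerabilities|length }}</p>

{# Equivalent output using the count precomputed in the view context #}
<p>Total findings: {{ all_vulnerabilities_count }}</p>
```

`|length` simply calls `len()` on the queryset, which evaluates and caches it, whereas `vulns.count()` in the view issues a separate `COUNT(*)` query; since the report template iterates the full queryset anyway, either approach yields the same number.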
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, informational vulnerability data is still displayed.
I've reworked the queries that display vulnerabilities to prevent info vulns from being displayed in the:
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
I've also fixed the **Vulnerabilities Discovered** listing by looping correctly through the regrouped values, because entries with the same path but different severities were not displayed properly (sketched below).
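A rough sketch of that regrouped loop, assuming the `all_vulnerabilities` context variable and the `get_path` model method that `template.html` already uses; the surrounding markup is illustrative only:

```django
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for group in grouped_vulnerabilities %}
  {# group.grouper is the shared path; group.list holds every finding on that path #}
  <h4>{{ group.list.0.name }} in {{ group.grouper }}</h4>
  <ul>
    {% for vuln in group.list %}
      {# iterate the inner list so findings sharing a path but differing in severity all appear #}
      <li>{{ vuln.http_url }} (severity {{ vuln.severity }})</li>
    {% endfor %}
  </ul>
{% endfor %}
```

Note that `{% regroup %}` only groups consecutive items, so for clean groups the queryset should be ordered by the same key it is regrouped on.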
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/templates/report/template.html | <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
        font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
        font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
      <p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{scan_object.get_vulnerability_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
        Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
      <h4>{{scan_object.get_subdomain_count}} subdomains were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
        A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}}:
        {{scan_object.get_critical_vulnerability_count}} of them were Critical,
        {{scan_object.get_high_vulnerability_count}} of them were High severity,
        {{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
        {{scan_object.get_low_vulnerability_count}} of them were Low severity,
        {{scan_object.get_info_vulnerability_count}} of them were Informational, and
        {{scan_object.get_unknown_vulnerability_count}} of them were of Unknown severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerability in grouped_vulnerabilities %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.list.0.name.split|join:'_'}}">
<span>{{vulnerability.list.0.name}}
<br>in {{vulnerability.grouper}}</span>
{% if vulnerability.list.0.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.list.0.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.list.0.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.list.0.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.list.0.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.list.0.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.list.0.source|upper}}</span><br>
            {% if vulnerability.list.0.cvss_metrics or vulnerability.list.0.cvss_score or vulnerability.list.0.cve_ids.all or vulnerability.list.0.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.list.0.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.list.0.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.list.0.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.list.0.cvss_score}}</span>
{% endif %}
{% if vulnerability.list.0.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.list.0.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.list.0.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.list.0.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.list.0.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.list.0.description|linebreaks}}
{% endif %}
{% if vulnerability.list.0.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.list.0.impact|linebreaks}}
{% endif %}
{% if vulnerability.list.0.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.list.0.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
{% for vuln in vulnerability.list %}
<li class="text-blue">{{vuln.http_url}}</li>
{% endfor %}
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.list.0.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.list.0.references.all %}
<li>
<span class="text-blue"> {{ref}} </span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
            font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
            font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
        <p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{all_vulnerabilities_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
            Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities_without_info %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
        A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}}:
        {{scan_object.get_critical_vulnerability_count}} of them were Critical,
        {{scan_object.get_high_vulnerability_count}} of them were High severity,
        {{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
        {{scan_object.get_low_vulnerability_count}} of them were Low severity,
        {% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %} of them were Informational, and
        {{scan_object.get_unknown_vulnerability_count}} of them were of Unknown severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
{% for vulnerability in vulnerabilities.list %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.name.split|join:'_'}}">
<span>{{vulnerability.name}}
<br>in {{vulnerabilities.grouper}}</span>
{% if vulnerability.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.source|upper}}</span><br>
            {% if vulnerability.cvss_metrics or vulnerability.cvss_score or vulnerability.cve_ids.all or vulnerability.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.cvss_score}}</span>
{% endif %}
{% if vulnerability.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.description|linebreaks}}
{% endif %}
{% if vulnerability.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.impact|linebreaks}}
{% endif %}
{% if vulnerability.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
<li class="text-blue"><a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a></li>
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.references.all %}
<li>
<span class="text-blue"><a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a></span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | ## Potentially unsafe external link
External links without noopener/noreferrer are a potential security risk.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/171) | github-advanced-security[bot] | 1 |
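For context on the code-scanning finding above: the usual mitigation is to add `rel="noopener noreferrer"` to any link that opens in a new tab, which is what the updated template does for vulnerability and reference URLs. A minimal sketch, reusing the `{{ref}}` variable from the template above:

```html
<!-- External URL opened in a new tab: rel="noopener noreferrer" stops the target
     page from reaching back to this window via window.opener and suppresses the
     Referer header. -->
<a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a>
```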
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, informational vulnerability data still shows up in the generated report.
I've reworked the queries that display vulnerabilities to prevent info vulns from appearing in the:
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
I've also fixed the **Vulnerabilities Discovered** listing by looping correctly through the regrouped values, because entries with the same path but different severities did not display properly.
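Roughly, the fix has two parts, sketched below with the same template variables used elsewhere in this file: the info counters are wrapped in the `is_ignore_info_vuln` flag, and the regrouped listing iterates every finding in each path group instead of rendering only the first one.

```django
{# Hide informational counts when the scan ignores info-level findings #}
{% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %}

{# Group findings by path, then render every finding in each group #}
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
    {% for vulnerability in vulnerabilities.list %}
        {{ vulnerability.name }} ({{ vulnerability.severity }}) in {{ vulnerabilities.grouper }}
    {% endfor %}
{% endfor %}
```

The nested loop is what lets two findings that share a path but differ in severity render as separate entries rather than collapsing into the first one.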
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/templates/report/template.html | <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
            font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
            font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
        <p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{scan_object.get_vulnerability_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
            Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
        A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}}:
        {{scan_object.get_critical_vulnerability_count}} of them were Critical,
        {{scan_object.get_high_vulnerability_count}} of them were High severity,
        {{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
        {{scan_object.get_low_vulnerability_count}} of them were Low severity,
        {{scan_object.get_info_vulnerability_count}} of them were Informational, and
        {{scan_object.get_unknown_vulnerability_count}} of them were of Unknown severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerability in grouped_vulnerabilities %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.list.0.name.split|join:'_'}}">
<span>{{vulnerability.list.0.name}}
<br>in {{vulnerability.grouper}}</span>
{% if vulnerability.list.0.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.list.0.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.list.0.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.list.0.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.list.0.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.list.0.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.list.0.source|upper}}</span><br>
            {% if vulnerability.list.0.cvss_metrics or vulnerability.list.0.cvss_score or vulnerability.list.0.cve_ids.all or vulnerability.list.0.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.list.0.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.list.0.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.list.0.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.list.0.cvss_score}}</span>
{% endif %}
{% if vulnerability.list.0.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.list.0.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.list.0.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.list.0.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.list.0.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.list.0.description|linebreaks}}
{% endif %}
{% if vulnerability.list.0.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.list.0.impact|linebreaks}}
{% endif %}
{% if vulnerability.list.0.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.list.0.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
{% for vuln in vulnerability.list %}
<li class="text-blue">{{vuln.http_url}}</li>
{% endfor %}
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.list.0.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.list.0.references.all %}
<li>
<span class="text-blue"> {{ref}} </span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
            font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
            font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{all_vulnerabilities_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities_without_info %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High Severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown Severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
{% for vulnerability in vulnerabilities.list %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.name.split|join:'_'}}">
<span>{{vulnerability.name}}
<br>in {{vulnerabilities.grouper}}</span>
{% if vulnerability.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.source|upper}}</span><br>
{% if vulnerability.cvss_metrics or vulnerability.cvss_score or vulnerability.cve_ids.all or vulnerability.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.cvss_score}}</span>
{% endif %}
{% if vulnerability.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.description|linebreaks}}
{% endif %}
{% if vulnerability.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.impact|linebreaks}}
{% endif %}
{% if vulnerability.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
<li class="text-blue"><a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a></li>
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.references.all %}
<li>
<span class="text-blue"><a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a></span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | ## Potentially unsafe external link
External links without noopener/noreferrer are a potential security risk.
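A minimal sketch of the remediation this alert points to, using the same anchor the template already renders (the attribute pair is what the later revision of the file applies):

```html
<!-- External link opened in a new tab: rel="noopener noreferrer" stops the
     target page from reaching back to the report via window.opener -->
<a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a>
```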
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/172) | github-advanced-security[bot] | 2 |
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, informational vulnerability data still shows up in the generated report.
I've reworked the queries that display vulnerabilities so that info vulns no longer appear in (see the template sketch after this list):
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
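A rough sketch of the guards this rework leans on, as they appear in the updated template (`is_ignore_info_vuln` and `get_vulnerabilities_without_info` are assumed to be supplied by the view and model layer):

```html
{# Info counter box: report 0 when informational findings are ignored #}
{% if is_ignore_info_vuln %}
  0
{% else %}
  {{ scan_object.get_info_vulnerability_count }}
{% endif %}

{# Reconnaissance Findings: list only non-informational vulnerabilities per subdomain #}
{% if subdomain.get_vulnerabilities_without_info %}
  {% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
  <ul>
    {% for vulnerability in vuln_list %}
      <li>{{ vulnerability.grouper }}</li>
    {% endfor %}
  </ul>
{% endif %}
```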
I've also fixed the **Vulnerabilities Discovered** listing by looping correctly through the regrouped values, because entries with the same path but different severities did not display well.
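Concretely, the corrected listing iterates every entry of each regrouped path instead of rendering only the first one; a sketch along the lines of the updated template:

```html
{# One block per vulnerability, so findings sharing a path but differing in severity all render #}
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
  {% for vulnerability in vulnerabilities.list %}
    <h4 id="vuln_{{ vulnerability.name.split|join:'_' }}">
      {{ vulnerability.name }}<br>in {{ vulnerabilities.grouper }}
    </h4>
    {# severity badge, description, impact and remediation are rendered per vulnerability here #}
  {% endfor %}
{% endfor %}
```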
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/templates/report/template.html | <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{scan_object.get_vulnerability_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High Severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{{scan_object.get_info_vulnerability_count}} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown Severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerability in grouped_vulnerabilities %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.list.0.name.split|join:'_'}}">
<span>{{vulnerability.list.0.name}}
<br>in {{vulnerability.grouper}}</span>
{% if vulnerability.list.0.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.list.0.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.list.0.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.list.0.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.list.0.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.list.0.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.list.0.source|upper}}</span><br>
{% if vulnerability.list.0.cvss_metrics or vulnerability.list.0.cvss_score or vulnerability.list.0.cve_ids.all or vulnerability.list.0.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.list.0.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.list.0.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.list.0.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.list.0.cvss_score}}</span>
{% endif %}
{% if vulnerability.list.0.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.list.0.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.list.0.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.list.0.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.list.0.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.list.0.description|linebreaks}}
{% endif %}
{% if vulnerability.list.0.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.list.0.impact|linebreaks}}
{% endif %}
{% if vulnerability.list.0.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.list.0.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
{% for vuln in vulnerability.list %}
<li class="text-blue">{{vuln.http_url}}</li>
{% endfor %}
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.list.0.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.list.0.references.all %}
<li>
<span class="text-blue"> {{ref}} </span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{all_vulnerabilities_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities_without_info %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
{% for vulnerability in vulnerabilities.list %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.name.split|join:'_'}}">
<span>{{vulnerability.name}}
<br>in {{vulnerabilities.grouper}}</span>
{% if vulnerability.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.source|upper}}</span><br>
{% if vulnerability.cvss_metrics or vulnerability.cvss_score or vulnerability.cve_ids.all or vulnerability.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.cvss_score}}</span>
{% endif %}
{% if vulnerability.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.description|linebreaks}}
{% endif %}
{% if vulnerability.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.impact|linebreaks}}
{% endif %}
{% if vulnerability.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
<li class="text-blue"><a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a></li>
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.references.all %}
<li>
<span class="text-blue"><a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a></span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | Fixed | psyray | 3 |
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, informational vulnerability data still appears in the generated report.
I've reworked the queries that display vulnerabilities to prevent info vulns from appearing in the:
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
I've also fixed the **Vulnerabilities Discovered** listing by looping correctly through the regrouped values, because entries with the same path but different severities did not display well; a rough sketch of the query-side change is shown below.
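A minimal sketch of the query-side idea, assuming a Django QuerySet of vulnerabilities with an integer `severity` field where `0` marks informational findings; the function name `filter_report_vulnerabilities` and the `ignore_info` flag are illustrative, not the actual reNgine code:

```python
INFO_SEVERITY = 0  # assumption: severity value 0 marks informational findings


def filter_report_vulnerabilities(vulnerabilities, ignore_info=False):
    """Return the vulnerabilities that should appear in the report.

    `vulnerabilities` is assumed to be a Django QuerySet; when the
    "Ignore Informational Vulnerabilities" option is checked, info-level
    findings are excluded before the data reaches the template.
    """
    if ignore_info:
        vulnerabilities = vulnerabilities.exclude(severity=INFO_SEVERITY)
    return vulnerabilities
```

The same flag, exposed to the template as `is_ignore_info_vuln`, then drives the Info counters in the report, as the `{% if is_ignore_info_vuln %}` conditionals in the updated template do.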
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/templates/report/template.html | <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{scan_object.get_vulnerability_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High Severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{{scan_object.get_info_vulnerability_count}} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown Severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerability in grouped_vulnerabilities %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.list.0.name.split|join:'_'}}">
<span>{{vulnerability.list.0.name}}
<br>in {{vulnerability.grouper}}</span>
{% if vulnerability.list.0.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.list.0.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.list.0.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.list.0.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.list.0.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.list.0.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.list.0.source|upper}}</span><br>
{% if vulnerability.list.0.cvss_metrics or vulnerability.list.0.cvss_score or vulnerability.list.0.cve_ids.all or vulnerability.list.0.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.list.0.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.list.0.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.list.0.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.list.0.cvss_score}}</span>
{% endif %}
{% if vulnerability.list.0.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.list.0.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.list.0.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.list.0.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.list.0.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.list.0.description|linebreaks}}
{% endif %}
{% if vulnerability.list.0.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.list.0.impact|linebreaks}}
{% endif %}
{% if vulnerability.list.0.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.list.0.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
{% for vuln in vulnerability.list %}
<li class="text-blue">{{vuln.http_url}}</li>
{% endfor %}
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.list.0.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.list.0.references.all %}
<li>
<span class="text-blue"> {{ref}} </span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{all_vulnerabilities_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities_without_info %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
{% for vulnerability in vulnerabilities.list %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.name.split|join:'_'}}">
<span>{{vulnerability.name}}
<br>in {{vulnerabilities.grouper}}</span>
{% if vulnerability.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.source|upper}}</span><br>
{% if vulnerability.cvss_metrics or vulnerability.cvss_score or vulnerability.cve_ids.all or vulnerability.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.cvss_score}}</span>
{% endif %}
{% if vulnerability.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.description|linebreaks}}
{% endif %}
{% if vulnerability.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.impact|linebreaks}}
{% endif %}
{% if vulnerability.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
<li class="text-blue"><a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a></li>
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.references.all %}
<li>
<span class="text-blue"><a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a></span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | Fixed | psyray | 4 |
yogeshojha/rengine | 1,100 | Fix report generation when `Ignore Informational Vulnerabilities` checked | When **Ignore Informational Vulnerabilities** is checked, informational vulnerability data still appears in the generated report.
I've reworked the queries that display vulnerabilities to prevent info vulns from appearing in the:
- **Quick summary** Info blue box
- **Reconnaissance Findings**
- **Vulnerabilities Discovered** Info blue box
I've also fixed the **Vulnerabilities Discovered** listing by looping correctly through the regrouped values, because entries with the same path but different severities did not display well; a sketch of a possible model-side helper follows below.
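For the **Reconnaissance Findings** section, the updated template calls `subdomain.get_vulnerabilities_without_info` instead of `subdomain.get_vulnerabilities`. A minimal sketch of how such a helper could look on the Subdomain model, assuming a reverse relation named `vulnerability_set` and severity `0` for informational findings (the actual reNgine model almost certainly differs in its fields and relations):

```python
from django.db import models


class Subdomain(models.Model):
    # Illustrative stub only; the real reNgine Subdomain model has many more fields.
    name = models.CharField(max_length=1000)

    def get_vulnerabilities(self):
        # every finding linked to this subdomain
        return self.vulnerability_set.all()

    def get_vulnerabilities_without_info(self):
        # same queryset, with informational findings (severity == 0) dropped
        return self.vulnerability_set.exclude(severity=0)
```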
Tested and working on current master branch | null | 2023-12-05 01:25:41+00:00 | 2023-12-08 05:48:36+00:00 | web/templates/report/template.html | <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{scan_object.get_vulnerability_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High Severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{{scan_object.get_info_vulnerability_count}} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown Severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{{scan_object.get_info_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerability in grouped_vulnerabilities %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.list.0.name.split|join:'_'}}">
<span>{{vulnerability.list.0.name}}
<br>in {{vulnerability.grouper}}</span>
{% if vulnerability.list.0.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.list.0.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.list.0.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.list.0.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.list.0.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.list.0.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.list.0.source|upper}}</span><br>
{% if vulnerability.list.0.cvss_metrics or vulnerability.list.0.cvss_score or vulnerability.list.0.cve_ids.all or vulnerability.list.0.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.list.0.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.list.0.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.list.0.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.list.0.cvss_score}}</span>
{% endif %}
{% if vulnerability.list.0.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.list.0.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.list.0.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.list.0.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.list.0.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.list.0.description|linebreaks}}
{% endif %}
{% if vulnerability.list.0.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.list.0.impact|linebreaks}}
{% endif %}
{% if vulnerability.list.0.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.list.0.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
{% for vuln in vulnerability.list %}
<li class="text-blue">{{vuln.http_url}}</li>
{% endfor %}
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.list.0.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.list.0.references.all %}
<li>
<span class="text-blue"> {{ref}} </span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| <html>
<head>
<meta charset="utf-8">
<title>Report</title>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@100;200;300;400;500&display=swap" rel="stylesheet">
<style>
@page {
size: A4;
@top-left {
background: {{primary_color}};
content: counter(page);
height: 1cm;
text-align: center;
width: 1cm;
}
@top-center {
background: {{primary_color}};
content: '';
display: block;
height: .05cm;
opacity: .5;
width: 100%;
}
@top-right {
content: string(heading);
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% if show_footer %}
@bottom-left {
content: "{{footer_text}}";
font-size: 9pt;
height: 1cm;
vertical-align: middle;
width: 100%;
}
{% endif %}
}
@page :blank {
@top-left {
background: none;
content: ''
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page no-chapter {
@top-left {
background: none;
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
@page :first {
background-color: {{secondary_color}};
background-size: cover;
margin: 0;
}
@page chapter {
background: {{primary_color}};
margin: 0;
@top-left {
content: none
}
@top-center {
content: none
}
@top-right {
content: none
}
}
html {
color: #393939;
font-family: 'Inter';
font-weight: 300;
font-size: 11pt;
font-weight: 300;
line-height: 1.5;
}
h1 {
font-family: 'Inter';
font-weight: 200;
font-size: 38pt;
margin: 5cm 2cm 0 2cm;
page: no-chapter;
width: 100%;
line-height: normal;
}
h2,
h3,
h4 {
font-family: 'Inter';
font-weight: 200;
color: black;
font-weight: 400;
line-height: normal;
}
#cover {
align-content: space-between;
display: flex;
flex-wrap: wrap;
height: 297mm;
}
#cover-subheading {
font-family: 'Inter';
font-weight: 200;
font-size: 22pt;
width: 100%;
}
#cover footer {
background: {{primary_color}};
flex: 1 33%;
margin: 0 -2cm;
padding: 1cm 0;
white-space: pre-wrap;
}
#cover footer:first-of-type {
padding-left: 3cm;
}
#cover-line {
margin-top: 6px;
border-bottom: 1px double {{primary_color}};
}
#summary {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#contents {
page: no-chapter;
}
#contents h2 {
font-size: 20pt;
font-weight: 400;
margin-bottom: 3cm;
}
#contents h3 {
font-weight: 400;
margin: 3em 0 1em;
}
#contents h3::before {
background: {{primary_color}};
content: '';
display: block;
height: .08cm;
margin-bottom: .25cm;
width: 2cm;
}
#contents ul {
list-style: none;
padding-left: 0;
}
#contents ul li {
border-top: .25pt solid #c1c1c1;
margin: .25cm 0;
padding-top: .25cm;
}
#contents ul li::before {
color: {{primary_color}};
content: '• ';
font-size: 30pt;
line-height: 16pt;
vertical-align: bottom;
}
#contents ul li a {
color: inherit;
text-decoration-line: inherit;
}
#contents ul li a::before {
content: target-text(attr(href));
}
#contents ul li a::after {
color: {{primary_color}};
content: target-counter(attr(href), page);
float: right;
}
#columns section {
columns: 2;
column-gap: 1cm;
padding-top: 1cm;
}
#columns section p {
text-align: justify;
}
#columns section p:first-of-type {
font-weight: 700;
}
#chapter {
align-items: center;
display: flex;
height: 297mm;
justify-content: center;
page: chapter;
}
#boxes {
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
#boxes section h4 {
margin-bottom: 0;
}
#boxes section p {
background: {{primary_color}};
display: block;
font-size: 15pt;
margin-bottom: 0;
padding: .25cm 0;
text-align: center;
height: 85px;
color: #37474F;
}
.bg-critical {
background: #EF9A9A !important;
}
.bg-high {
background: #FFAB91 !important;
}
.bg-medium {
background: #FFCC80 !important;
}
.bg-low {
background: #FFE082 !important;
}
.bg-success {
background-color: #A5D6A7 !important;
}
.bg-grey {
background-color: #B0BEC5 !important;
}
.bg-info {
background-color: #90CAF9 !important;
}
.critical-color {
color: #EF9A9A;
}
.high-color {
color: #dc3545;
}
.medium-color {
color: #FFCC80;
}
.low-color {
color: #FFE082;
}
.success-color {
color: #A5D6A7;
}
.grey-color {
color: #212121;
}
.info-color {
color: #90CAF9;
}
.primary-color {
color: {{primary_color}};
}
.text-blue{
color: #007bff!important;
}
.badge {
display: inline-block;
padding-left: 12px;
padding-right: 12px;
text-align: center
}
.critical-hr-line {
border-bottom: 3px solid #EF9A9A !important;
}
.high-hr-line {
border-bottom: 3px solid #FFAB91 !important;
}
.medium-hr-line {
border-bottom: 3px solid #FFCC80 !important;
}
.low-hr-line {
border-bottom: 3px solid #FFE082 !important;
}
.info-hr-line {
border-bottom: 3px solid #90CAF9 !important;
}
.grey-hr-line {
border-bottom: 3px solid #212121 !important;
}
.inside-box-counter {
font-size: 28pt;
}
.table {
margin: 0 0 40px 0;
width: 100%;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.2);
display: table;
border-spacing: 0 0.4em;
}
.row {
display: table-row;
background: #f6f6f6;
}
.cell {
padding: 6px 6px 6px 6px;
display: table-cell;
}
.header {
font-weight: 900;
color: #ffffff;
}
.page_title{
font-weight: 300;
font-size: 20pt;
}
.subheading{
font-weight: 300;
font-size: 14pt;
}
.content-heading{
font-weight: 300;
font-size: 12pt;
}
.mini-heading{
font-weight: 400;
font-size: 11pt;
}
.table-border{
border-style:solid;
border-width: 1px;
border-color: #90CAF9 !important;
}
a{
color: #007bff;
text-decoration: none;
}
.ml-8{
margin-left: 8px;
}
</style>
</head>
<body>
<article id="cover">
<h1 style="color:{{primary_color}}">{{report_name}}
<br>
{{scan_object.domain.name}}
<div id="cover-line"></div>
{# generated date #}
<span id="cover-subheading">{% now "F j, Y" %}</span>
</h1>
<footer>
{{company_name}}
{{company_address}}
</footer>
<footer>
{{company_email}}
{{company_website}}
</footer>
<footer>
{% if show_rengine_banner %}Generated by reNgine
https://github.com/yogeshojha/rengine
{% endif %}
</footer>
</article>
<article id="contents">
<h2> </h2>
<h3>Table of contents</h3>
<ul>
{% if show_executive_summary %}
<li><a href="#executive-summary"></a></li>
{% endif %}
<li><a href="#quick-summary"></a></li>
<li><a href="#assessment-timeline"></a></li>
{% if interesting_subdomains and show_recon %}
<li><a href="#interesting-recon-data"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerability-summary"></a></li>
{% endif %}
{% if show_recon %}
<li><a href="#reconnaissance-results"></a></li>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<li><a href="#vulnerabilities-discovered"></a></li>
{% endif %}
</ul>
</article>
{% if show_executive_summary %}
<article id="summary" style="page-break-before: always">
<h2 id="executive-summary" class="page_title">Executive summary</h2>
<br>
{{executive_summary_description | safe }}
</article>
{% endif %}
<article id="summary" style="page-break-before: always">
<h2 id="quick-summary" class="page_title">Quick Summary</h2>
<p>This section contains a quick summary of the scan performed on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<br>
</article>
{# recon section #}
{% if show_recon %}
<h4 id="reconnaissance-summary" class="subheading">Reconnaissance</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-success">Subdomains
<br>
<span class="inside-box-counter">
{{scan_object.get_subdomain_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Endpoints
<br>
<span class="inside-box-counter">
{{scan_object.get_endpoint_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-critical">Vulnerabilities
<br>
<span class="inside-box-counter">
{{all_vulnerabilities_count}}
</span>
</p>
</section>
</div>
{% endif %}
<!-- vulnerability section, hide if only recon report -->
{% if show_vuln %}
<article>
<br>
<h4 id="vulnerability-summary" class="subheading">Vulnerability Summary</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width:30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{% endif %}
<article>
<h3 id="assessment-timeline" class="page_title">Timeline of the Assessment</h3>
<p>
Scan started on: {{scan_object.start_scan_date|date:"F j, Y h:i"}}
<br>
Total time taken:
{% if scan_object.scan_status == 0 %}
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }}
{% elif scan_object.scan_status == 1 %}
{{ scan_object.get_elapsed_time }}
{% elif scan_object.scan_status == 2 %}
{% if scan_object.get_completed_time_in_sec < 60 %}
Completed in < 1 minute {% else %} Completed in {{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} {% elif scan_object.scan_status == 3 %} Aborted in
{{ scan_object.start_scan_date|timesince:scan_object.stop_scan_date }} {% endif %} <br>
Report Generated on: {% now "F j, Y" %}
</p>
</article>
{# show interesting_subdomains section only when show_recon result is there #}
{% if interesting_subdomains and show_recon %}
<article style="page-break-before: always" class="summary">
<h3 id="interesting-recon-data" class="page_title">Interesting Recon Data</h3>
<p>Listed below are the {{interesting_subdomains.count}} interesting subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-success">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 33%">
Subdomain
</div>
<div class="cell grey-color" style="width: 33%">
Page title
</div>
<div class="cell grey-color" style="width: 15%">
HTTP Status
</div>
</div>
{% for subdomain in interesting_subdomains %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 35%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 35%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% else %}
{% endif %}
</div>
<div class="cell" style="width: 15%;">
{% if subdomain.http_status %}
{{subdomain.http_status}}
{% else %}
{% endif %}
</div>
</div>
{% endfor %}
</div>
</article>
{% endif %}
{# vulnerability_summary only when vuln_report #}
{% if show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerability-summary" class="page_title">Summary of Vulnerabilities Identified</h3>
{% if all_vulnerabilities.count > 0 %}
<p>Listed below are the vulnerabilities identified on <span class="primary-color">{{scan_object.domain.name}}</span></p>
<div class="table">
<div class="row header bg-critical">
<div class="cell grey-color" style="width: 5%">
#
</div>
<div class="cell grey-color" style="width: 50%;">
Vulnerability Name
</div>
<div class="cell grey-color" style="width: 19%;">
Times Identified
</div>
<div class="cell grey-color" style="width: 15%">
Severity
</div>
</div>
{% for vulnerability in unique_vulnerabilities %}
<div class="row">
<div class="cell" style="width: 5%">
{{ forloop.counter }}
</div>
<div class="cell" style="width: 50%">
<a href="#vuln_{{vulnerability.name.split|join:'_'}}">{{vulnerability.name}}</a>
</div>
<div class="cell" style="float: right; width: 19%;">
{{vulnerability.count}}
</div>
{% if vulnerability.severity == -1 %}
<div class="cell bg-grey" style="width: 15%">
<span class="severity-title-box">Unknown</span>
{% elif vulnerability.severity == 0 %}
<div class="cell bg-info" style="width: 15%">
<span class="severity-title-box">Informational</span>
{% elif vulnerability.severity == 1 %}
<div class="cell bg-low" style="width: 15%">
<span class="severity-title-box">Low</span>
{% elif vulnerability.severity == 2 %}
<div class="cell bg-medium" style="width: 15%">
<span class="severity-title-box">Medium</span>
{% elif vulnerability.severity == 3 %}
<div class="cell bg-high" style="width: 15%">
<span class="severity-title-box">High</span>
{% elif vulnerability.severity == 4 %}
<div class="cell bg-critical" style="width: 15%">
<span class="severity-title-box">Critical</span>
{% endif %}
</div>
</div>
{% endfor %}
{% else %}
<h3 class='info-color'>No Vulnerabilities were Discovered.</h3>
{% endif %}
</div>
</article>
{% endif %}
{# show discovered assets only for show_recon report #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 id="reconnaissance-results" class="page_title">Discovered Assets</h3>
<h4 class="subheading">Subdomains</h4>
<p>
During the reconnaissance phase, {{scan_object.get_subdomain_count}} subdomains were discovered.
Out of {{scan_object.get_subdomain_count}} subdomains, {{subdomain_alive_count}} returned HTTP status 200.
{{interesting_subdomains.count}} interesting subdomains were also identified based on the interesting keywords used.
</p>
<h4>{{scan_object.get_subdomain_count}} subdomains identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
Subdomain
</div>
<div class="cell grey-color" style="width: 38%">
Page title
</div>
<div class="cell grey-color" style="width: 18%">
HTTP Status
</div>
</div>
{% for subdomain in subdomains %}
<div class="row">
<div class="cell" style="width: 38%">
{{subdomain.name}}
</div>
<div class="cell" style="width: 38%">
{% if subdomain.page_title %}
{{subdomain.page_title}}
{% endif %}
</div>
<div class="cell" style="width: 18%">
{{subdomain.http_status}}
</div>
</div>
{% endfor %}
</div>
{% if ip_addresses.count %}
<h4 class="subheading" style="margin-top: 10px;">IP Addresses</h4>
<h4>{{ip_addresses.count}} IP Addresses were identified on <span class="primary-color">{{scan_object.domain.name}}</span></h4>
<div class="table">
<div class="row header bg-info">
<div class="cell grey-color" style="width: 38%">
IP
</div>
<div class="cell grey-color" style="width: 38%">
Open Ports
</div>
<div class="cell grey-color" style="width: 18%">
Remarks
</div>
</div>
{% for ip in ip_addresses %}
<div class="row">
<div class="cell" style="width: 38%">
{{ip.address}}
</div>
<div class="cell" style="width: 38%">
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</div>
{% if ip.is_cdn %}
<div class="cell medium" style="width: 18%">
CDN IP Address
{% else %}
<div class="cell" style="width: 18%">
{% endif %}
</div>
</div>
{% endfor %}
</div>
{% endif %}
</article>
<br>
{% endif %}
{# reconnaissance finding only when show_recon #}
{% if show_recon %}
<article class="summary" style="page-break-before: always">
<h3 class="page_title">Reconnaissance Findings</h3>
{% for subdomain in subdomains %}
<table class="table" cellspacing="0" style="border-collapse: collapse;">
<tr>
<td style="width: 2%" class="cell table-border">{{ forloop.counter }}.</td>
<td style="width: 80%" class="cell table-border">{{subdomain.name}}</td>
{% if subdomain.http_status == 200 %}
<td style="width: 10%" class="cell table-border bg-success">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 300 and subdomain.http_status < 400 %}
<td style="width: 10%" class="cell table-border bg-medium">{{subdomain.http_status}}</td>
{% elif subdomain.http_status >= 400 %}
<td style="width: 10%" class="cell table-border bg-high">{{subdomain.http_status}}</td>
{% elif subdomain.http_status == 0 %}
<td style="width: 10%" class="cell table-border">N/A</td>
{% else %}
<td style="width: 10%" class="cell table-border">{{subdomain.http_status}}</td>
{% endif %}
</tr>
{% if subdomain.page_title %}
<tr>
<td colspan="3" class="cell table-border"><strong>Page Title: </strong>{{subdomain.page_title}}</td>
</tr>
{% endif %}
{% if subdomain.ip_addresses.all %}
<tr>
<td colspan="3" class="cell table-border">
IP Address:
<ul>
{% for ip in subdomain.ip_addresses.all %}
<li>{{ip.address}}
{% if ip.ports.all %}
<ul>
<li>Open Ports:
{% for port in ip.ports.all %}
{{port.number}}/{{port.service_name}}{% if not forloop.last %},{% endif %}
{% endfor %}
</li>
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
{% if subdomain.get_vulnerabilities_without_info %}
<tr>
<td colspan="3" class="cell table-border">
Vulnerabilities
{% regroup subdomain.get_vulnerabilities_without_info by name as vuln_list %}
<ul>
{% for vulnerability in vuln_list %}
<li>
<a href="#vuln_{{vulnerability.list.0.name.split|join:'_'}}">{{ vulnerability.grouper }}</a>
</li>
{% endfor %}
</ul>
</td>
</tr>
{% endif %}
</table>
{% endfor %}
</article>
{% endif %}
{% if all_vulnerabilities.count > 0 and show_vuln %}
<article style="page-break-before: always" class="summary">
<h3 id="vulnerabilities-discovered" class="page_title">Vulnerabilities Discovered</h3>
<p>
This section reports the security issues found during the audit.
<br>
A total of {{scan_object.get_vulnerability_count}} vulnerabilities were discovered in {{scan_object.domain.name}},
{{scan_object.get_critical_vulnerability_count}} of them were Critical,
{{scan_object.get_high_vulnerability_count}} of them were High Severity,
{{scan_object.get_medium_vulnerability_count}} of them were Medium severity,
{{scan_object.get_low_vulnerability_count}} of them were Low severity, and
{% if is_ignore_info_vuln %}0{% else %}{{scan_object.get_info_vulnerability_count}}{% endif %} of them were Informational.
{{scan_object.get_unknown_vulnerability_count}} of them were Unknown Severity.
</p>
<h4 class="subheading">Vulnerability Breakdown by Severity</h4>
<div id="boxes">
<section style="width: 30%">
<p class="bg-critical">Critical
<br>
<span class="inside-box-counter">
{{scan_object.get_critical_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-high">High
<br>
<span class="inside-box-counter">
{{scan_object.get_high_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-medium">Medium
<br>
<span class="inside-box-counter">
{{scan_object.get_medium_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-low">Low
<br>
<span class="inside-box-counter">
{{scan_object.get_low_vulnerability_count}}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-info">Info
<br>
<span class="inside-box-counter">
{% if is_ignore_info_vuln %}
0
{% else %}
{{scan_object.get_info_vulnerability_count}}
{% endif %}
</span>
</p>
</section>
<section style="width: 30%">
<p class="bg-grey">Unknown
<br>
<span class="inside-box-counter">
{{scan_object.get_unknown_vulnerability_count}}
</span>
</p>
</section>
</div>
</article>
{# start vulnerability #}
{% if show_vuln %}
<article class="">
{% regroup all_vulnerabilities by get_path as grouped_vulnerabilities %}
{% for vulnerabilities in grouped_vulnerabilities %}
{% for vulnerability in vulnerabilities.list %}
<div>
<h4 class="content-heading" id="vuln_{{vulnerability.name.split|join:'_'}}">
<span>{{vulnerability.name}}
<br>in {{vulnerabilities.grouper}}</span>
{% if vulnerability.severity == -1 %}
<span style="float: right;" class="badge bg-grey">Unknown</span>
<div class="grey-hr-line" ></div>
{% elif vulnerability.severity == 0 %}
<span style="float: right;" class="badge bg-info">INFO</span>
<div class="info-hr-line" ></div>
{% elif vulnerability.severity == 1 %}
<span style="float: right;" class="badge bg-low">LOW</span>
<div class="low-hr-line" ></div>
{% elif vulnerability.severity == 2 %}
<span style="float: right;" class="badge bg-medium">MEDIUM</span>
<div class="medium-hr-line" ></div>
{% elif vulnerability.severity == 3 %}
<span style="float: right;" class="badge bg-high">HIGH</span>
<div class="high-hr-line" ></div>
{% elif vulnerability.severity == 4 %}
<span style="float: right;" class="badge bg-critical">CRITICAL</span>
<div class="critical-hr-line" ></div>
{% endif %}
</h4>
<!-- show vulnerability classification -->
<span class="mini-heading">Vulnerability Source: {{vulnerability.source|upper}}</span><br>
{% if vulnerability.cvss_metrics or vulnerability.cvss_score or vulnerability.cve_ids.all or vulnerability.cwe_ids.all %}
<span class="mini-heading">Vulnerability Classification</span><br>
{% if vulnerability.cvss_metrics %}
<span class="mini-heading ml-8">CVSS Metrics: {{vulnerability.cvss_metrics}}</span>
{% endif %}
{% if vulnerability.cvss_score %}
<br>
<span class="mini-heading ml-8">CVSS Score:</span> <span class="high-color">{{vulnerability.cvss_score}}</span>
{% endif %}
{% if vulnerability.cve_ids.all %}
<br>
<span class="mini-heading ml-8">CVE IDs</span><br>
{% for cve in vulnerability.cve_ids.all %} {{cve}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
{% if vulnerability.cwe_ids.all %}
<br>
<span class="mini-heading ml-8">CWE IDs</span><br>
{% for cwe in vulnerability.cwe_ids.all %} {{cwe}}{% if not forloop.last %}, {% endif %} {% endfor %}
{% endif %}
<br>
{% endif %}
{% if vulnerability.description %}
<br>
<span class="mini-heading">Description</span><br>
{{vulnerability.description|linebreaks}}
{% endif %}
{% if vulnerability.impact %}
<br>
<span class="mini-heading">Impact</span><br>
{{vulnerability.impact|linebreaks}}
{% endif %}
{% if vulnerability.remediation %}
<br>
<span class="mini-heading">Remediation</span><br>
{{vulnerability.remediation|linebreaks}}
{% endif %}
<br>
<span class="mini-heading">Vulnerable URL(s)</span><br>
<ul>
<li class="text-blue"><a href="{{vulnerability.http_url}}" target="_blank" rel="noopener noreferrer">{{vulnerability.http_url}}</a></li>
</ul>
<!-- {% regroup vulnerability.list by http_url as vuln_http_url_list %} -->
<!-- <ul>
{% for vuln_urls in vuln_http_url_list %}
<li>{{vuln_urls.grouper}}</li>
<span class="mini-heading">Result/Findings</span><br>
{% for vuln in vuln_urls.list %}
{% if vuln.matcher_name %}
{% if not forloop.first %} • {% endif %} {{vuln.matcher_name}}
{% endif %}
{% if vuln.extracted_results %}
{% for res in vuln.extracted_results %}
{% if not forloop.first %} • {% endif %} {{res}}
{% endfor %}
{% endif %}
{% endfor %}
{% endfor %}
</ul> -->
{% if vulnerability.references.all %}
<span class="mini-heading">References</span><br>
<ul>
{% for ref in vulnerability.references.all %}
<li>
<span class="text-blue"><a href="{{ref}}" target="_blank" rel="noopener noreferrer">{{ref}}</a></span>
</li>
{% endfor %}
</ul>
{% endif %}
<br>
<br>
</div>
{% endfor %}
{% endfor %}
</article>
{% endif %}
{% endif %}
<article id="chapter">
<h2 id="chapter-title">END OF REPORT</h2>
</article>
</body>
</html>
| psyray | 4341d9834865240222a8dc72c01caaec0d7bed44 | 69231095782663fe0fe8b0e49b8aa995aa042723 | same here as well `{{all_vulnerabilities|length}}`
| yogeshojha | 5 |
yogeshojha/rengine | 1,071 | Fixes for #1033, #1026, #1027 | Fixes
- Fix Dashboard redirection error (fixes #1026)
- Fix message color (the red color made informational messages look like errors, which was confusing) (fixes #1027)
- Update nuclei for v3, and nuclei v3 requires go 1.21. (fixes #1033) https://github.com/projectdiscovery/nuclei#install-nuclei | null | 2023-11-23 10:33:54+00:00 | 2023-11-23 13:04:18+00:00 | web/scanEngine/templates/scanEngine/lookup.html | {% extends 'base/base.html' %}
{% load static %}
{% load custom_tags %}
{% block title %}
Interesting entries Lookup
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item"><a href="{% url 'scan_engine_index' current_project.slug %}">Engines</a></li>
<li class="breadcrumb-item active">Interesting Lookup</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Interesting Lookup
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-body">
<h4 class="header-title">Interesting Lookup</h4>
<p>
reNgine supports looking up interesting keywords in recon data, either in subdomains, URLs, or page titles.
You can enter the keywords to look up and reNgine will highlight the matching entries.<br>
</p>
<div class="alert alert-primary border-0 mb-4" role="alert">
Keywords are case insensitive.
</div>
<h4 class="header-title">Default Keywords</h4>
<p>reNgine will use these default keywords to find the interesting subdomains or URLs from recon data.</p>
<span class="lead">
{% for keyword in default_lookup %}
{% for key in keyword.keywords|split:"," %}
<span class="badge bg-primary"> {{key}}</span>
{% endfor %}
{% endfor %}
</span>
<h4 class="header-title mt-3">Custom Keywords</h4>
<form method="POST">
{% csrf_token %}
<label for="keywords" class="form-label">Interesting Keywords to look for</label>
{{ form.keywords }}
{# hidden value #}
{{ form.custom_type }}
<span class="text-danger">Press comma , to separate the keywords.</span>
<h4 class=" header-title mt-3">Lookup in</h4>
<div class="form-check mb-2 form-check-primary">
{{form.url_lookup}}
<label class="form-check-label" for="url_lookup">Subdomains/URLs</label>
</div>
<div class="form-check mb-2 form-check-primary">
{{form.title_lookup}}
<label class="form-check-label" for="title_lookup">Page Title</label>
</div>
<h4 class="header-title mt-3">Lookup Conditions</h6>
<span class="text-primary">reNgine will lookup the keywords only when below conditions are met.</span>
<br>
<b>Lookup only when</b>
<div class="form-check mt-2 mb-2 form-check-primary">
{{form.condition_200_http_lookup}}
<label class="form-check-label" for="condition_200_http_lookup">HTTP Status is 200</label>
</div>
<button class="btn btn-primary submit-fn mt-2 float-end" type="submit">Update Lookup</button>
</form>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
{% endblock page_level_script %}
| {% extends 'base/base.html' %}
{% load static %}
{% load custom_tags %}
{% block title %}
Interesting entries Lookup
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item"><a href="{% url 'scan_engine_index' current_project.slug %}">Engines</a></li>
<li class="breadcrumb-item active">Interesting Lookup</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Interesting Lookup
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-body">
<h4 class="header-title">Interesting Lookup</h4>
<p>
reNgine supports looking up interesting keywords in recon data, either in subdomains, URLs, or page titles.
You can enter the keywords to look up and reNgine will highlight the matching entries.<br>
</p>
<div class="alert alert-primary border-0 mb-4" role="alert">
Keywords are case insensitive.
</div>
<h4 class="header-title">Default Keywords</h4>
<p>reNgine will use these default keywords to find the interesting subdomains or URLs from recon data.</p>
<span class="lead">
{% for keyword in default_lookup %}
{% for key in keyword.keywords|split:"," %}
<span class="badge bg-primary"> {{key}}</span>
{% endfor %}
{% endfor %}
</span>
<h4 class="header-title mt-3">Custom Keywords</h4>
<form method="POST">
{% csrf_token %}
<label for="keywords" class="form-label">Interesting Keywords to look for</label>
{{ form.keywords }}
{# hidden value #}
{{ form.custom_type }}
<span class="text-muted">Please use a comma (,) to separate the keywords.</span>
<h4 class=" header-title mt-3">Lookup in</h4>
<div class="form-check mb-2 form-check-primary">
{{form.url_lookup}}
<label class="form-check-label" for="url_lookup">Subdomains/URLs</label>
</div>
<div class="form-check mb-2 form-check-primary">
{{form.title_lookup}}
<label class="form-check-label" for="title_lookup">Page Title</label>
</div>
<h4 class="header-title mt-3">Lookup Conditions</h6>
<span class="text-primary">reNgine will lookup the keywords only when below conditions are met.</span>
<br>
<b>Lookup only when</b>
<div class="form-check mt-2 mb-2 form-check-primary">
{{form.condition_200_http_lookup}}
<label class="form-check-label" for="condition_200_http_lookup">HTTP Status is 200</label>
</div>
<button class="btn btn-primary submit-fn mt-2 float-end" type="submit">Update Lookup</button>
</form>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
{% endblock page_level_script %}
| yogeshojha | 6c1ec3124b55404eae84c8ac721ad067563b9243 | b190060d07e6ed4d6bfd969481ab6d54779c09a0 | ```suggestion
<span class="text-muted">Please use a comma (,) to separate the keywords.</span>
``` | AnonymousWP | 6 |
yogeshojha/rengine | 1,063 | Fix crash on saving endpoint (FFUF related only) | Fix #1006
I've added:
- a **try/except** block in the **save_endpoint** method to catch the error raised by **get_or_create** when duplicate records exist
- a **check** on endpoint existence in the **dir_file_fuzz** method
Errors are logged to the console along with the URL. A minimal sketch of the idea is shown below.
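For illustration, a minimal sketch of such defensive handling, assuming the duplicate rows make `EndPoint.objects.get_or_create()` raise `MultipleObjectsReturned` (the helper name below is hypothetical and the exact handling in this PR may differ):

```python
# Hypothetical sketch: do not crash when duplicate EndPoint rows already exist.
import logging

from django.core.exceptions import MultipleObjectsReturned
from startScan.models import EndPoint

logger = logging.getLogger(__name__)

def save_endpoint_safe(http_url, scan_history, **defaults):
    try:
        endpoint, created = EndPoint.objects.get_or_create(
            scan_history=scan_history,
            http_url=http_url,
            defaults=defaults)
    except MultipleObjectsReturned:
        # Duplicate endpoints are already in the DB: log the offending URL
        # and reuse the first matching row instead of raising.
        logger.error(f'Duplicate endpoint records found for {http_url}')
        endpoint = EndPoint.objects.filter(
            scan_history=scan_history,
            http_url=http_url).first()
        created = False
    return endpoint, created
```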
![image](https://github.com/yogeshojha/rengine/assets/1230954/3067c8a3-f44d-4b8f-b048-d1a356d542a2)
Tested and working
Now we need to find out why there are duplicate endpoints in the DB.
But it's another issue | null | 2023-11-22 02:57:45+00:00 | 2023-11-27 12:37:27+00:00 | web/reNgine/tasks.py | import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
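    # This ctx dict is threaded through every downstream task so results get
    # attributed to the right scan, subscan, domain and engine.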
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
        ctx (dict): Scan context (scan_history_id, subscan_id, engine_id).
        description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
        logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
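    # Per-task settings in the SUBDOMAIN_DISCOVERY block override the global
    # YAML values, which in turn fall back to the built-in defaults.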
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
            if tool == 'amass-passive':
                use_amass_config = config.get(USE_AMASS_CONFIG, False)
                cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
                cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
                logger.error(f'Custom tool "{tool}" is not registered in InstalledExternalTool. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
            if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
                cmd = cmd.replace('{TARGET}', host)
                cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
                cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
            else:
                logger.error(f'Missing {{TARGET}} or {{OUTPUT}} placeholder in {tool} configuration. Skipping.')
                continue
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
    # Gather all tool results into a single output file and deduplicate/sort
    # the subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
    # Parse the output file and store the Subdomain and EndPoint objects found
    # in the DB.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
    # TODO: find root subdomain endpoints for each discovered subdomain
    # (placeholder, intentionally a no-op for now)
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
        dict: OSINT metadata plus theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
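    # job is a GroupResult; polling ready() below blocks this task until the
    # h8mail / theHarvester subtasks have finished.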
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
        email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
    strict = intensity == 'normal'
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
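        # e.g. whatportis.get_ports('443') typically returns a record named
        # 'https'; ports with no known service fall back to 'unknown'.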
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
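    # Example: with rate_limit=150 and threads=30 the computed pause is
    # 150 / (30 * 100) = 0.05 seconds, passed to ffuf via the -p flag below.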
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
        '''
        When fetching URLs above, ignore_files is set to False because some
        default URLs may redirect to a file such as https://example.com/login.php.
        For fuzzing, only the base of the URL is needed, so in that example ffuf
        is run against https://example.com/FUZZ and files are fuzzed from the
        base URL instead.
        '''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
            endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
            if not endpoint:
                continue
            # endpoint.is_default = False
            endpoint.http_status = status
            endpoint.content_type = content_type
            endpoint.content_length = length
            endpoint.response_time = duration / 1000000000
            endpoint.save()
            if created:
                urls.append(endpoint.http_url)
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
    if custom_header:
        header_string = ';;'.join([
            f'{key}: {value}' for key, value in custom_header.items()
        ])
        cmd_map['hakrawler'] += f' -h "{header_string}"'
        cmd_map['katana'] += f' -H "{header_string}"'
        # gospider expects one -H flag per header
        for flag in header_string.split(';;'):
            cmd_map['gospider'] += f' -H "{flag}"'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
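    # Each entry now looks like, e.g. for gau:
    #   cat <input_path> | gau --threads N | grep -Eo <host_regex> > <results_dir>/urls_gau.txt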
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
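    # chord(tasks)(cleanup) runs the fetcher commands in parallel and fires the
    # cleanup chain once all of them have finished; task.get() then blocks
    # until cleanup is done.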
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
    # Some tools output a URL in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
        if not validators.url(url):
            logger.warning(f'Invalid URL "{url}". Skipping.')
            continue
        if url not in all_urls:
            all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
    CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
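    # Matches the status line of curl's raw response headers,
    # e.g. "HTTP/1.1 200 OK" -> captures "200".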
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
    '''
    Run a nuclei scan for a single severity level.
    One instance of this task is launched per supplied severity so that all
    severities run in parallel as grouped Celery tasks.
    '''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
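        # Severity integers map to: -1 unknown, 0 info, 1 low, 2 medium, 3 high, 4 critical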
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
URLs are unfurled to keep only the domain and path before being sent to the
vulnerability scan; certain file extensions are ignored. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' -H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
output_file (str, optional): File path to dump the parsed results as JSON.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
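# Example usage (illustrative, with a hypothetical data dict; exclude_keys skips
# volatile fields from the lookup):
#   if not record_exists(Vulnerability, data=vuln_data, exclude_keys=['description']):
#       save_vulnerability(**vuln_data)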
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'IPv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if domain info already exists in the DB. Defaults to False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
scan_id (int, optional): ScanHistory id to attach the command record to.
activity_id (int, optional): ScanActivity id to attach the command record to.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell,
cwd=cwd)
# Log the output in real-time to the database
output = ""
# Process the output
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
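# Illustrative usage (command is a placeholder): stream_command() is a generator, so it must
# be iterated to run; JSON lines are yielded as dicts, everything else as plain strings:
#   for line in stream_command('naabu -host example.com -json', shell=True):
#       if isinstance(line, dict):
#           print(line.get('port'))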
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
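# Illustrative example (values are made up, assuming sanitize_url leaves the URL unchanged):
# a 301 response with a relative Location header is resolved against the original URL:
#   extract_httpx_url({'url': 'http://example.com', 'status_code': 301, 'location': '/home'})
#   -> ('http://example.com/home', True)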
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into, such as stackoverflow.com or the scan target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each request
page_count (int): number of Google result pages to extract information from
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
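# Illustrative usage (keyword and paths are placeholders; scan is assumed to be a
# startScan.ScanHistory instance): dork stackoverflow.com for mentions of the target,
# mirroring how dorking() calls this helper below:
#   get_and_save_dork_results('stackoverflow.com', '/tmp/results', type='stackoverflow',
#                             lookup_keywords='example.com', scan_history=scan)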
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as out_file:
for email_address in emails:
save_email(email_address, scan_history)
out_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
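# Illustrative usage (field values are made up and assume the Vulnerability model fields
# used elsewhere in this module; scan and domain are placeholders): relations such as tags,
# CVE/CWE ids and references are popped from the kwargs and linked after the record is created:
#   save_vulnerability(name='Example XSS', severity=2, http_url='https://example.com/?q=test',
#                      scan_history=scan, target_domain=domain,
#                      cve_ids=['CVE-2021-0000'], tags=['xss'])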
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict): Scan context (scan_history_id, domain_id, subscan_id, ...).
crawl (bool, optional): Run httpx on the endpoint if True. Default: False.
is_default (bool): If the url is a default url for SubDomains.
endpoint_data: Extra EndPoint fields (e.g. subdomain) passed as keyword arguments.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
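# Illustrative usage (ctx values are hypothetical): with crawl=False and an explicit scheme,
# a bare EndPoint record is stored without probing it first:
#   endpoint, created = save_endpoint('https://sub.example.com/login',
#                                     ctx={'scan_history_id': 1, 'domain_id': 1}, crawl=False)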
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Scan context (scan_history_id, domain_id, subscan_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
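# Illustrative usage (ctx values are hypothetical): invalid or out-of-scope names return
# (None, False) instead of a Subdomain object:
#   subdomain, created = save_subdomain('admin.example.com', ctx={'scan_history_id': 1, 'domain_id': 1})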
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
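# Illustrative usage (subdomain is a placeholder; the is_cdn flag assumes that field exists
# on the IpAddress model): extra keyword arguments are stored as attributes, and newly
# created IPs are geo-localized asynchronously:
#   ip, created = save_ip_address('203.0.113.10', subdomain=subdomain, is_cdn=False)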
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context (domain_id, scan_history_id, results_dir, ...).
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for every vulnerability with the same name whose URL contains this path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
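# Illustrative usage (IDs are hypothetical): scans are normally dispatched asynchronously
# from the web UI or API rather than called directly:
#   initiate_scan.apply_async(kwargs={'scan_history_id': 1, 'domain_id': 1, 'engine_id': 1})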
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | sort -u -o {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom tool {tool} not found in installed external tools. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' not in cmd or '{OUTPUT}' not in cmd:
logger.error(f'Missing {{TARGET}} or {{OUTPUT}} placeholders in {tool} configuration. Skipping.')
continue
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# TODO: Find root subdomain endpoints (placeholder, not implemented yet)
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to save emails, hosts and employees found for the domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of IPs unrelated to our domain are found; disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshots of a domain and/or URL.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
# Row format: Protocol, Port, Domain, Request Status, Screenshot Path, Source Path
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
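	# Each wafw00f output line is expected to look like "<url> <WAF name> (<manufacturer>)"; parse name and manufacturer from it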
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
		'''
		When fetching the URLs above, ignore_files is set to False because some
		default URLs may redirect to a file, e.g. https://example.com/login.php.
		During fuzzing, however, only the base URL matters: in that example it is
		still worth running ffuf against https://example.com, so we strip the path
		and fuzz from the base URL.
		'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
		header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
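	# Wrap each tool command in a common pipeline: cat the input file into the tool,
	# keep only URLs matching the target host regex, and write results to urls_<tool>.txt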
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
	# Some tools return URLs in the format <URL>] - <PATH> or <URL> - <PATH>;
	# add them to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
		if not validators.url(url):
			logger.warning(f'Invalid URL "{url}". Skipping.')
			continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
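	# Extract the HTTP status code from a raw curl/HTTP response (e.g. "HTTP/1.1 200 OK")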
# TODO: Enrich from other cURL fields.
	CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
	This function serves as an entrypoint for vulnerability scanning.
	All other vulnerability scans (nuclei, dalfox, crlfuzz, s3scanner, etc.) are run from here.
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
	This Celery task runs the vulnerability scan for a single severity.
	All supplied severities are run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
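		# Severity integers: 0=info, 1=low, 2=medium, 3=high, 4=critical, -1=unknown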
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
	# After the vulnerability scan is done, run the GPT report if
	# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
		Unfurl the URLs to keep only domain and path; certain file extensions are
		ignored before the URLs are sent to the vulnerability scan.
		Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
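		# unfurl reduces each URL to scheme://host/path and uro drops redundant variants,
		# shrinking the list of endpoints to scan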
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
		templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
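	# Run dalfox and stream its JSON output; each parsed line becomes a vulnerability record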
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
	# After the vulnerability scan is done, run the GPT report if
	# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
	# After the vulnerability scan is done, run the GPT report if
	# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
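	# Run s3scanner once per configured provider against the discovered subdomains file
	# and record only buckets that actually exist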
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
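	# Probe flags request extra data from httpx: content length/type, response time,
	# redirect location, tech detection, websocket, cname, ASN and CDN info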
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
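		# httpx reports response time as a duration string (e.g. "120ms"); convert it to seconds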
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
	if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
		subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
		subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
		add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) !=0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
		xml_file (str): nmap XML report file path.
		output_file (str, optional): If set, write the parsed nmap results as JSON to this path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
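	# xmltodict returns a dict when there is a single host and a list when there are several; normalize to a list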
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
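		# Entry lines have the form "[<id>] <title>"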
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
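# Hedged example (editor's sketch): a minimal input dict carrying only the keys
# read above; the field values are made up for illustration.
#
#   sample = {
#       'type': 'http',
#       'template': 'cves/2021/CVE-2021-44228.yaml',
#       'template-url': 'https://example.com/template',
#       'template-id': 'CVE-2021-44228',
#       'info': {'name': 'Log4j RCE', 'severity': 'critical'},
#   }
#   parse_nuclei_result(sample)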
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
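# Hedged usage sketch (editor's illustration; the field names are hypothetical):
#
#   data = {'name': 'api.example.com', 'raw_source': 'imported'}
#   if not record_exists(Subdomain, data, exclude_keys=['raw_source']):
#       ...  # safe to create a new Subdomain from `data`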
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
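# Hedged parsing note (editor's illustration): a typical geoiplookup line such as
#   GeoIP Country Edition: US, United States
# is split on ':' and ',' above, yielding iso='US' and name='United States'.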
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-fetch WHOIS data even if it already exists in DB. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # try not to delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
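# Hedged illustration (editor's note): with duplicate_removal_fields=['content_length']
# and a DELETE_DUPLICATES_THRESHOLD of, say, 10, a content_length shared by 50
# endpoints keeps only the earliest-discovered endpoint and deletes the rest,
# except root ('', '/') and '/login' paths which are preserved above.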
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
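# Hedged usage sketch (editor's illustration):
#
#   ret, out = run_command('whois example.com', shell=False,
#                          history_file='/tmp/commands.txt')
#   if ret != 0:
#       logger.error(out)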
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
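"""Run a command and stream its output line by line as it is produced.
Mirrors run_command's bookkeeping (Command record, history file) but yields
each output line, parsed as JSON when possible, instead of returning at the end.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
trunc_char (str): Trailing character to strip from each line.
Yields:
dict or str: Parsed JSON object when the line is valid JSON, else the raw line.
"""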
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell,
cwd=cwd)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(process.stdout.readline, ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
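# Hedged example (editor's sketch; the dict mirrors only the fields read above):
#
#   line = {'url': 'http://example.com', 'status_code': 301,
#           'location': '/login', 'chain_status_codes': [301]}
#   extract_httpx_url(line)
#   # -> roughly ('http://example.com/login', True), modulo sanitize_url() normalization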
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): Delay between each request.
page_count (int): Number of Google result pages to extract information from.
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cve_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cve_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
scan_history (startScan.models.ScanHistory): ScanHistory object.
domain (startScan.models.Domain): Domain object.
subdomain (starScan.models.Subdomain): Subdomain object.
results_dir (str, optional): Results directory.
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
force (bool, optional): Force crawl even if ENABLE_HTTP_CRAWL mode is on.
subscan (startScan.models.SubScan, optional): SubScan object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
# Try to get the first matching record (prevent duplicate error)
endpoints = EndPoint.objects.filter(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
if endpoints.exists():
endpoint = endpoints.first()
created = False
else:
# No existing record, create a new one
endpoint = EndPoint.objects.create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
created = True
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
scan_history (startScan.models.ScanHistory): ScanHistory object.
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
scan_history (startScan.models.ScanHistory): ScanHistory instance.
domain (startScan.models.Domain): Domain instance.
results_dir (str): Results directory.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities with the same name,
# on the condition that their http_url contains this path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| psyray | 7c01a46cea370e74385682ba7c28eaf4e58f5d69 | 2e089dc62f1bd64aa481750da10fa750e3aa232d | Is this comment still needed? | AnonymousWP | 7 |
yogeshojha/rengine | 1,063 | Fix crash on saving endpoint (FFUF related only) | Fix #1006
I've added:
- a **try except** block to catch errors on duplicate records returned by **get_or_create** in the **save_endpoint** method (see the hedged sketch below)
- a **check** on endpoint existence in the **dir_file_fuzz** method
Errors are logged to the console with the URL.
![image](https://github.com/yogeshojha/rengine/assets/1230954/3067c8a3-f44d-4b8f-b048-d1a356d542a2)
Tested and working
Now we need to find why there are duplicate endpoints in the DB.
But it's another issue | null | 2023-11-22 02:57:45+00:00 | 2023-11-27 12:37:27+00:00 | web/reNgine/tasks.py | import csv
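# --- Editor's note ------------------------------------------------------------
# A hedged sketch of the duplicate-record handling described in the PR text
# above. The names mirror this module, but the exact exception handling is an
# assumption for illustration, not the merged implementation:
#
#   try:
#       endpoint, created = EndPoint.objects.get_or_create(
#           scan_history=scan, target_domain=domain, http_url=http_url)
#   except EndPoint.MultipleObjectsReturned:
#       logger.error(f'Duplicate EndPoint records found for {http_url}, keeping the first one.')
#       endpoint = EndPoint.objects.filter(
#           scan_history=scan, target_domain=domain, http_url=http_url).first()
#       created = False
# -------------------------------------------------------------------------------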
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
#   (subdomain_discovery | osint)  -->  port_scan  -->  fetch_url
#     -->  (dir_file_fuzz | vulnerability_scan | screenshot | waf_detection)
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomains scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Tool "{tool}" was not found among the installed external tools. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all tools' results into a single output file,
# then sort and de-duplicate the subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata plus theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
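# theHarvester 'hosts' entries are typically "hostname:ip"; keep only the hostname part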
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 1 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
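# naabu -json emits one JSON object per discovered open port ('ip', 'port', optional 'host')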
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
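# Look up the registered service name and description for this port number via whatportis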
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
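# Each wafw00f output line is parsed as: "<url> <WAF name> (<manufacturer>)"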
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
While fetching URLs above, files were not ignored (ignore_files=False),
because some default URLs may redirect to e.g. https://example.com/login.php.
For fuzzing, however, only the base of the URL is needed
(https://example.com in the example above), so the path is stripped
and ffuf runs against the base URL.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
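# Pipe the input URL list into each tool, keep only in-scope URLs, and write one output file per tool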
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools output URLs in the format <URL>] - <PATH> or <URL> - <PATH>;
# add them to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
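# Feed all collected URLs through gf for this pattern and keep only in-scope matches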
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
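# Matches an HTTP status line (e.g. "HTTP/1.1 200 OK") and captures the numeric status code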
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This celery task will run vulnerability scan in parallel.
All severities supplied should run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurl the URLs to keep only domain and path; these are sent to the vulnerability scan,
ignoring certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
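# Reduce endpoints to scheme://domain/path form with unfurl and de-duplicate similar URLs with uro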
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after the vulnerability scan is done, generate GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
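# s3scanner marks a bucket that actually exists with bucket.exists == 1; only those
# results are parsed and attached to the scan history.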
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
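# httpx reports response time as a string with a unit suffix (e.g. '250ms', '1.2s');
# strip the letters and normalise milliseconds to seconds.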
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) !=0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
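# Submit the report to HackerOne's Hackers API; a 201 response means the report was created.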
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
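# No hostname entries in the nmap output; fall back to the scanned IP address.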
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
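# Entry lines have the form '[ID] Title'; capture both parts for the current provider.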
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for osvdb_id in osvdb_ids:
msg += f'\n\tOSVDB: {osvdb_id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
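# geoiplookup output looks like 'GeoIP Country Edition: US, United States';
# split on ':' and ',' to extract the ISO code and country name.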
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if cached domain info exists. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
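# Query netlas for live WHOIS/DNS data; the JSON response is mapped onto domain_info below.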
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
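# For any field value shared by more endpoints than DELETE_DUPLICATES_THRESHOLD,
# keep the earliest discovered endpoint and delete the rest.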
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
scan_id (int, optional): ScanHistory id used to record the command.
activity_id (int, optional): ScanActivity id used to record the command.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
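"""Run a command and stream its output line by line.
Each line is stripped of ANSI escape codes, optionally trimmed of a trailing
trunc_char, parsed as JSON when possible, and yielded to the caller. The full
output and return code are persisted to the Command record and, if given,
appended to history_file.
"""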
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
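# Treat the endpoint as a redirect when its own status code, or any code in the
# redirect chain, is 301/302; in that case prefer the Location value as the final URL.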
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each request
page_count (int): number of Google pages to extract information from
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict): Scan context (scan_history_id, domain_id, subscan_id, results_dir, ...).
crawl (bool, optional): Run httpx on the endpoint if True. Default: False.
is_default (bool): Whether the URL is the default URL for the subdomain.
**endpoint_data: Extra fields to set on the EndPoint object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
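# If crawling is enabled, delegate endpoint creation to http_crawl(), which probes the URL
# with httpx and records the resulting EndPoint; otherwise store an unprobed endpoint below.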
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Scan context (scan_history_id, subscan_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain or IP address. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context (domain_id, results_dir, ...).
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
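# Cache the generated report in DB so future lookups for the same title / URL path reuse it
# instead of calling GPT again.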
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# The description must be stored for all vulnerabilities sharing the same name,
# provided their URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
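# The subscan task (e.g. port_scan, vulnerability_scan) is resolved dynamically by name
# from the module globals.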
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom subdomain discovery tool {tool} is not installed. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
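# theHarvester reports hosts as 'hostname:ip'; keep only the hostname part and register it
# as a subdomain and endpoint.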
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 1 else f' -host {hosts[0]}'
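# Translate the port selection keywords into naabu flags: full range, top 100/1000, or an
# explicit comma-separated port list.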
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
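# naabu emits one JSON object per discovered port; stream_command yields each parsed line as a dict.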
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
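# Each wafw00f report line is expected to look like '<url> <WAF name> (<manufacturer>)';
# split it into URL, WAF name and manufacturer below.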
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
While fetching URLs above, files are not ignored (ignore_files=False) because
some default URLs may redirect to e.g. https://example.com/login.php.
During fuzzing, however, only the base of the URL is needed: in the example
above it is still better to fuzz the base URL https://example.com, so the
path is stripped and FUZZ is appended to the base URL.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
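# Illustrative behaviour of host_regex (assuming host == 'example.com'): the grep
# below keeps URLs such as https://example.com/... and https://app.example.com/...
# and drops out-of-scope hosts.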
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
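# Resulting shape of each pipeline (sketch, 'waybackurls' shown as an example):
#   cat <input_path> | waybackurls | grep -Eo '<host_regex>' > <results_dir>/urls_waybackurls.txt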
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools can output a URL in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint, e.g. from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint, e.g. from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
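# Illustrative use (hypothetical response text):
#   parse_curl_output('HTTP/1.1 301 Moved Permanently\r\n...') -> {'http_status': 301}
#   parse_curl_output(None) -> {'http_status': 0}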
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the vulnerability scan for a single severity.
All supplied severities run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities, excluding fields that may change between runs but are irrelevant for comparison.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {gpt}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurl the URLs to keep only domain and path; the result is sent to the vuln
scan while ignoring certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
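# Example of a command assembled above (illustrative values only; actual flags depend on config):
#   nuclei -j -irr -l <input_path> -c 25 -rl 150 -timeout 5 -silent -t <templates_path>
# nuclei_individual_severity_module then appends '-severity <severity>' per grouped task.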
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {gpt}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {gpt}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
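# e.g. rt == '243.5ms' -> 0.2435 s, rt == '1.2s' -> 1.2 s (assuming httpx reports the
# response time with a unit suffix, which is stripped and converted above).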
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
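# Reverse map from the numeric severity stored in the DB back to its label,
# e.g. 4 -> 'critical' (assuming NUCLEI_SEVERITY_MAP maps labels to integers).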
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
output_file (str, optional): If set, the parsed results are also dumped to this JSON file.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
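# Note: xmltodict yields a dict for a single <host>/<port>/<script> element and a list
# for multiple ones, hence the isinstance() checks here and for ports/scripts below.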
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
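# Illustrative (approximate) vulners NSE output line that the CVE regex above matches:
#   "CVE-2021-44228   10.0   https://vulners.com/cve/CVE-2021-44228"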
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
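# CVSS-to-severity mapping used above: <4 low, <7 medium, <9 high, otherwise critical
# (an unknown CVSS of -1 therefore also ends up as 'low').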
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
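# Sketch of intended usage (illustrative values only):
#   record_exists(Vulnerability, data={'name': 'XSS', 'response': '...'}, exclude_keys=['response'])
#   -> filters on 'name' only and returns True if a matching row already exists.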
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if domain info already exists in DB. Default False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address': domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
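# Hedged illustration (added note, not original code): the dict returned above is plain
# JSON-serializable data, so a caller could consume it roughly as below. The task name is
# assumed to be the whois-lookup task whose body ends here; it is not shown in this excerpt.
#
#   result = query_whois('example.com')  # assumed name, for illustration only
#   if result['status']:
#       print(result['registrar']['name'], result['nameservers'])
#   else:
#       print(result['message'])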
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
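# Hedged usage sketch (added for illustration; IDs and statuses are placeholders):
# remove_duplicate_endpoints is a Celery task, so it is normally queued asynchronously.
#
#   remove_duplicate_endpoints.delay(
#       scan_history_id=1,
#       domain_id=1,
#       filter_status=[200],
#   )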
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
scan_id (int, optional): ScanHistory id to attach the command record to.
activity_id (int, optional): ScanActivity id to attach the command record to.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
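# Hedged usage sketch (illustrative only; the command is an arbitrary example):
# run_command returns a (return_code, output) tuple and records the run in the Command table.
#
#   ret_code, out = run_command('echo hello', shell=True, history_file='/tmp/cmd_history.txt')
#   if ret_code != 0:
#       logger.error(out)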
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
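# Hedged example (added for clarity; the sample httpx line is made up): a 301 response with a
# relative Location header is resolved against the original URL before being sanitized.
#
#   sample = {'url': 'http://example.com', 'status_code': 301, 'location': '/login', 'chain_status_codes': []}
#   final_url, is_redirect = extract_httpx_url(sample)
#   # expected (after sanitize_url): final_url ~ 'http://example.com/login', is_redirect True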
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each request
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
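# Hedged usage sketch (argument values are placeholders): this mirrors how the dorking task
# above calls the helper for a keyword-based dork.
#
#   urls = get_and_save_dork_results(
#       lookup_target='example.com',
#       results_dir='/tmp/results',
#       type='login_pages',
#       lookup_keywords='/login/,login.html',
#       scan_history=scan_history,
#   )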
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
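# Hedged usage sketch (field values are examples; only fields accepted by the Vulnerability
# model should be passed as keyword arguments):
#
#   vuln, created = save_vulnerability(
#       name='Example finding',
#       http_url='https://example.com/login',
#       cve_ids=['CVE-2023-0000'],
#       tags=['example'],
#   )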
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
scan_history (startScan.models.ScanHistory): ScanHistory object.
domain (startScan.models.Domain): Domain object.
subdomain (starScan.models.Subdomain): Subdomain object.
results_dir (str, optional): Results directory.
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
force (bool, optional): Force crawl even if ENABLE_HTTP_CRAWL mode is on.
subscan (startScan.models.SubScan, optional): SubScan object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
# Try to get the first matching record (prevent duplicate error)
endpoints = EndPoint.objects.filter(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
if endpoints.exists():
endpoint = endpoints.first()
created = False
else:
# No existing record, create a new one
endpoint = EndPoint.objects.create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
created = True
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
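# Hedged usage sketch (ctx content is a placeholder): callers should handle the (None, False)
# return used above for invalid or out-of-domain URLs.
#
#   endpoint, created = save_endpoint('https://example.com/admin', ctx=ctx, crawl=False)
#   if endpoint is None:
#       logger.warning('Endpoint was not saved (invalid URL or wrong domain).')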
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
scan_history (startScan.models.ScanHistory): ScanHistory object.
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
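# Hedged usage sketch (IDs are placeholders): ctx carries the scan and domain identifiers
# used by the validation above.
#
#   subdomain, created = save_subdomain('api.example.com', ctx={'scan_history_id': 1, 'domain_id': 1})
#   if subdomain and created:
#       logger.info(f'New subdomain stored: {subdomain.name}')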
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
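# Hedged usage sketch (values are examples): extra keyword arguments are set as attributes
# on the IpAddress instance before it is saved.
#
#   ip, created = save_ip_address('93.184.216.34', subdomain=subdomain, is_cdn=False)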
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
scan_history (startScan.models.ScanHistory): ScanHistory instance.
domain (startScan.models.Domain): Domain instance.
results_dir (str): Results directory.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# This description has to be stored for all vulnerabilities with the same name.
# The condition is that the URL must contain a part of this path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| psyray | 7c01a46cea370e74385682ba7c28eaf4e58f5d69 | 2e089dc62f1bd64aa481750da10fa750e3aa232d | Don't know, I could delete it | psyray | 8 |
yogeshojha/rengine | 1,063 | Fix crash on saving endpoint (FFUF related only) | Fix #1006
I've added:
- a **try except** block to catch errors on duplicate records returned by **get_or_create** in the **save_endpoint** method
- a **check** on endpoint existence in the **dir_file_fuzz** method
Errors are logged to the console with the URL.
![image](https://github.com/yogeshojha/rengine/assets/1230954/3067c8a3-f44d-4b8f-b048-d1a356d542a2)
Tested and working
Now we need to find why there are duplicate endpoints in the DB
But it's another issue | null | 2023-11-22 02:57:45+00:00 | 2023-11-27 12:37:27+00:00 | web/reNgine/tasks.py | import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
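# Hedged usage sketch (added note; IDs are placeholders): initiate_scan is normally queued
# through Celery rather than called synchronously.
#
#   initiate_scan.apply_async(kwargs={
#       'scan_history_id': 1,
#       'domain_id': 1,
#       'engine_id': 1,
#       'scan_type': LIVE_SCAN,
#   })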
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'{tool} is not registered in InstalledExternalTool. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (int): ScanHistory ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
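# theHarvester flags: -d sets the target domain, -b all selects all data sources, -f points at the output file read back as JSON below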
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of IPs unrelated to our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (int): ScanHistory ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
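# h8mail flags: -t reads the target emails from the file above, --json writes results to output_file (parsed below)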
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = intensity == 'normal'
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
# use -list for multiple hosts, -host when scanning a single one
cmd += f' -list {input_file}' if len(hosts) > 1 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
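# Illustrative final command (flags depend on config): naabu -json -exclude-cdn -list <input> -top-ports 100 -c 30 -rate 150 -silent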
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found open port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
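# e.g. rate_limit=150 and threads=30 gives a per-request delay of 150 / (30 * 100) = 0.05 s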
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
While fetching URLs above, ignore_files was set to False because some
default URLs may redirect to a file such as https://example.com/login.php.
During fuzzing, however, only the base of the path is needed: in the example
above it is still worth fuzzing the base URL https://example.com so that
files are discovered from the site root.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
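# hakrawler and katana accept all headers as a single ';;'-joined string; gospider needs one -H flag per header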
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
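# Each entry now expands to a full shell pipeline, e.g. for waybackurls:
# cat <input_path> | waybackurls | grep -Eo '<host_regex>' > <results_dir>/urls_waybackurls.txt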
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools output URLs in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if len(split) != 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
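# e.g. a raw response starting with 'HTTP/1.1 200 OK' yields http_status == 200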
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function serves as the entrypoint for the vulnerability scan.
All other vulnerability scans, including nuclei, dalfox, crlfuzz and s3scanner, are launched from here.
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the nuclei vulnerability scan for a single severity.
All supplied severities are run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurl the URLs to keep only domain and path; these are sent to the vuln scan,
ignoring certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
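# unfurl -u format %s://%d%p keeps only scheme://domain/path (dropping query strings) and uro then removes
# duplicate/similar URLs, e.g. https://example.com/a?x=1 and https://example.com/a?x=2 collapse to https://example.com/a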
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
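# Each configured provider is scanned separately, feeding the discovered subdomains as candidate bucket names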
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
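# Illustrative final command (flags depend on config): httpx -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent -t 30 -json -l <input> -silent -fr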
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
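# e.g. rt == '253.3ms' becomes 0.2533 (seconds); values already expressed in seconds are kept as-is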
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
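# Illustration (hypothetical output line) of what parse_nmap_vulners_output() extracts:
# given a vulners NSE line such as "  CVE-2017-0144  9.3  https://vulners.com/cve/CVE-2017-0144",
# CVE_REGEX yields ['CVE-2017-0144'], which is then enriched through cve_to_vuln() below.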
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
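# Sketch of the CVSS -> severity mapping used above (scores are illustrative only):
# a CVE with CVSS 3.5 maps to 'low', 5.0 to 'medium', 7.5 to 'high', 9.8 to 'critical';
# a missing CVSS (-1) also falls into the 'low' bucket since -1 < 4.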
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
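# Sketch of a truncated, hypothetical nuclei JSON line consumed by parse_nuclei_result():
# {"template-id": "git-config", "type": "http", "info": {"name": "Git Config Disclosure",
#  "severity": "medium", "classification": {"cvss-score": 5.3}}}
# Severity strings are mapped to integers via NUCLEI_SEVERITY_MAP before saving.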
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
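# Minimal usage sketch for record_exists() (values are hypothetical): skip re-saving a
# vulnerability when an identical row (ignoring volatile fields) is already stored.
#   if not record_exists(Vulnerability, data=vuln_data, exclude_keys=['discovered_date']):
#       save_vulnerability(**vuln_data)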
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
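# geo_localize() relies on the classic geoiplookup output format, e.g. (assumed sample):
#   "GeoIP Country Edition: US, United States"
# which the parsing above splits on ':' and ',' into iso='US' and name='United States'.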
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if domain info already exists in DB. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
scan_id (int): ScanHistory id to attach the Command record to.
activity_id (int): ScanActivity id to attach the Command record to.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
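# Minimal usage sketch (command chosen for illustration, mirroring geo_localize() above);
# the return code and full output are also persisted on a Command record for the scan:
#   return_code, output = run_command(f'geoiplookup {host}', scan_id=scan_id)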
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
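# Behaviour sketch for extract_httpx_url() with assumed httpx fields (before sanitize_url):
#   {'url': 'http://example.com', 'status_code': 301, 'location': '/login'}
#     -> ('http://example.com/login', True)   # relative redirect appended to base URL
#   {'url': 'https://example.com', 'final_url': 'https://www.example.com'}
#     -> ('https://www.example.com', False)   # final_url wins and is not flagged as redirect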
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): Delay in seconds between each request.
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
ctx (dict, optional): Scan context (scan_history_id, domain_id, subscan_id, results_dir, ...).
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
endpoint_data: Extra EndPoint fields (e.g. subdomain) passed through to the model.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
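# Minimal usage sketch (names assumed from the calling context in http_crawl above): create
# an endpoint without probing it, or let httpx probe it first by passing crawl=True:
#   endpoint, created = save_endpoint('https://example.com/login', ctx=ctx, crawl=False,
#                                     subdomain=subdomain, is_default=False)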
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict, optional): Scan context (scan_history_id, subscan_id, domain_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context (domain_id, results_dir, ...).
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# The same description is stored for all vulnerabilities sharing this name,
# on the condition that their URL contains this path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
	# Build Celery tasks, crafted according to the dependency graph below:
	#   (subdomain_discovery + osint)
	#       --> port_scan
	#       --> fetch_url
	#       --> (dir_file_fuzz + vulnerability_scan + screenshot + waf_detection)
	# vulnerability_scan itself fans out to nuclei / dalfox xss / crlfuzz / s3scanner.
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
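# Hedged usage sketch: a scan is normally enqueued by the web UI / API with something
# like the call below (all IDs are hypothetical; real values come from ScanHistory,
# Domain and EngineType rows):
#   initiate_scan.apply_async(kwargs={'scan_history_id': 1, 'domain_id': 1, 'engine_id': 2})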
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
		scan_type (str): Name of the task to run for this subscan (resolved via globals()).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
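# Hedged usage sketch for rescanning a single subdomain (values are hypothetical).
# Note that scan_type must be the name of a task defined in this module, since it is
# resolved via globals() above:
#   initiate_subscan.apply_async(kwargs={'scan_history_id': 1, 'subdomain_id': 10, 'scan_type': 'port_scan'})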
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
		logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
			if tool == 'amass-passive':
				use_amass_config = config.get(USE_AMASS_CONFIG, False)
				cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
				cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
			tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
			if not tool_query.exists():
				logger.error(f'Custom subdomain tool "{tool}" was not found in InstalledExternalTool. Skipping.')
				continue
			custom_tool = tool_query.first()
			cmd = custom_tool.subdomain_gathering_command
			if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
				cmd = cmd.replace('{TARGET}', host)
				cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
				cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
			else:
				logger.error(f'Missing {{TARGET}} or {{OUTPUT}} placeholders in the command for "{tool}". Skipping.')
				continue
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
	# Gather all the tools' results into one single file, then sort and
	# de-duplicate all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
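# Hedged example of a custom subdomain-gathering command template, as consumed by the
# custom-tool branch above (the tool script and path are hypothetical):
#   'python3 {PATH}/mytool.py -d {TARGET} -o {OUTPUT}'
# {TARGET}, {OUTPUT} and the optional {PATH} placeholders are substituted before the
# command is executed.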
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
		dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
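# Hedged sketch of the YAML snippet the dorking task expects. The exact key names come
# from the OSINT_DORK / OSINT_CUSTOM_DORK constants, so the spelling below is assumed:
#   osint:
#     dorks: [login_pages, admin_panels, config_files]
#     custom_dorks:
#       - lookup_site: _target_
#         lookup_extensions: 'env,bak'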
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
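# Hedged sketch of the theHarvester JSON fields consumed above (illustrative values):
#   {'emails': ['admin@example.com'],
#    'linkedin_people': ['Jane Doe'],
#    'twitter_people': ['@example'],
#    'hosts': ['dev.example.com:203.0.113.7']}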
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
		email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
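# Hedged sketch of a single h8mail 'targets' entry as consumed above (illustrative
# values; the nested 'data' layout is assumed from h8mail's JSON output):
#   {'target': 'admin@example.com', 'pwn_num': 2, 'data': [['breach-name', '...']]}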
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
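# Hedged sketch of one naabu JSON line as parsed above (illustrative values):
#   {'host': 'dev.example.com', 'ip': '203.0.113.10', 'port': 8443}
# Lines with port 0 are skipped, and ports 80/443 do not get an extra endpoint because
# the HTTP crawl already covers them.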
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
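# Hedged sketch of a wafw00f output line as parsed above (illustrative; whitespace is
# already collapsed to single spaces by the loop):
#   'https://example.com Cloudflare (Cloudflare Inc.)'
# The first token is taken as the URL, the text before '(' as the WAF name and the
# parenthesised part as the manufacturer.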
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
		'''
		While fetching URLs above, ignore_files is set to False because some
		default URLs may redirect to a file, e.g. https://example.com/login.php.
		For fuzzing only the base of that path is needed, so the base URL
		(e.g. https://example.com) is fuzzed instead.
		'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
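# Hedged sketch of one ffuf JSON result line as parsed above (illustrative values;
# 'duration' is in nanoseconds, hence the division by 1e9 above):
#   {'input': {'FUZZ': 'admin'}, 'url': 'https://example.com/admin', 'status': 200,
#    'length': 1234, 'words': 57, 'lines': 10, 'content-type': 'text/html',
#    'duration': 120000000}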
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
		header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools can have an URL in the format <URL>] - <PATH> or <URL> - <PATH>, add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
		if not validators.url(url):
			logger.warning(f'Invalid URL "{url}". Skipping.')
			continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
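# Hedged examples of the two scraped-endpoint line formats handled above (illustrative
# only, inferred from the parsing logic rather than exact tool output):
#   'https://example.com] - /assets/app.js'
#   'https://example.com - /login'
# In both cases only scheme://netloc (plus the optional URL filter) is kept.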
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
	CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
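# Hedged usage sketch for the helper above (the response text is illustrative):
#   parse_curl_output('HTTP/1.1 200 OK\r\nServer: nginx')  # -> {'http_status': 200}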
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the nuclei vulnerability scan for a single severity.
All supplied severities are executed in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# After the vulnerability scan is done, run GPT reporting if
# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# Find all unique vulnerabilities based on path and title.
# Each unique vulnerability goes through the GPT report function once;
# the resulting report is then matched with the other vulnerabilities and saved.
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
URLs are unfurled to keep only scheme, domain and path before being sent to the
vulnerability scan, and certain file extensions are ignored. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
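# unfurl's 'format %s://%d%p' keeps only scheme, domain and path (dropping query
# strings and fragments); uro then de-duplicates similar URLs to reduce the scan surface.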
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
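# nuclei flags as used here: -j requests JSON(L) output, -irr includes the
# request/response pair in each result, -rl sets the rate limit, -silent suppresses the banner.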
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
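# dalfox flags as used here: --only-poc r limits PoC output to reflected findings,
# --ignore-return skips responses with the listed status codes, --skip-bav disables
# dalfox's built-in BAV (basic another vulnerability) checks, and --format json
# makes each finding parseable in the loop below.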
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, run GPT reporting if
# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
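# crlfuzz flags as used here: -s silent output, -l target list file, -x proxy, -o output file.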
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, run GPT reporting if
# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
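# s3scanner treats each line of the subdomain file as a candidate bucket name for the
# given provider and, with -json, emits one JSON object per probed bucket.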
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
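# httpx flags as used here: -cl/-ct report content length and type, -rt response time,
# -location the redirect target, -td detected technologies, -probe marks failed hosts
# instead of dropping them, -random-agent rotates the User-Agent.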
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
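# httpx reports response time with a unit suffix (e.g. '51ms' or '1.2s');
# strip the unit and normalize milliseconds to seconds.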
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain details
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
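# When no custom title is supplied, enrich the message with scan / subscan metadata
# before fanning out to Discord, Slack and Telegram.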
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) !=0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
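# nmap's XML serializes a single host as a dict and multiple hosts as a list;
# normalize to a list so the loop below works for both.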
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
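# Entry lines look like '[CVE-2021-1234] Some vulnerability title'; capture the id and title below.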
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.info(f'Parsing CVE entries from provider {provider_name}.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
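# CVSS thresholds used below: < 4 low, < 7 medium, < 9 high, otherwise critical
# (an unknown score of -1 therefore maps to 'low').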
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
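# geoiplookup output looks roughly like 'GeoIP Country Edition: US, United States';
# split on ':' and ',' to extract the ISO code and country name.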
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to refresh WHOIS info even if it already exists in DB, default False
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
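# tlsx flags as used here: -san/-cn pull subject alternative names and the common name
# from the TLS certificate, -ro prints only those values; sibling domains found this way
# are treated as related TLDs below.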
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
similar_domains = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
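# netlas returns a JSON document whose 'whois' and 'dns' sections are mapped onto
# domain_info below; the -a flag passes the API key when one is configured.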
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
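# Illustrative sketch (hypothetical helper, not used elsewhere): how a caller
# might read the dict returned above. The field names mirror the keys built in
# this return value; the formatting choices are only an example.
def _example_read_whois_result(result):
    """Build a one-line summary from a WHOIS result dict (sketch only)."""
    if not result.get('status'):
        return f"WHOIS lookup failed: {result.get('message')}"
    registrar = (result.get('registrar') or {}).get('name') or 'unknown registrar'
    expires = result.get('expires') or 'unknown expiry date'
    return f"{result.get('ip_domain')} is registered with {registrar} and expires on {expires}"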
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
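# Illustrative sketch (hypothetical helper, not used elsewhere): the task above
# groups endpoints by one field value and deletes all but the oldest entry once
# a group grows past DELETE_DUPLICATES_THRESHOLD. The same grouping idea with
# plain dicts, without the ORM:
def _example_group_duplicates(endpoints, field_name, threshold):
    """Return {field_value: [endpoints]} for groups larger than `threshold` (sketch)."""
    groups = {}
    for ep in endpoints:
        groups.setdefault(ep.get(field_name), []).append(ep)
    return {value: eps for value, eps in groups.items() if len(eps) > threshold}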
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
scan_id (int, optional): ScanHistory id to attach the command record to.
activity_id (int, optional): ScanActivity id to attach the command record to.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
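# Illustrative sketch (hypothetical helper, not used elsewhere): the core
# capture loop of run_command() without the Command model bookkeeping.
def _example_capture_output(cmd_parts):
    """Run a command given as a list of args and return (return_code, output) - sketch."""
    proc = subprocess.Popen(
        cmd_parts,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        universal_newlines=True)
    output = ''
    for stdout_line in iter(proc.stdout.readline, ''):
        output += stdout_line
    proc.stdout.close()
    proc.wait()
    return proc.returncode, output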
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(process.stdout.readline, ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
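# Illustrative sketch (hypothetical helper, not used elsewhere): stream_command()
# yields each output line as soon as it is produced, parsing JSON lines into
# dicts when possible. A consumer that separates parsed and plain lines:
def _example_consume_stream(lines):
    """Split an iterable of raw lines into (parsed_dicts, plain_strings) - sketch."""
    parsed, plain = [], []
    for line in lines:
        try:
            parsed.append(json.loads(line))
        except (json.JSONDecodeError, TypeError):
            plain.append(line)
    return parsed, plain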
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
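# Illustrative example (hypothetical input): the redirect handling above fed with
# a made-up httpx result line. The returned URL also goes through sanitize_url(),
# so the exact string may be normalized further.
#
#   line = {'url': 'http://example.com', 'status_code': 301,
#           'location': '/login', 'chain_status_codes': [301]}
#   extract_httpx_url(line)   # -> ('http://example.com/login', True)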
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): Delay in seconds between each request.
page_count (int): Number of Google result pages to extract information from.
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id if scan_history else None,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
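# Illustrative sketch (hypothetical helper, not used elsewhere): the gofuzz
# command above is assembled from optional flags. The same flag building with a
# placeholder binary name standing in for GOFUZZ_EXEC_PATH:
def _example_build_gofuzz_cmd(target, delay=3, page_count=2, extensions=None, keywords=None):
    """Return a gofuzz-style command string (flags mirror the task above) - sketch."""
    cmd = f'gofuzz -t {target} -d {delay} -p {page_count}'
    if extensions:
        cmd += f' -e {extensions}'
    elif keywords:
        cmd += f' -w {keywords}'
    return cmd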
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict): Scan context carrying scan_history_id, domain_id, subscan_id and results_dir.
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
is_default (bool): If the url is a default url for SubDomains.
endpoint_data: Extra EndPoint fields to set on creation.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
# Try to get the first matching record (prevent duplicate error)
endpoints = EndPoint.objects.filter(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
if endpoints.exists():
endpoint = endpoints.first()
created = False
else:
# No existing record, create a new one
endpoint = EndPoint.objects.create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
created = True
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
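# Illustrative note (sketch only): the non-crawl branch above deliberately avoids
# get_or_create() so that pre-existing duplicate rows cannot raise
# MultipleObjectsReturned; it filters first and only creates when nothing matches.
# The same guard pattern in isolation (field values are placeholders):
#
#   existing = EndPoint.objects.filter(scan_history=scan, target_domain=domain, http_url=url)
#   if existing.exists():
#       endpoint, created = existing.first(), False
#   else:
#       endpoint = EndPoint.objects.create(scan_history=scan, target_domain=domain, http_url=url)
#       created = True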
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Scan context carrying scan_history_id, subscan_id, domain_id and out_of_scope_subdomains.
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain or IP address. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
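# Illustrative sketch (hypothetical helper, not used elsewhere): the scope checks
# used by save_subdomain(), collapsed into one predicate. The domain-membership
# check is simplified; the task above only applies it when ctx carries a domain_id.
def _example_is_in_scope(name, target_domain, out_of_scope):
    """Return True if `name` would pass save_subdomain() validation (sketch)."""
    valid = (
        validators.domain(name) or
        validators.ipv4(name) or
        validators.ipv6(name)
    )
    return bool(valid) and target_domain in name and name not in out_of_scope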
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context carrying domain_id and results_dir.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities with the same name,
# provided their URL contains this path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| psyray | 7c01a46cea370e74385682ba7c28eaf4e58f5d69 | 2e089dc62f1bd64aa481750da10fa750e3aa232d | Not needed, feel free to remove | yogeshojha | 9 |
yogeshojha/rengine | 1,063 | Fix crash on saving endpoint (FFUF related only) | Fix #1006
I've added:
- a **try/except** block to catch errors on duplicate records returned by **get_or_create** in the **save_endpoint** method
- a **check** on endpoint existence in the **dir_file_fuzz** method
Errors are logged to the console with the URL.
![image](https://github.com/yogeshojha/rengine/assets/1230954/3067c8a3-f44d-4b8f-b048-d1a356d542a2)
Tested and working.
Now we need to find why there are duplicate endpoints in the db
But it's another issue | null | 2023-11-22 02:57:45+00:00 | 2023-11-27 12:37:27+00:00 | web/reNgine/tasks.py | import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
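# Illustrative note (sketch only): the workflow above follows a generic Celery
# pattern - a group of independent tasks feeding a chain, with a report callback
# attached to both the success and the error path. Task names below are placeholders:
#
#   workflow = chain(
#       group(task_a.si(), task_b.si()),   # run in parallel
#       task_c.si(),                       # then run sequentially
#   )
#   callback = finalize.si().set(link_error=[finalize.si()])
#   chain(workflow, callback).on_error(callback).delay()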
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"sed 's/\*.//g' {results_file} | tail -n +12 | sort -u -o {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom subdomain tool {tool} was not found in installed external tools. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
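# Illustrative sketch (hypothetical helper, not used elsewhere): each line
# produced by the tools above may be a bare domain, an IP or a full URL; URLs
# are reduced to their hostname before being saved. The same normalization in isolation:
def _example_normalize_discovered_name(line):
    """Return a hostname suitable for save_subdomain(), or None if invalid (sketch)."""
    name = line.strip()
    if validators.url(name):
        name = urlparse(name).netloc
    if validators.domain(name) or validators.ipv4(name) or validators.ipv6(name):
        return name
    return None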
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: osint metadata and theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to gather and save emails, hosts and employees found for a domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshots of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = intensity == 'normal'
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
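# Illustrative example of an assembled command (actual flags depend on config):
# naabu -json -exclude-cdn -list <results_dir>/input_subdomains_port_scan.txt -p 80,443,8080 -c 30 -rate 150 -silent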
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
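# Each JSON line emitted by naabu is expected to look roughly like
# {"host": "sub.example.com", "ip": "203.0.113.10", "port": 8080}
# (shape assumed from the fields consumed below).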
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
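# Example: with rate_limit=150 and threads=30, delay = 150 / (30 * 100) = 0.05s between requests.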
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
When fetching URLs above, ignore_files is set to False because some
default URLs may redirect to files such as https://example.com/login.php.
During fuzzing, however, only the scheme and host are kept, so the base
URL (e.g. https://example.com) is fuzzed and files served from it are
still discovered.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')  # one 'Key: value' string per header
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
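# Each resulting entry is a shell pipeline, e.g. (illustrative):
# cat <input_path> | gau | grep -Eo '<host_regex>' > <results_dir>/urls_gau.txt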
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools output URLs in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
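# Example: the status line "HTTP/1.1 200 OK" yields a captured group of "200".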
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
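# Minimal usage sketch (illustrative input; any raw response containing an HTTP
# status line works):
# parse_curl_output('HTTP/1.1 301 Moved Permanently\r\nLocation: https://example.com/')
# -> {'http_status': 301}; returns {'http_status': 0} when no status line is found.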
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This celery task will run vulnerability scan in parallel.
All severities supplied should run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurl the URLs to keep only domain and path before sending them to the vuln
scan, and ignore certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
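# Illustrative example of an assembled command (actual flags depend on config):
# nuclei -j -irr -l <input_path> -c 30 -rl 150 -timeout 5 -silent -t <templates_path>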
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
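# Illustrative example of an assembled command (actual flags depend on config):
# dalfox --silence --no-color --no-spinner --only-poc r --ignore-return 302,404,403 --skip-bav file <input_path> --worker 30 --format json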
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
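# Illustrative example of an assembled command (actual flags depend on config):
# /go/bin/httpx -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent -t 30 -json -l <input_path> -silent -fr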
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
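# httpx reports response time as a duration string; normalize it to seconds below,
# e.g. '725.4ms' -> 0.7254 and '1.2s' -> 1.2 (assumed formats).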
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
vuln_id, title = matches.groups()
entry = {'id': vuln_id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
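# Illustrative sketch (assumed sample, not taken from real output): a vulners NSE line such as
# "cpe:/a:openbsd:openssh:7.4:\n\tCVE-2018-15473\t5.0\thttps://vulners.com/cve/CVE-2018-15473"
# would yield the single CVE id 'CVE-2018-15473', which is then enriched via cve_to_vuln().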
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for osvdb_id in osvdb_ids:
msg += f'\n\tOSVDB: {osvdb_id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
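# Note on the mapping above: a missing CVSS score defaults to -1, which falls into the 'low'
# bucket (since -1 < 4); the initial 'info' value is always overwritten by the if/elif chain.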
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
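# Illustrative input sketch (field names inferred from the parser above; real S3Scanner JSON
# output may differ slightly):
# line = {'bucket': {'name': 'acme-assets', 'region': 'us-east-1', 'provider': 'aws',
#                    'owner_display_name': '', 'owner_id': '', 'num_objects': 42,
#                    'bucket_size': 1048576, ...}}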
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
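# Usage sketch (hypothetical data, for illustration only):
# if not record_exists(Vulnerability, vuln_data, exclude_keys=['description']):
#     save_vulnerability(**vuln_data)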
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Re-query WHOIS even if domain info already exists in DB. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = bool(whois.get('dnssec'))
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
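# Usage sketch: query_whois('example.com') returns the dict above synchronously, while
# query_whois.delay('example.com', force_reload_whois=True) schedules it on 'query_whois_queue'.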
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original pages that other pages commonly redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
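# Usage sketch (hypothetical ids): remove_duplicate_endpoints.delay(scan_history_id=1, domain_id=2)
# prunes endpoints that share the same content_length or page_title once the duplicate count
# exceeds DELETE_DUPLICATES_THRESHOLD, keeping the oldest discovered endpoint.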
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within a separate shell if True.
history_file (str): Write command + output to history file.
scan_id (int, optional): ScanHistory id to attach the Command record to.
activity_id (int, optional): ScanActivity id to attach the Command record to.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
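# Usage sketch (illustrative command, paths and ids):
# return_code, output = run_command(
#     'nmap -sV 127.0.0.1',
#     shell=False,
#     history_file='/tmp/commands.txt',
#     scan_id=scan.id)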
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
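# Illustrative sketch: an httpx line such as
# {'url': 'http://example.com', 'location': '/login', 'status_code': 301, 'chain_status_codes': [301]}
# has no 'final_url', so the redirect is resolved manually and ('http://example.com/login', True)
# is returned (after sanitize_url).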
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): Delay in seconds between each request.
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cve_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cve_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict): Task context (scan_history_id, domain_id, subscan_id, ...).
crawl (bool, optional): Probe the endpoint with httpx if True. Default: False.
is_default (bool): Whether this URL is the default endpoint for its subdomain.
**endpoint_data: Extra EndPoint fields (e.g. subdomain) passed to get_or_create.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
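# Usage sketch (ctx keys are those built in initiate_scan / initiate_subscan):
# endpoint, created = save_endpoint('https://sub.example.com/admin', ctx=ctx, crawl=True)
# When crawl=True the URL is probed with httpx first; otherwise a bare EndPoint row is
# created (or fetched) without probing.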
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Task context (scan_history_id, subscan_id, domain_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
scan_history (startScan.models.ScanHistory): ScanHistory instance.
domain (startScan.models.Domain): Domain instance.
results_dir (str): Results directory.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities sharing the same name,
# provided their URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
ctx (dict): Scan context (scan_history_id, subscan_id, engine_id).
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#--------------------------#
#   Tracked reNgine tasks  #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
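# Strip wildcard entries and the ctfr banner (first 11 lines), then dedupe and sort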
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
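# Keep only hostnames ending with the target host (i.e. its subdomains), deduped and sorted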
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom subdomain gathering tool "{tool}" was not found in InstalledExternalTool. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results into a single file, then deduplicate and
# sort the subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# TODO: find root subdomain endpoints (currently a no-op)
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata plus theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork run.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
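# theHarvester reports hosts as '<hostname>:<ip>'; keep only the hostname part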
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
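# naabu expects -timeout in milliseconds, hence timeout * 1000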
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
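# e.g. with rate_limit=150 and threads=30 (illustrative values), delay = 150 / (30 * 100) = 0.05s between requests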
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
While fetching URLs above, files are not ignored (ignore_files=False)
because some default URLs may redirect to a file, e.g.
https://example.com/login.php. For fuzzing, however, only the base URL is
needed (https://example.com in that example), so the path is stripped
below and FUZZ is appended to the base URL.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
if not endpoint:
continue
endpoint.http_status = status
endpoint.content_length = length
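# ffuf reports duration in nanoseconds; convert to seconds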
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
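# Regex used to keep only URLs whose hostname is the target domain or one of its subdomains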
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools can output a URL in the format "<URL>] - <PATH>" or "<URL> - <PATH>";
# normalize those entries before adding them to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
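# Keep only scheme + host of JS-scraped endpoints and apply the optional URL path filter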
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
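# Extract the HTTP status code from the status line of a raw response (e.g. "HTTP/1.1 200 OK" -> 200)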
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the nuclei vulnerability scan for a single severity.
All supplied severities are run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# After the vulnerability scan is done, run GPT report generation if
# should_fetch_gpt_report is set and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
URLs are unfurled to keep only domain and path before being sent to the
vulnerability scan; certain file extensions are ignored. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
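# Resolve which nuclei template paths to use: fall back to the default
# templates path when nothing is configured, use the full default set when
# 'all' is selected, and append any custom templates as .yaml paths.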
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
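# Fan out one nuclei task per configured severity and run them in parallel
# as a Celery group, then poll until every severity has finished.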
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after the vulnerability scan is done, run the GPT report if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
vuln = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after the vulnerability scan is done, run the GPT report if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
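# Enumerate buckets per provider; s3scanner emits one JSON line per bucket
# check, and only buckets reported as existing are persisted to the scan.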
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
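# Ask httpx for response metadata (content length/type, response time,
# detected technologies, CNAME/ASN/CDN info) plus probe status, using a
# random user agent.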
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
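# httpx reports response time as a string such as '123ms' or '1.2s';
# strip the unit and normalise the value to seconds.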
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames_dict['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
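# Entry lines are expected in the form '[<id>] <title>',
# e.g. '[CVE-2014-3566] SSLv3 POODLE information disclosure'.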
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for osvdb_id in osvdb_ids:
msg += f'\n\tOSVDB: {osvdb_id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'IPv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
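# geoiplookup output typically looks like
# 'GeoIP Country Edition: US, United States'; split on ':' and ',' to
# extract the ISO code and country name.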
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if cached domain info exists. Defaults to False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
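# Keep only SAN/CN names that share the registered domain with the target
# but have no subdomain part, i.e. the same name under a different TLD.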
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# Netlas could not return parseable data (likely rate limit exceeded); return an error payload
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
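# For each tracked field, count endpoints sharing the same value; when more
# than DELETE_DUPLICATES_THRESHOLD do, keep the earliest-discovered endpoint
# and delete the rest (they are most likely redirects to the same page).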
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
scan_id (int, optional): ScanHistory id to attach the command record to.
activity_id (int, optional): ScanActivity id to attach the command record to.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
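# Strip ANSI colour/escape sequences so the stored output and the JSON
# parsing below are not polluted by terminal formatting.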
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# If httpx already resolved a final URL, return it as-is
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
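# Illustrative example (a sketch) of how extract_httpx_url resolves a redirect
# from a httpx JSON line; the values below are made up:
#
#   line = {'url': 'http://example.com', 'status_code': 301, 'location': 'https://example.com/home'}
#   http_url, is_redirect = extract_httpx_url(line)
#   # is_redirect is True and http_url points at the redirect target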
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each requests
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id if scan_history else None,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
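# Illustrative call (a sketch, values are hypothetical): look for login pages on
# the target via Google dorking and attach them to a scan history.
#
#   urls = get_and_save_dork_results(
#       lookup_target='example.com',
#       results_dir='/usr/src/scan_results/example',
#       type='login_pages',
#       lookup_keywords='/login/,login.html',
#       page_count=5,
#       scan_history=scan_history)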
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
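# Illustrative usage of save_vulnerability (a sketch; field values are made up
# and the related objects are assumed to exist already):
#
#   vuln, created = save_vulnerability(
#       name='Example finding',
#       severity=2,
#       http_url='https://example.com/page',
#       target_domain=domain,
#       scan_history=scan,
#       cve_ids=['CVE-2021-0000'],
#       tags=['example'])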
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
scan_history (startScan.models.ScanHistory): ScanHistory object.
domain (startScan.models.Domain): Domain object.
subdomain (starScan.models.Subdomain): Subdomain object.
results_dir (str, optional): Results directory.
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
force (bool, optional): Force crawl even if ENABLE_HTTP_CRAWL mode is on.
subscan (startScan.models.SubScan, optional): SubScan object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
# Try to get the first matching record (prevent duplicate error)
endpoints = EndPoint.objects.filter(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
if endpoints.exists():
endpoint = endpoints.first()
created = False
else:
# No existing record, create a new one
endpoint = EndPoint.objects.create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data
)
created = True
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
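# Illustrative usage of save_endpoint (a sketch): the ctx keys mirror the ones
# built in initiate_scan/initiate_subscan; the values below are hypothetical.
#
#   ctx = {'scan_history_id': 1, 'domain_id': 1, 'results_dir': '/usr/src/scan_results/example'}
#   endpoint, created = save_endpoint('https://sub.example.com/login', ctx=ctx, crawl=False)
#   if endpoint and created:
#       logger.info(f'New endpoint: {endpoint.http_url}')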
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
scan_history (startScan.models.ScanHistory): ScanHistory object.
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
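# Illustrative usage of save_subdomain (a sketch with hypothetical values):
#
#   subdomain, created = save_subdomain('api.example.com', ctx={'scan_history_id': 1, 'domain_id': 1})
#   if subdomain and created:
#       logger.info(f'Discovered {subdomain.name}')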
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
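# Illustrative usage of save_ip_address (a sketch): attach an IP to an existing
# subdomain object; the IP and subdomain below are assumptions.
#
#   ip, created = save_ip_address('93.184.216.34', subdomain=subdomain, subscan=None)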
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
scan_history (startScan.models.ScanHistory): ScanHistory instance.
domain (startScan.models.Domain): Domain instance.
results_dir (str): Results directory.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# Check the GPTVulnerabilityReport model for an existing report matching this vulnerability title and URL path
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities with the same name, provided
# their http_url contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| psyray | 7c01a46cea370e74385682ba7c28eaf4e58f5d69 | 2e089dc62f1bd64aa481750da10fa750e3aa232d | @psyray Remove it please so I can merge it. | AnonymousWP | 10 |
yogeshojha/rengine | 1,058 | fix: ffuf ANSI code processing preventing task to finish | Should
- [ ] fix #1006
Needs to be tested for potential impact on other tasks (e.g: dalfox) | null | 2023-11-21 11:54:34+00:00 | 2023-11-24 03:10:39+00:00 | web/reNgine/tasks.py | import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
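# Illustrative trigger (a sketch, IDs are hypothetical): the scan is normally
# started from the web UI / API, which enqueues this Celery task.
#
#   initiate_scan.apply_async(kwargs={
#       'scan_history_id': 1,
#       'domain_id': 1,
#       'engine_id': 1,
#       'scan_type': LIVE_SCAN})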
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom subdomain tool "{tool}" was not found among installed external tools. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
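# Illustrative note (a sketch): a custom subdomain gathering tool registered in
# InstalledExternalTool is expected to use the {TARGET} and {OUTPUT} placeholders
# in its command, e.g. (hypothetical tool name and path):
#
#   python3 {PATH}/mytool.py -d {TARGET} -o {OUTPUT}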
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
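# Illustrative mapping (a sketch) of the `ports` option handled above to the
# naabu flags built by this task; values are examples only:
#
#   ['full']        -> -p "-"
#   ['top-100']     -> -top-ports 100
#   ['top-1000']    -> -top-ports 1000
#   [80, 443, 8080] -> -p 80,443,8080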
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
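# Each wafw00f output line is expected in the form '<url> <WAF name> (<manufacturer>)'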
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
URLs above were fetched with ignore_files=False because some default URLs
redirect to files (e.g. https://example.com/login.php). For fuzzing, only
the base of the URL is needed, so strip the path and fuzz the base URL
(https://example.com in the example above).
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
if not endpoint:
continue
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
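# ffuf reports duration in nanoseconds; convert it to seconds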
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.content_type = content_type
endpoint.save()
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
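# Wrap each tool command in a shell pipeline: feed the input file on stdin, keep only URLs matching the target host and write one output file per tool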
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all tool commands in parallel, then run the cleanup chain once they complete (Celery chord)
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools can output a URL in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{urlpath}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
Entrypoint for the vulnerability scan.
All individual vulnerability scans (nuclei, dalfox, crlfuzz, s3scanner) are dispatched from here.
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the nuclei vulnerability scan for a single severity.
One task is dispatched per supplied severity so all severities run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities, excluding fields that may change between runs but are irrelevant to the comparison.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# Find all unique vulnerabilities based on path and title.
# Each unique vulnerability is sent to the GPT report function once;
# the resulting report is then matched against and saved to every matching vulnerability.
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
URLs are unfurled to keep only domain and path before being sent to the vuln
scan, and certain file extensions are ignored. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
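# unfurl keeps only scheme://domain/path and uro drops near-duplicate URLs, shrinking the nuclei input set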
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
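# Dispatch one nuclei run per severity so all severities are scanned in parallel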
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
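# s3scanner reports reachable buckets with exists == 1; only those are saved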
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
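# httpx flags: content length/type, response time, redirect location, tech detection, websocket, CNAME, ASN and CDN detection, liveness probe, random user agent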
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
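# httpx reports response time as a string like '120ms' or '1.2s'; strip the unit and normalize to seconds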
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
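# Invert NUCLEI_SEVERITY_MAP so the numeric severity stored in the DB maps back to its name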
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
output_file (str, optional): JSON output file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
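# xmltodict returns a dict for a single host and a list for several; normalize to a list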
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
if not matches:
logger.warning(f"Could not parse vulscan entry: {line}")
continue
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
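# Minimal sketch of the mapping above (illustrative only; the field values are
# hypothetical and trimmed to the keys parse_nuclei_result() actually reads):
#
#   line = {
#       'template': 'http/cves/2021/CVE-2021-0000.yaml',
#       'template-url': 'https://example.com/template',
#       'template-id': 'CVE-2021-0000',
#       'type': 'http',
#       'info': {'name': 'Example CVE', 'severity': 'high',
#                'classification': {'cvss-score': 8.1}},
#   }
#   vuln = parse_nuclei_result(line)
#   # vuln['severity'] == NUCLEI_SEVERITY_MAP['high'] and vuln['source'] == NUCLEI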
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
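# Usage sketch (illustrative; assumes Django is initialised and the dict keys
# match the model's fields — the address below is a placeholder):
#
#   data = {'address': 'admin@example.com'}
#   if not record_exists(Email, data):
#       Email.objects.create(**data)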
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
dict: Geolocation info ({'iso': ..., 'name': ...}) saved to DB, or None if the lookup failed.
"""
if validators.ipv6(host):
logger.info(f'IPv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
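# Illustrative geoiplookup output and how the parsing above splits it (the IP
# and country values are hypothetical):
#
#   out = 'GeoIP Country Edition: FR, France'
#   out.split(':')[1].strip().split(',')[0]           # -> 'FR'     (country_iso)
#   out.split(':')[1].strip().split(',')[1].strip()   # -> 'France' (country_name)
#   # Typical async call from save_ip_address(): geo_localize.delay('203.0.113.10', ip_id=42)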
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Bypass any cached DB record and re-query WHOIS. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # skip deleting the root/login pages that other endpoints commonly redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
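# Sketch of the duplicate-detection query above (illustrative): endpoints are
# grouped by each field in duplicate_removal_fields and counted; only groups
# larger than DELETE_DUPLICATES_THRESHOLD are pruned, keeping the oldest entry.
#
#   # e.g. with field_name = 'content_length':
#   #   endpoints.values_list('content_length').annotate(mc=Count('content_length'))
#   #   -> [(4523, 37), (1024, 2), ...]   # (value, count) pairs, hypothetical numbers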
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within a separate shell if True.
history_file (str): Write command + output to history file.
scan_id (int): ScanHistory id used to attach the Command record.
activity_id (int): ScanActivity id used to attach the Command record.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
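# Usage sketch (illustrative; the command and ids are hypothetical, and a
# working Django DB is assumed since a Command row is created for every call):
#
#   return_code, output = run_command(
#       'nslookup example.com',
#       shell=False,
#       history_file='/tmp/commands.txt',
#       scan_id=1,
#       activity_id=1)
#   # return_code == 0 on success; output holds the captured stdout/stderr text.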
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline() or process.stderr.readline(), b''):
line = re.sub(r'\x1b[^m]*m', '', line.decode('utf-8').strip())
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
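# Worked sketch of the redirect handling above (hypothetical httpx line with no
# final_url and a relative Location header):
#
#   line = {'url': 'http://example.com', 'status_code': 301,
#           'location': '/login', 'chain_status_codes': []}
#   # 301 is in REDIRECT_STATUS_CODES and location does not start with http(s),
#   # so the result is (sanitize_url('http://example.com/login'), True)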
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each request
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
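# Usage sketch (illustrative): a parsed result dict (e.g. from
# parse_nuclei_result()) is unpacked into save_vulnerability(); the list-valued
# keys popped above become related objects. Values are hypothetical and a
# working Django DB is assumed.
#
#   vuln, created = save_vulnerability(
#       name='Example CVE',
#       severity=3,
#       http_url='https://example.com/login',
#       cve_ids=['CVE-2021-0000'],
#       references=['https://example.com/advisory'],
#       subscan=None)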
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
ctx (dict): Task context (scan_history_id, domain_id, subscan_id, results_dir, ...).
crawl (bool, optional): Run httpx on the endpoint if True. Default: False.
**endpoint_data: Extra EndPoint fields (e.g. subdomain) set on creation.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
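# Usage sketch (illustrative): a typical call from a task, with ctx carrying the
# scan/domain ids (hypothetical here, assuming Domain id 1 is example.com).
# With crawl=True the URL is probed via http_crawl() first; with crawl=False it
# is stored as-is and must carry a scheme.
#
#   ctx = {'scan_history_id': 1, 'domain_id': 1, 'subscan_id': None}
#   endpoint, created = save_endpoint(
#       'https://admin.example.com/login',
#       ctx=ctx,
#       crawl=False,
#       is_default=False)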
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Task context (scan_history_id, subscan_id, domain_id, out_of_scope_subdomains).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Task context (domain_id, results_dir, scan_history_id, ...).
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for every vulnerability with the same name,
# provided the vulnerability URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomain scan as a URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom tool "{tool}" is not registered in InstalledExternalTool. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
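# Run the OSINT discovery and dorking tasks in parallel and block until the whole group completes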
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata plus theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
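# theHarvester host entries may come back as "<hostname>:<ip>"; keep only the hostname part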
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
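# Track open ports per host for the notification summary and the optional nmap follow-up scans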
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
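# Derive per-host XML and JSON vulnerability output paths from the task filename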
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# The URL is not necessarily an HTTP URL when running nmap (it can be any
# other vulnerable protocol). Look for an existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
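# Each wafw00f output line is expected to look like "<url> <WAF name> (<manufacturer>)"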
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
While fetching URLs above, ignore_files was set to False because some
default URLs may redirect to a file such as https://example.com/login.php.
During fuzzing, however, only the base of each URL is needed: in the example
above it is still best to fuzz the base URL https://example.com, so the
path and file portion are stripped below.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
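# ffuf reports duration in nanoseconds; convert to seconds before storing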
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')  # one "Key: value" flag per custom header
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
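# Wrap every tool command into a pipeline: feed the input URLs, filter to in-scope hosts, write per-tool output files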
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools output a URL in the format "<URL>] - <PATH>" or "<URL> - <PATH>"; add
# these to the final URL list as well.
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{urlpath}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This task serves as the entrypoint for vulnerability scanning.
All vulnerability scanners (nuclei, dalfox, crlfuzz, s3scanner, ...) are launched from here.
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This celery task will run vulnerability scan in parallel.
All severities supplied should run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities by excluding records that might change but are irrelevant.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# After the vulnerability scan, fetch GPT reports if should_fetch_gpt_report
# is enabled and an OpenAI API key exists.
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
URLs are unfurled to keep only the domain and path, and certain file
extensions are ignored before they are sent to the vulnerability scan.
Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
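# Build the list of nuclei template paths: defaults, named templates, and/or custom uploaded templates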
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
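# Launch one nuclei run per requested severity so the severities execute in parallel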
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan, fetch GPT reports if should_fetch_gpt_report
# is enabled and an OpenAI API key exists.
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after the vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += ' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += ' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += ' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
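# Illustrative call (values are placeholders): probe a couple of URLs for an existing
# scan context and read back the enriched httpx records; each result carries the raw
# httpx fields plus the '_cmd', 'final_url', 'endpoint_id', 'endpoint_created' and
# 'is_redirect' keys added above.
#   results = http_crawl(
#       urls=['https://example.com/', 'https://example.com/login'],
#       method='GET',
#       ctx={'scan_history_id': 1, 'domain_id': 1})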
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
output_file (str, optional): If set, write the parsed nmap results as JSON to this path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} as valid XML. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
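# Illustrative output: parse_nmap_results('/path/to/nmap.xml') returns a list of
# vulnerability dicts produced by the vulscan/vulners parsers below, each with an
# 'http_url' of the form '<hostname>:<port>' (plus any script-specific path).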
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
if not matches:
logger.warning(f"Unexpected entry format: {line}")
continue
vuln_id, title = matches.groups()
entry = {'id': vuln_id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for osvdb_id in osvdb_ids:
msg += f'\n\tOSVDB: {osvdb_id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
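# Example (illustrative, requires network access to cve.circl.lu; returned values
# depend on the CVE database):
#   vuln = cve_to_vuln('CVE-2014-0160', vuln_type='nmap-vulners-nse')
#   # vuln['cve_ids'] == ['CVE-2014-0160'] and vuln['severity'] is derived from the
#   # CVSS score using the <4 / <7 / <9 thresholds above.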
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
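# Example (illustrative; field names are placeholders):
#   record_exists(Subdomain, {'name': 'api.example.com', 'discovered_date': now},
#                 exclude_keys=['discovered_date'])
#   # True only if a Subdomain named 'api.example.com' already exists, ignoring the date.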
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'IPv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
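# Example (illustrative, requires the geoiplookup binary and its database):
#   geo_localize('8.8.8.8')  # -> {'iso': 'US', 'name': 'United States'} when resolved, else None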
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Re-query WHOIS even if cached domain info exists. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# Netlas could not parse the response (limit exceeded), return an error payload
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # try not to delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
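# Illustrative effect of the grouping above: when more than DELETE_DUPLICATES_THRESHOLD
# endpoints share the same value for a field (e.g. the same page_title), only the first
# discovered endpoint is kept and the later ones are deleted, except root and login pages.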
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
scan_id (int, optional): ScanHistory id to attach the Command record to.
activity_id (int, optional): ScanActivity id to attach the Command record to.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
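# Example (illustrative; paths and ids are placeholders):
#   return_code, output = run_command(
#       'whois example.com', shell=False, history_file='/tmp/commands.txt',
#       scan_id=None, activity_id=None)
#   # return_code is the process exit status, output the combined stdout/stderr text.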
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline(), ''):
if not line:
break
line = line.strip()
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
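# Example (illustrative): for an httpx record such as
#   {'url': 'http://example.com', 'status_code': 301, 'location': '/new', 'chain_status_codes': [301]}
# the helper above returns (sanitize_url('http://example.com/new'), True); when a
# 'final_url' field is present it is returned directly with is_redirect=False.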
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each request
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as out:
for email_address in emails:
save_email(email_address, scan_history)
out.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cwe_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cwe_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
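# Example (illustrative; keyword arguments are Vulnerability model fields as produced
# by the parse_* helpers above):
#   vuln, created = save_vulnerability(
#       target_domain=domain, scan_history=scan, http_url='https://example.com/x',
#       name='XSS (Cross Site Scripting)', severity=2, source=DALFOX,
#       cve_ids=[], cwe_ids=['CWE-79'], references=[], tags=['xss'])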
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict, optional): Scan context (scan_history_id, domain_id, subscan_id, ...).
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
is_default (bool, optional): If the url is a default url for its Subdomain.
**endpoint_data: Extra EndPoint fields (e.g. subdomain) passed to get_or_create.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
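# Example (illustrative): register an endpoint without probing it; with crawl=True the
# same call would go through http_crawl first and reuse the endpoint it creates:
#   endpoint, created = save_endpoint(
#       'https://sub.example.com/admin', ctx=ctx, crawl=False, subdomain=subdomain)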
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict, optional): Scan context (scan_history_id, subscan_id, domain_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context (domain_id, results_dir, ...).
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities with the same name,
# provided their URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| ocervell | b557c6b8b70ea554c232095bf2fbb213e6d3648f | 0ded32c1bee7852e7fc5daea0fb6de999097400b | ## Overly permissive regular expression range
Suspicious character range that is equivalent to \[@A-Z\].
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/167) | github-advanced-security[bot] | 11 |
yogeshojha/rengine | 1,058 | fix: ffuf ANSI code processing preventing task to finish | Should
- [ ] fix #1006
Needs to be tested for potential impact on other tasks (e.g: dalfox) | null | 2023-11-21 11:54:34+00:00 | 2023-11-24 03:10:39+00:00 | web/reNgine/tasks.py | import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
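# The report task is attached both as the chord callback and as the error
# link, so the scan status is always finalized even if a workflow task fails.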
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomains scan as an URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom tool {tool} not found in installed tools. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
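# Dispatch theHarvester / h8mail as a Celery group and poll until both
# finish, so osint_discovery only returns once all sub-lookups are done.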
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata plus theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = True if intensity == 'normal' else False
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -list {input_file}' if len(hosts) > 0 else f' -host {hosts[0]}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# URL is not necessarily an HTTP URL when running nmap (can be any
# other vulnerable protocols). Look for existing endpoint and use its
# URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
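# Each wafw00f output line has the form '<url> <WAF name> (<Manufacturer>)';
# the loop below splits out the URL, WAF name and manufacturer.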
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
When fetching URLs above, files are not ignored (ignore_files=False)
because some default URLs may redirect to a file such as
https://example.com/login.php. For fuzzing, however, only the base URL
matters: in that example we still want to ffuf https://example.com,
so the path is stripped below and /FUZZ is appended.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
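# ffuf reports response duration in nanoseconds; convert to seconds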
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
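# Illustrative example (assuming custom_header is a dict such as
# {'Cookie': 'session=abc', 'X-Api-Key': 'xyz'}): header_string becomes
# 'Cookie: session=abc;;X-Api-Key: xyz'; hakrawler and katana receive it whole,
# while gospider gets one -H flag per individual header.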
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
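# Illustrative shape of a final piped command (paths are hypothetical):
#   cat <results_dir>/input_endpoints_fetch_url.txt | gau \
#     | grep -Eo 'https?://([a-z0-9]+[.])*example.com.*' > <results_dir>/urls_gau.txt
# i.e. each tool's raw output is scoped to the target host and written to its own file.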
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools output a URL in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
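# Minimal usage sketch (hypothetical sample response, not part of the scan flow):
# shows how parse_curl_output() pulls the HTTP status code out of a raw
# curl-style response such as the one nuclei attaches to its findings.
def _example_parse_curl_output():
    sample = 'HTTP/1.1 302 Found\r\nLocation: /login\r\n\r\n'
    return parse_curl_output(sample)  # -> {'http_status': 302}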
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function serves as the entry point for the vulnerability scan.
All other vulnerability scans (nuclei, crlfuzz, dalfox, s3scanner, etc.) are launched from here.
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This Celery task runs the nuclei vulnerability scan for a single severity.
All supplied severities are run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities, excluding fields that may change between runs but are irrelevant for comparison.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# Find all unique vulnerabilities based on path and title;
# each unique vulnerability goes through the GPT function to get a report.
# Once the report is received, it is matched against the other vulnerabilities and saved.
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurls the URLs to keep only domain and path; these are sent to the vuln scan,
ignoring certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
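# Illustrative shape of the assembled command (values depend on configuration):
#   nuclei -j -irr -l <input_path> -c <concurrency> -silent -t <template> [-t <template> ...]
# with -config/-H/-proxy/-retries/-rl/-timeout/-tags appended only when configured;
# -severity is added later, per task, in nuclei_individual_severity_module().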
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# After the vulnerability scan is done, fetch GPT reports if
# should_fetch_gpt_report is enabled and an OpenAI API key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
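# Worked example (illustrative): rt == '52.7ms' keeps the digits 52.7 and is
# converted to 0.0527 seconds; rt == '1.2s' stays as 1.2 seconds.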
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
# can only send vulnerability report if team_handle exists
if len(vulnerability.target_domain.h1_team_handle) !=0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
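# Minimal usage sketch (hypothetical data, mirroring how
# nuclei_individual_severity_module() calls this helper): checks whether a
# Vulnerability with the same core fields was already stored, ignoring volatile
# fields like the raw response or curl command.
def _example_record_exists(vuln_data):
    return record_exists(
        Vulnerability,
        data=vuln_data,
        exclude_keys=['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids'])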
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
logger.info(f'Ipv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
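# Worked example (illustrative): for out == 'GeoIP Country Edition: US, United States'
# this yields country_iso == 'US' and country_name == 'United States'.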
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
force_reload_whois (bool): Whether to re-query WHOIS even if cached domain info exists. Defaults to False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Related TLDs not found for {ip_domain}\nError: {str(e)}')
related_tlds = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
email_search = EMAIL_REGEX.search(str(administrative.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
email_search = EMAIL_REGEX.search(str(technical.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # avoid deleting the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
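# Illustrative sketch (not part of the original reNgine code): how the
# de-duplication task above might be queued from a Django shell or another
# task. The ids are placeholders for existing ScanHistory / Domain rows.
def _example_remove_duplicate_endpoints():
    # Queue the task asynchronously on its dedicated Celery queue,
    # restricting the cleanup to endpoints that answered HTTP 200.
    remove_duplicate_endpoints.delay(
        scan_history_id=1,
        domain_id=1,
        filter_status=[200],
    )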
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
shell (bool): Run within a separate shell if True.
history_file (str): Write command + output to history file.
scan_id (int, optional): ScanHistory id to attach the Command record to.
activity_id (int, optional): ScanActivity id to attach the Command record to.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
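# Illustrative sketch (not part of the original reNgine code): calling the
# run_command task synchronously. It assumes a working Django/Celery context,
# since a Command row is written to the DB for every invocation.
def _example_run_command():
    return_code, output = run_command(
        'httpx -version',          # any CLI available in the container
        shell=True,                # run through a shell so PATH lookups work
        history_file='/tmp/commands_history.txt',
    )
    logger.info(f'exit={return_code} output={output}')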
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
for line in iter(lambda: process.stdout.readline() or process.stderr.readline(), b''):
line = re.sub(r'\x1b[^m]*m', '', line.decode('utf-8').strip())
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# If httpx already resolved a final URL, return it as-is
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
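# Illustrative sketch (not part of the original reNgine code): feeding
# extract_httpx_url a minimal httpx result line. With a 301 status and an
# absolute Location header, the function returns the redirect target and True.
def _example_extract_httpx_url():
    line = {
        'url': 'http://example.com',
        'status_code': 301,
        'location': 'https://example.com/',
        'chain_status_codes': [301],
    }
    final_url, is_redirect = extract_httpx_url(line)
    logger.info(f'{final_url} (redirect={is_redirect})')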
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): Delay in seconds between each request.
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
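# Illustrative sketch (not part of the original reNgine code): dorking for
# exposed configuration files on a target. scan_history must be a saved
# ScanHistory row because its id is recorded with the executed command;
# the target and paths below are placeholders.
def _example_get_and_save_dork_results(scan_history):
    return get_and_save_dork_results(
        lookup_target='example.com',
        results_dir='/tmp/example_results',
        type='config_files',
        lookup_extensions='env,yml,ini',
        page_count=2,
        scan_history=scan_history,
    )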
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
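# Illustrative sketch (not part of the original reNgine code): recording a
# custom activity for a running scan. RUNNING_TASK comes from
# reNgine.definitions and scan_history_id must reference an existing scan.
def _example_create_scan_activity(scan_history_id):
    return create_scan_activity(
        scan_history_id=scan_history_id,
        message='Custom post-processing step',
        status=RUNNING_TASK,
    )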
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cve_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cve_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
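# Illustrative sketch (not part of the original reNgine code): persisting a
# vulnerability found outside the regular nuclei/dalfox flow. The model field
# names used here (name, severity, http_url, subdomain, scan_history) are
# assumptions about the Vulnerability model and may need adjusting.
def _example_save_vulnerability(subdomain, scan_history):
    vuln, created = save_vulnerability(
        name='Exposed .git directory',        # assumed field
        severity=2,                           # assumed severity scale
        http_url=f'https://{subdomain.name}/.git/',
        subdomain=subdomain,
        scan_history=scan_history,
        tags=['exposure'],
        references=['https://owasp.org/www-project-web-security-testing-guide/'],
    )
    return vuln, created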
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
ctx (dict): Scan context (scan_history_id, domain_id, subdomain_id, subscan_id, ...).
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
is_default (bool): If the url is a default url for SubDomains.
endpoint_data: Extra EndPoint fields to set when creating the object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
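# Illustrative sketch (not part of the original reNgine code): registering an
# endpoint without probing it. The ctx ids are placeholders for existing
# ScanHistory / Domain rows; set crawl=True to let http_crawl probe the URL.
def _example_save_endpoint():
    ctx = {'scan_history_id': 1, 'domain_id': 1}
    endpoint, created = save_endpoint(
        'https://app.example.com/login',
        ctx=ctx,
        crawl=False,
    )
    return endpoint, created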
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
ctx (dict): Scan context (scan_history_id, subscan_id, domain_id, out_of_scope_subdomains, ...).
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
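# Illustrative sketch (not part of the original reNgine code): adding a
# subdomain to an existing scan. Out-of-scope names passed in ctx are
# skipped and (None, False) is returned; the ids are placeholders.
def _example_save_subdomain():
    ctx = {
        'scan_history_id': 1,
        'out_of_scope_subdomains': ['staging.example.com'],
    }
    subdomain, created = save_subdomain('api.example.com', ctx=ctx)
    return subdomain, created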
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
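# Illustrative sketch (not part of the original reNgine code): attaching an IP
# to a subdomain. Extra keyword arguments are set as attributes on the
# IpAddress row (field names are assumptions); geolocation runs asynchronously
# for newly created IPs.
def _example_save_ip_address(subdomain):
    ip, created = save_ip_address(
        '93.184.216.34',
        subdomain=subdomain,
        is_cdn=False,   # assumed IpAddress model field
    )
    return ip, created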
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
ctx (dict): Scan context carrying domain_id and results_dir.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
    continue
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
# Store this description for all vulnerabilities with the same name,
# provided their URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
import csv
import json
import os
import pprint
import subprocess
import time
import validators
import whatportis
import xmltodict
import yaml
import tldextract
import concurrent.futures
from datetime import datetime
from urllib.parse import urlparse
from api.serializers import SubdomainSerializer
from celery import chain, chord, group
from celery.result import allow_join_result
from celery.utils.log import get_task_logger
from django.db.models import Count
from dotted_dict import DottedDict
from django.utils import timezone
from pycvesearch import CVESearch
from metafinder.extractor import extract_metadata_from_google_search
from reNgine.celery import app
from reNgine.gpt import GPTVulnerabilityReportGenerator
from reNgine.celery_custom_task import RengineTask
from reNgine.common_func import *
from reNgine.definitions import *
from reNgine.settings import *
from reNgine.gpt import *
from reNgine.utilities import *
from scanEngine.models import (EngineType, InstalledExternalTool, Notification, Proxy)
from startScan.models import *
from startScan.models import EndPoint, Subdomain, Vulnerability
from targetApp.models import Domain
"""
Celery tasks.
"""
logger = get_task_logger(__name__)
#----------------------#
# Scan / Subscan tasks #
#----------------------#
@app.task(name='initiate_scan', bind=False, queue='initiate_scan_queue')
def initiate_scan(
scan_history_id,
domain_id,
engine_id=None,
scan_type=LIVE_SCAN,
results_dir=RENGINE_RESULTS,
imported_subdomains=[],
out_of_scope_subdomains=[],
url_filter=''):
"""Initiate a new scan.
Args:
scan_history_id (int): ScanHistory id.
domain_id (int): Domain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
imported_subdomains (list): Imported subdomains.
out_of_scope_subdomains (list): Out-of-scope subdomains.
url_filter (str): URL path. Default: ''
"""
# Get scan history
scan = ScanHistory.objects.get(pk=scan_history_id)
# Get scan engine
engine_id = engine_id or scan.scan_type.id # scan history engine_id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, [])
# Get domain and set last_scan_date
domain = Domain.objects.get(pk=domain_id)
domain.last_scan_date = timezone.now()
domain.save()
# Get path filter
url_filter = url_filter.rstrip('/')
# Get or create ScanHistory() object
if scan_type == LIVE_SCAN: # immediate
scan = ScanHistory.objects.get(pk=scan_history_id)
scan.scan_status = RUNNING_TASK
elif scan_type == SCHEDULED_SCAN: # scheduled
scan = ScanHistory()
scan.scan_status = INITIATED_TASK
scan.scan_type = engine
scan.celery_ids = [initiate_scan.request.id]
scan.domain = domain
scan.start_scan_date = timezone.now()
scan.tasks = engine.tasks
scan.results_dir = f'{results_dir}/{domain.name}_{scan.id}'
add_gf_patterns = gf_patterns and 'fetch_url' in engine.tasks
if add_gf_patterns:
scan.used_gf_patterns = ','.join(gf_patterns)
scan.save()
# Create scan results dir
os.makedirs(scan.results_dir)
# Build task context
ctx = {
'scan_history_id': scan_history_id,
'engine_id': engine_id,
'domain_id': domain.id,
'results_dir': scan.results_dir,
'url_filter': url_filter,
'yaml_configuration': config,
'out_of_scope_subdomains': out_of_scope_subdomains
}
ctx_str = json.dumps(ctx, indent=2)
# Send start notif
logger.warning(f'Starting scan {scan_history_id} with context:\n{ctx_str}')
send_scan_notif.delay(
scan_history_id,
subscan_id=None,
engine_id=engine_id,
status=CELERY_TASK_STATUS_MAP[scan.scan_status])
# Save imported subdomains in DB
save_imported_subdomains(imported_subdomains, ctx=ctx)
# Create initial subdomain in DB: make a copy of domain as a subdomain so
# that other tasks using subdomains can use it.
subdomain_name = domain.name
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
# If enable_http_crawl is set, create an initial root HTTP endpoint so that
# HTTP crawling can start somewhere
http_url = f'{domain.name}{url_filter}' if url_filter else domain.name
endpoint, _ = save_endpoint(
http_url,
ctx=ctx,
crawl=enable_http_crawl,
is_default=True,
subdomain=subdomain
)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build Celery tasks, crafted according to the dependency graph below:
# subdomain_discovery --> port_scan --> fetch_url --> dir_file_fuzz
# osint vulnerability_scan
# osint dalfox xss scan
# screenshot
# waf_detection
workflow = chain(
group(
subdomain_discovery.si(ctx=ctx, description='Subdomain discovery'),
osint.si(ctx=ctx, description='OS Intelligence')
),
port_scan.si(ctx=ctx, description='Port scan'),
fetch_url.si(ctx=ctx, description='Fetch URL'),
group(
dir_file_fuzz.si(ctx=ctx, description='Directories & files fuzz'),
vulnerability_scan.si(ctx=ctx, description='Vulnerability scan'),
screenshot.si(ctx=ctx, description='Screenshot'),
waf_detection.si(ctx=ctx, description='WAF detection')
)
)
# Build callback
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery chord
logger.info(f'Running Celery workflow with {len(workflow.tasks) + 1} tasks')
task = chain(workflow, callback).on_error(callback).delay()
scan.celery_ids.append(task.id)
scan.save()
return {
'success': True,
'task_id': task.id
}
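# Illustrative sketch (not part of the original reNgine code): kicking off a
# live scan from a view or the Django shell. The ids are placeholders for an
# existing ScanHistory, Domain and EngineType.
def _example_initiate_scan():
    return initiate_scan.apply_async(kwargs={
        'scan_history_id': 1,
        'domain_id': 1,
        'engine_id': 1,
        'scan_type': LIVE_SCAN,
    })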
@app.task(name='initiate_subscan', bind=False, queue='subscan_queue')
def initiate_subscan(
scan_history_id,
subdomain_id,
engine_id=None,
scan_type=None,
results_dir=RENGINE_RESULTS,
url_filter=''):
"""Initiate a new subscan.
Args:
scan_history_id (int): ScanHistory id.
subdomain_id (int): Subdomain id.
engine_id (int): Engine ID.
scan_type (int): Scan type (periodic, live).
results_dir (str): Results directory.
url_filter (str): URL path. Default: ''
"""
# Get Subdomain, Domain and ScanHistory
subdomain = Subdomain.objects.get(pk=subdomain_id)
scan = ScanHistory.objects.get(pk=subdomain.scan_history.id)
domain = Domain.objects.get(pk=subdomain.target_domain.id)
# Get EngineType
engine_id = engine_id or scan.scan_type.id
engine = EngineType.objects.get(pk=engine_id)
# Get YAML config
config = yaml.safe_load(engine.yaml_configuration)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Create scan activity of SubScan Model
subscan = SubScan(
start_scan_date=timezone.now(),
celery_ids=[initiate_subscan.request.id],
scan_history=scan,
subdomain=subdomain,
type=scan_type,
status=RUNNING_TASK,
engine=engine)
subscan.save()
# Get YAML configuration
config = yaml.safe_load(engine.yaml_configuration)
# Create results directory
results_dir = f'{scan.results_dir}/subscans/{subscan.id}'
os.makedirs(results_dir, exist_ok=True)
# Run task
method = globals().get(scan_type)
if not method:
logger.warning(f'Task {scan_type} is not supported by reNgine. Skipping')
return
scan.tasks.append(scan_type)
scan.save()
# Send start notif
send_scan_notif.delay(
scan.id,
subscan_id=subscan.id,
engine_id=engine_id,
status='RUNNING')
# Build context
ctx = {
'scan_history_id': scan.id,
'subscan_id': subscan.id,
'engine_id': engine_id,
'domain_id': domain.id,
'subdomain_id': subdomain.id,
'yaml_configuration': config,
'results_dir': results_dir,
'url_filter': url_filter
}
# Create initial endpoints in DB: find domain HTTP endpoint so that HTTP
# crawling can start somewhere
base_url = f'{subdomain.name}{url_filter}' if url_filter else subdomain.name
endpoint, _ = save_endpoint(
base_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint and endpoint.is_alive:
# TODO: add `root_endpoint` property to subdomain and simply do
# subdomain.root_endpoint = endpoint instead
logger.warning(f'Found subdomain root HTTP URL {endpoint.http_url}')
subdomain.http_url = endpoint.http_url
subdomain.http_status = endpoint.http_status
subdomain.response_time = endpoint.response_time
subdomain.page_title = endpoint.page_title
subdomain.content_type = endpoint.content_type
subdomain.content_length = endpoint.content_length
for tech in endpoint.techs.all():
subdomain.technologies.add(tech)
subdomain.save()
# Build header + callback
workflow = method.si(ctx=ctx)
callback = report.si(ctx=ctx).set(link_error=[report.si(ctx=ctx)])
# Run Celery tasks
task = chain(workflow, callback).on_error(callback).delay()
subscan.celery_ids.append(task.id)
subscan.save()
return {
'success': True,
'task_id': task.id
}
@app.task(name='report', bind=False, queue='report_queue')
def report(ctx={}, description=None):
"""Report task running after all other tasks.
Mark ScanHistory or SubScan object as completed and update with final
status, log run details and send notification.
Args:
description (str, optional): Task description shown in UI.
"""
# Get objects
subscan_id = ctx.get('subscan_id')
scan_id = ctx.get('scan_history_id')
engine_id = ctx.get('engine_id')
scan = ScanHistory.objects.filter(pk=scan_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
# Get failed tasks
tasks = ScanActivity.objects.filter(scan_of=scan).all()
if subscan:
tasks = tasks.filter(celery_id__in=subscan.celery_ids)
failed_tasks = tasks.filter(status=FAILED_TASK)
# Get task status
failed_count = failed_tasks.count()
status = SUCCESS_TASK if failed_count == 0 else FAILED_TASK
status_h = 'SUCCESS' if failed_count == 0 else 'FAILED'
# Update scan / subscan status
if subscan:
subscan.stop_scan_date = timezone.now()
subscan.status = status
subscan.save()
else:
scan.scan_status = status
scan.stop_scan_date = timezone.now()
scan.save()
# Send scan status notif
send_scan_notif.delay(
scan_history_id=scan_id,
subscan_id=subscan_id,
engine_id=engine_id,
status=status_h)
#------------------------- #
# Tracked reNgine tasks #
#--------------------------#
@app.task(name='subdomain_discovery', queue='main_scan_queue', base=RengineTask, bind=True)
def subdomain_discovery(
self,
host=None,
ctx=None,
description=None):
"""Uses a set of tools (see SUBDOMAIN_SCAN_DEFAULT_TOOLS) to scan all
subdomains associated with a domain.
Args:
host (str): Hostname to scan.
Returns:
subdomains (list): List of subdomain names.
"""
if not host:
host = self.subdomain.name if self.subdomain else self.domain.name
if self.url_filter:
logger.warning(f'Ignoring subdomains scan as an URL path filter was passed ({self.url_filter}).')
return
# Config
config = self.yaml_configuration.get(SUBDOMAIN_DISCOVERY) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL) or self.yaml_configuration.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
tools = config.get(USES_TOOLS, SUBDOMAIN_SCAN_DEFAULT_TOOLS)
default_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=True).filter(is_subdomain_gathering=True)]
custom_subdomain_tools = [tool.name.lower() for tool in InstalledExternalTool.objects.filter(is_default=False).filter(is_subdomain_gathering=True)]
send_subdomain_changes, send_interesting = False, False
notif = Notification.objects.first()
if notif:
send_subdomain_changes = notif.send_subdomain_changes_notif
send_interesting = notif.send_interesting_notif
# Gather tools to run for subdomain scan
if ALL in tools:
tools = SUBDOMAIN_SCAN_DEFAULT_TOOLS + custom_subdomain_tools
tools = [t.lower() for t in tools]
# Make exception for amass since tool name is amass, but command is amass-active/passive
default_subdomain_tools.append('amass-passive')
default_subdomain_tools.append('amass-active')
# Run tools
for tool in tools:
cmd = None
logger.info(f'Scanning subdomains for {host} with {tool}')
proxy = get_random_proxy()
if tool in default_subdomain_tools:
if tool == 'amass-passive':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
cmd = f'amass enum -passive -d {host} -o {self.results_dir}/subdomains_amass.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
elif tool == 'amass-active':
use_amass_config = config.get(USE_AMASS_CONFIG, False)
amass_wordlist_name = config.get(AMASS_WORDLIST, 'deepmagic.com-prefixes-top50000')
wordlist_path = f'/usr/src/wordlist/{amass_wordlist_name}.txt'
cmd = f'amass enum -active -d {host} -o {self.results_dir}/subdomains_amass_active.txt'
cmd += ' -config /root/.config/amass.ini' if use_amass_config else ''
cmd += f' -brute -w {wordlist_path}'
elif tool == 'sublist3r':
cmd = f'python3 /usr/src/github/Sublist3r/sublist3r.py -d {host} -t {threads} -o {self.results_dir}/subdomains_sublister.txt'
elif tool == 'subfinder':
cmd = f'subfinder -d {host} -o {self.results_dir}/subdomains_subfinder.txt'
use_subfinder_config = config.get(USE_SUBFINDER_CONFIG, False)
cmd += ' -config /root/.config/subfinder/config.yaml' if use_subfinder_config else ''
cmd += f' -proxy {proxy}' if proxy else ''
cmd += f' -timeout {timeout}' if timeout else ''
cmd += f' -t {threads}' if threads else ''
cmd += f' -silent'
elif tool == 'oneforall':
cmd = f'python3 /usr/src/github/OneForAll/oneforall.py --target {host} run'
cmd_extract = f'cut -d\',\' -f6 /usr/src/github/OneForAll/results/{host}.csv > {self.results_dir}/subdomains_oneforall.txt'
cmd_rm = f'rm -rf /usr/src/github/OneForAll/results/{host}.csv'
cmd += f' && {cmd_extract} && {cmd_rm}'
elif tool == 'ctfr':
results_file = self.results_dir + '/subdomains_ctfr.txt'
cmd = f'python3 /usr/src/github/ctfr/ctfr.py -d {host} -o {results_file}'
cmd_extract = f"cat {results_file} | sed 's/\*.//g' | tail -n +12 | uniq | sort > {results_file}"
cmd += f' && {cmd_extract}'
elif tool == 'tlsx':
results_file = self.results_dir + '/subdomains_tlsx.txt'
cmd = f'tlsx -san -cn -silent -ro -host {host}'
cmd += f" | sed -n '/^\([a-zA-Z0-9]\([-a-zA-Z0-9]*[a-zA-Z0-9]\)\?\.\)\+{host}$/p' | uniq | sort"
cmd += f' > {results_file}'
elif tool == 'netlas':
results_file = self.results_dir + '/subdomains_netlas.txt'
cmd = f'netlas search -d domain -i domain domain:"*.{host}" -f json'
netlas_key = get_netlas_key()
cmd += f' -a {netlas_key}' if netlas_key else ''
cmd_extract = f"grep -oE '([a-zA-Z0-9]([-a-zA-Z0-9]*[a-zA-Z0-9])?\.)+{host}'"
cmd += f' | {cmd_extract} > {results_file}'
elif tool in custom_subdomain_tools:
tool_query = InstalledExternalTool.objects.filter(name__icontains=tool.lower())
if not tool_query.exists():
logger.error(f'Custom subdomain tool "{tool}" not found in InstalledExternalTool. Skipping.')
continue
custom_tool = tool_query.first()
cmd = custom_tool.subdomain_gathering_command
if '{TARGET}' in cmd and '{OUTPUT}' in cmd:
cmd = cmd.replace('{TARGET}', host)
cmd = cmd.replace('{OUTPUT}', f'{self.results_dir}/subdomains_{tool}.txt')
cmd = cmd.replace('{PATH}', custom_tool.github_clone_path) if '{PATH}' in cmd else cmd
else:
logger.warning(
f'Subdomain discovery tool "{tool}" is not supported by reNgine. Skipping.')
continue
# Run tool
try:
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
except Exception as e:
logger.error(
f'Subdomain discovery tool "{tool}" raised an exception')
logger.exception(e)
# Gather all the tools' results in one single file. Write subdomains into
# separate files, and sort all subdomains.
run_command(
f'cat {self.results_dir}/subdomains_*.txt > {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {self.output_path} -o {self.output_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
with open(self.output_path) as f:
lines = f.readlines()
# Parse the output_file file and store Subdomain and EndPoint objects found
# in db.
subdomain_count = 0
subdomains = []
urls = []
for line in lines:
subdomain_name = line.strip()
valid_url = bool(validators.url(subdomain_name))
valid_domain = (
bool(validators.domain(subdomain_name)) or
bool(validators.ipv4(subdomain_name)) or
bool(validators.ipv6(subdomain_name)) or
valid_url
)
if not valid_domain:
logger.error(f'Subdomain {subdomain_name} is not a valid domain, IP or URL. Skipping.')
continue
if valid_url:
subdomain_name = urlparse(subdomain_name).netloc
if subdomain_name in self.out_of_scope_subdomains:
logger.error(f'Subdomain {subdomain_name} is out of scope. Skipping.')
continue
# Add subdomain
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain_count += 1
subdomains.append(subdomain)
urls.append(subdomain.name)
# Bulk crawl subdomains
if enable_http_crawl:
ctx['track'] = True
http_crawl(urls, ctx=ctx, is_ran_from_subdomain_scan=True)
# Find root subdomain endpoints
for subdomain in subdomains:
pass
# Send notifications
subdomains_str = '\n'.join([f'• `{subdomain.name}`' for subdomain in subdomains])
self.notify(fields={
'Subdomain count': len(subdomains),
'Subdomains': subdomains_str,
})
if send_subdomain_changes and self.scan_id and self.domain_id:
added = get_new_added_subdomain(self.scan_id, self.domain_id)
removed = get_removed_subdomain(self.scan_id, self.domain_id)
if added:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in added])
self.notify(fields={'Added subdomains': subdomains_str})
if removed:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in removed])
self.notify(fields={'Removed subdomains': subdomains_str})
if send_interesting and self.scan_id and self.domain_id:
interesting_subdomains = get_interesting_subdomains(self.scan_id, self.domain_id)
if interesting_subdomains:
subdomains_str = '\n'.join([f'• `{subdomain}`' for subdomain in interesting_subdomains])
self.notify(fields={'Interesting subdomains': subdomains_str})
return SubdomainSerializer(subdomains, many=True).data
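# Illustrative sketch (not part of the original reNgine code): the shape of
# the engine YAML consumed by subdomain_discovery above, expressed as the
# parsed dict. Key names mirror reNgine.definitions constants and the values
# are assumptions, not defaults.
def _example_subdomain_discovery_config():
    return {
        'subdomain_discovery': {
            'uses_tools': ['subfinder', 'ctfr', 'tlsx', 'amass-passive'],
            'threads': 30,
            'enable_http_crawl': True,
        }
    }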
@app.task(name='osint', queue='main_scan_queue', base=RengineTask, bind=True)
def osint(self, host=None, ctx={}, description=None):
"""Run Open-Source Intelligence tools on selected domain.
Args:
host (str): Hostname to scan.
Returns:
dict: Results from osint discovery and dorking.
"""
config = self.yaml_configuration.get(OSINT) or OSINT_DEFAULT_CONFIG
results = {}
grouped_tasks = []
if 'discover' in config:
ctx['track'] = False
# results = osint_discovery(host=host, ctx=ctx)
_task = osint_discovery.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
activity_id=self.activity_id,
results_dir=self.results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if OSINT_DORK in config or OSINT_CUSTOM_DORK in config:
_task = dorking.si(
config=config,
host=self.scan.domain.name,
scan_history_id=self.scan.id,
results_dir=self.results_dir
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('OSINT Tasks finished...')
# with open(self.output_path, 'w') as f:
# json.dump(results, f, indent=4)
#
# return results
@app.task(name='osint_discovery', queue='osint_discovery_queue', bind=False)
def osint_discovery(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run OSINT discovery.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
dict: OSINT metadata, theHarvester and h8mail results.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
osint_lookup = config.get(OSINT_DISCOVER, [])
osint_intensity = config.get(INTENSITY, 'normal')
documents_limit = config.get(OSINT_DOCUMENTS_LIMIT, 50)
results = {}
meta_info = []
emails = []
creds = []
# Get and save meta info
if 'metainfo' in osint_lookup:
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': host,
'domain': host,
'scan_id': scan_history_id,
'documents_limit': documents_limit
})
meta_info.append(save_metadata_info(meta_dict))
# TODO: disabled for now
# elif osint_intensity == 'deep':
# subdomains = Subdomain.objects
# if self.scan:
# subdomains = subdomains.filter(scan_history=self.scan)
# for subdomain in subdomains:
# meta_dict = DottedDict({
# 'osint_target': subdomain.name,
# 'domain': self.domain,
# 'scan_id': self.scan_id,
# 'documents_limit': documents_limit
# })
# meta_info.append(save_metadata_info(meta_dict))
grouped_tasks = []
if 'emails' in osint_lookup:
emails = get_and_save_emails(scan_history, activity_id, results_dir)
emails_str = '\n'.join([f'• `{email}`' for email in emails])
# self.notify(fields={'Emails': emails_str})
# ctx['track'] = False
_task = h8mail.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
if 'employees' in osint_lookup:
ctx['track'] = False
_task = theHarvester.si(
config=config,
host=host,
scan_history_id=scan_history_id,
activity_id=activity_id,
results_dir=results_dir,
ctx=ctx
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
# results['emails'] = results.get('emails', []) + emails
# results['creds'] = creds
# results['meta_info'] = meta_info
return results
@app.task(name='dorking', bind=False, queue='dorking_queue')
def dorking(config, host, scan_history_id, results_dir):
"""Run Google dorks.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
results_dir (str): Path to store scan results
Returns:
list: Dorking results for each dork ran.
"""
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
scan_history = ScanHistory.objects.get(pk=scan_history_id)
dorks = config.get(OSINT_DORK, [])
custom_dorks = config.get(OSINT_CUSTOM_DORK, [])
results = []
# custom dorking has higher priority
try:
for custom_dork in custom_dorks:
lookup_target = custom_dork.get('lookup_site')
# replace with original host if _target_
lookup_target = host if lookup_target == '_target_' else lookup_target
if 'lookup_extensions' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_extensions=custom_dork.get('lookup_extensions'),
scan_history=scan_history
)
elif 'lookup_keywords' in custom_dork:
results = get_and_save_dork_results(
lookup_target=lookup_target,
results_dir=results_dir,
type='custom_dork',
lookup_keywords=custom_dork.get('lookup_keywords'),
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
# default dorking
try:
for dork in dorks:
logger.info(f'Getting dork information for {dork}')
if dork == 'stackoverflow':
results = get_and_save_dork_results(
lookup_target='stackoverflow.com',
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'login_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/login/,login.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'admin_panels':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/admin/,admin.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'dashboard_pages':
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords='/dashboard/,dashboard.html',
page_count=5,
scan_history=scan_history
)
elif dork == 'social_media' :
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'reddit.com'
]
for site in social_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'project_management' :
project_websites = [
'trello.com',
'atlassian.net'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'code_sharing' :
project_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
for site in project_websites:
results = get_and_save_dork_results(
lookup_target=site,
results_dir=results_dir,
type=dork,
lookup_keywords=host,
scan_history=scan_history
)
elif dork == 'config_files' :
config_file_exts = [
'env',
'xml',
'conf',
'toml',
'yml',
'yaml',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(config_file_exts),
page_count=4,
scan_history=scan_history
)
elif dork == 'jenkins' :
lookup_keyword = 'Jenkins'
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=lookup_keyword,
page_count=1,
scan_history=scan_history
)
elif dork == 'wordpress_files' :
lookup_keywords = [
'/wp-content/',
'/wp-includes/'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'php_error' :
lookup_keywords = [
'PHP Parse error',
'PHP Warning',
'PHP Error'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_keywords=','.join(lookup_keywords),
page_count=5,
scan_history=scan_history
)
elif dork == 'exposed_documents' :
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(docs_file_ext),
page_count=7,
scan_history=scan_history
)
elif dork == 'db_files' :
file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
elif dork == 'git_exposed' :
file_ext = [
'git',
]
results = get_and_save_dork_results(
lookup_target=host,
results_dir=results_dir,
type=dork,
lookup_extensions=','.join(file_ext),
page_count=1,
scan_history=scan_history
)
except Exception as e:
logger.exception(e)
return results
@app.task(name='theHarvester', queue='theHarvester_queue', bind=False)
def theHarvester(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run theHarvester to get save emails, hosts, employees found in domain.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
dict: Dict of emails, employees, hosts and ips found during crawling.
"""
scan_history = ScanHistory.objects.get(pk=scan_history_id)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
output_path_json = f'{results_dir}/theHarvester.json'
theHarvester_dir = '/usr/src/github/theHarvester'
history_file = f'{results_dir}/commands.txt'
cmd = f'python3 {theHarvester_dir}/theHarvester.py -d {host} -b all -f {output_path_json}'
# Update proxies.yaml
proxy_query = Proxy.objects.all()
if proxy_query.exists():
proxy = proxy_query.first()
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(f'{theHarvester_dir}/proxies.yaml', 'w') as file:
yaml.dump(yaml_data, file)
# Run cmd
run_command(
cmd,
shell=False,
cwd=theHarvester_dir,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
# Get file location
if not os.path.isfile(output_path_json):
logger.error(f'Could not open {output_path_json}')
return {}
# Load theHarvester results
with open(output_path_json, 'r') as f:
data = json.load(f)
# Re-indent theHarvester JSON
with open(output_path_json, 'w') as f:
json.dump(data, f, indent=4)
emails = data.get('emails', [])
for email_address in emails:
email, _ = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
linkedin_people = data.get('linkedin_people', [])
for people in linkedin_people:
employee, _ = save_employee(
people,
designation='linkedin',
scan_history=scan_history)
# if employee:
# self.notify(fields={'LinkedIn people': f'• {employee.name}'})
twitter_people = data.get('twitter_people', [])
for people in twitter_people:
employee, _ = save_employee(
people,
designation='twitter',
scan_history=scan_history)
# if employee:
# self.notify(fields={'Twitter people': f'• {employee.name}'})
hosts = data.get('hosts', [])
urls = []
for host in hosts:
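# Each theHarvester host entry is typically 'hostname:ip' (assumption based on theHarvester output); keep only the hostname part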
split = tuple(host.split(':'))
http_url = split[0]
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
endpoint, _ = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain)
# if endpoint:
# urls.append(endpoint.http_url)
# self.notify(fields={'Hosts': f'• {endpoint.http_url}'})
# if enable_http_crawl:
# ctx['track'] = False
# http_crawl(urls, ctx=ctx)
# TODO: Lots of ips unrelated with our domain are found, disabling
# this for now.
# ips = data.get('ips', [])
# for ip_address in ips:
# ip, created = save_ip_address(
# ip_address,
# subscan=subscan)
# if ip:
# send_task_notif.delay(
# 'osint',
# scan_history_id=scan_history_id,
# subscan_id=subscan_id,
# severity='success',
# update_fields={'IPs': f'{ip.address}'})
return data
@app.task(name='h8mail', queue='h8mail_queue', bind=False)
def h8mail(config, host, scan_history_id, activity_id, results_dir, ctx={}):
"""Run h8mail.
Args:
config (dict): yaml_configuration
host (str): target name
scan_history_id (startScan.ScanHistory): Scan History ID
activity_id: ScanActivity ID
results_dir (str): Path to store scan results
ctx (dict): context of scan
Returns:
list[dict]: List of credentials info.
"""
logger.warning('Getting leaked credentials')
scan_history = ScanHistory.objects.get(pk=scan_history_id)
input_path = f'{results_dir}/emails.txt'
output_file = f'{results_dir}/h8mail.json'
cmd = f'h8mail -t {input_path} --json {output_file}'
history_file = f'{results_dir}/commands.txt'
run_command(
cmd,
history_file=history_file,
scan_id=scan_history_id,
activity_id=activity_id)
with open(output_file) as f:
data = json.load(f)
creds = data.get('targets', [])
# TODO: go through h8mail output and save emails to DB
for cred in creds:
logger.warning(cred)
email_address = cred['target']
pwn_num = cred['pwn_num']
pwn_data = cred.get('data', [])
email, created = save_email(email_address, scan_history=scan_history)
# if email:
# self.notify(fields={'Emails': f'• `{email.address}`'})
return creds
@app.task(name='screenshot', queue='main_scan_queue', base=RengineTask, bind=True)
def screenshot(self, ctx={}, description=None):
"""Uses EyeWitness to gather screenshot of a domain and/or url.
Args:
description (str, optional): Task description shown in UI.
"""
# Config
screenshots_path = f'{self.results_dir}/screenshots'
output_path = f'{self.results_dir}/screenshots/{self.filename}'
alive_endpoints_file = f'{self.results_dir}/endpoints_alive.txt'
config = self.yaml_configuration.get(SCREENSHOT) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT + 5)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
# If intensity is normal, grab only the root endpoints of each subdomain
strict = (intensity == 'normal')
# Get URLs to take screenshot of
get_http_urls(
is_alive=enable_http_crawl,
strict=strict,
write_filepath=alive_endpoints_file,
get_only_default_urls=True,
ctx=ctx
)
# Send start notif
notification = Notification.objects.first()
send_output_file = notification.send_scan_output_file if notification else False
# Run cmd
cmd = f'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py -f {alive_endpoints_file} -d {screenshots_path} --no-prompt'
cmd += f' --timeout {timeout}' if timeout > 0 else ''
cmd += f' --threads {threads}' if threads > 0 else ''
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(output_path):
logger.error(f'Could not load EyeWitness results at {output_path} for {self.domain.name}.')
return
# Loop through results and save objects in DB
screenshot_paths = []
with open(output_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
"Protocol,Port,Domain,Request Status,Screenshot Path, Source Path"
protocol, port, subdomain_name, status, screenshot_path, source_path = tuple(row)
logger.info(f'{protocol}:{port}:{subdomain_name}:{status}')
subdomain_query = Subdomain.objects.filter(name=subdomain_name)
if self.scan:
subdomain_query = subdomain_query.filter(scan_history=self.scan)
if status == 'Successful' and subdomain_query.exists():
subdomain = subdomain_query.first()
screenshot_paths.append(screenshot_path)
subdomain.screenshot_path = screenshot_path.replace('/usr/src/scan_results/', '')
subdomain.save()
logger.warning(f'Added screenshot for {subdomain.name} to DB')
# Remove all db, html extra files in screenshot results
run_command(
'rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(screenshots_path),
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'rm -rf {screenshots_path}/source',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Send finish notifs
screenshots_str = '• ' + '\n• '.join([f'`{path}`' for path in screenshot_paths])
self.notify(fields={'Screenshots': screenshots_str})
if send_output_file:
for path in screenshot_paths:
title = get_output_file_name(
self.scan_id,
self.subscan_id,
self.filename)
send_file_to_discord.delay(path, title)
@app.task(name='port_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def port_scan(self, hosts=[], ctx={}, description=None):
"""Run port scan.
Args:
hosts (list, optional): Hosts to run port scan on.
description (str, optional): Task description shown in UI.
Returns:
list: List of open ports (dict).
"""
input_file = f'{self.results_dir}/input_subdomains_port_scan.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(PORT_SCAN) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
exclude_ports = config.get(NAABU_EXCLUDE_PORTS, [])
exclude_subdomains = config.get(NAABU_EXCLUDE_SUBDOMAINS, False)
ports = config.get(PORTS, NAABU_DEFAULT_PORTS)
ports = [str(port) for port in ports]
rate_limit = config.get(NAABU_RATE) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
passive = config.get(NAABU_PASSIVE, False)
use_naabu_config = config.get(USE_NAABU_CONFIG, False)
exclude_ports_str = ','.join(return_iterable(exclude_ports))
# nmap args
nmap_enabled = config.get(ENABLE_NMAP, False)
nmap_cmd = config.get(NMAP_COMMAND, '')
nmap_script = config.get(NMAP_SCRIPT, '')
nmap_script = ','.join(return_iterable(nmap_script))
nmap_script_args = config.get(NMAP_SCRIPT_ARGS)
if hosts:
with open(input_file, 'w') as f:
f.write('\n'.join(hosts))
else:
hosts = get_subdomains(
write_filepath=input_file,
exclude_subdomains=exclude_subdomains,
ctx=ctx)
# Build cmd
cmd = 'naabu -json -exclude-cdn'
cmd += f' -host {hosts[0]}' if len(hosts) == 1 else f' -list {input_file}'
if 'full' in ports or 'all' in ports:
ports_str = ' -p "-"'
elif 'top-100' in ports:
ports_str = ' -top-ports 100'
elif 'top-1000' in ports:
ports_str = ' -top-ports 1000'
else:
ports_str = ','.join(ports)
ports_str = f' -p {ports_str}'
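# e.g. ports=['80', '443', '8080'] produces ' -p 80,443,8080'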
cmd += ports_str
cmd += ' -config /root/.config/naabu/config.yaml' if use_naabu_config else ''
cmd += f' -proxy "{proxy}"' if proxy else ''
cmd += f' -c {threads}' if threads else ''
cmd += f' -rate {rate_limit}' if rate_limit > 0 else ''
cmd += f' -timeout {timeout*1000}' if timeout > 0 else ''
cmd += f' -passive' if passive else ''
cmd += f' -exclude-ports {exclude_ports_str}' if exclude_ports else ''
cmd += f' -silent'
# Execute cmd and gather results
results = []
urls = []
ports_data = {}
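# ports_data maps each host to its list of open ports, e.g. {'example.com': [22, 8080]}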
for line in stream_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
port_number = line['port']
ip_address = line['ip']
host = line.get('host') or ip_address
if port_number == 0:
continue
# Grab subdomain
subdomain = Subdomain.objects.filter(
name=host,
target_domain=self.domain,
scan_history=self.scan
).first()
# Add IP DB
ip, _ = save_ip_address(ip_address, subdomain, subscan=self.subscan)
if self.subscan:
ip.ip_subscan_ids.add(self.subscan)
ip.save()
# Add endpoint to DB
# port 80 and 443 not needed as http crawl already does that.
if port_number not in [80, 443]:
http_url = f'{host}:{port_number}'
endpoint, _ = save_endpoint(
http_url,
crawl=enable_http_crawl,
ctx=ctx,
subdomain=subdomain)
if endpoint:
http_url = endpoint.http_url
urls.append(http_url)
# Add Port in DB
port_details = whatportis.get_ports(str(port_number))
service_name = port_details[0].name if len(port_details) > 0 else 'unknown'
description = port_details[0].description if len(port_details) > 0 else ''
# get or create port
port, created = Port.objects.get_or_create(
number=port_number,
service_name=service_name,
description=description
)
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port.save()
ip.ports.add(port)
ip.save()
if host in ports_data:
ports_data[host].append(port_number)
else:
ports_data[host] = [port_number]
# Send notification
logger.warning(f'Found opened port {port_number} on {ip_address} ({host})')
if len(ports_data) == 0:
logger.info('Finished running naabu port scan - No open ports found.')
if nmap_enabled:
logger.info('Nmap scans skipped')
return ports_data
# Send notification
fields_str = ''
for host, ports in ports_data.items():
ports_str = ', '.join([f'`{port}`' for port in ports])
fields_str += f'• `{host}`: {ports_str}\n'
self.notify(fields={'Ports discovered': fields_str})
# Save output to file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
logger.info('Finished running naabu port scan.')
# Process nmap results: 1 process per host
sigs = []
if nmap_enabled:
logger.warning(f'Starting nmap scans ...')
logger.warning(ports_data)
for host, port_list in ports_data.items():
ports_str = '_'.join([str(p) for p in port_list])
ctx_nmap = ctx.copy()
ctx_nmap['description'] = get_task_title(f'nmap_{host}', self.scan_id, self.subscan_id)
ctx_nmap['track'] = False
sig = nmap.si(
cmd=nmap_cmd,
ports=port_list,
host=host,
script=nmap_script,
script_args=nmap_script_args,
max_rate=rate_limit,
ctx=ctx_nmap)
sigs.append(sig)
task = group(sigs).apply_async()
with allow_join_result():
results = task.get()
return ports_data
@app.task(name='nmap', queue='main_scan_queue', base=RengineTask, bind=True)
def nmap(
self,
cmd=None,
ports=[],
host=None,
input_file=None,
script=None,
script_args=None,
max_rate=None,
ctx={},
description=None):
"""Run nmap on a host.
Args:
cmd (str, optional): Existing nmap command to complete.
ports (list, optional): List of ports to scan.
host (str, optional): Host to scan.
input_file (str, optional): Input hosts file.
script (str, optional): NSE script to run.
script_args (str, optional): NSE script args.
max_rate (int): Max rate.
description (str, optional): Task description shown in UI.
"""
notif = Notification.objects.first()
ports_str = ','.join(str(port) for port in ports)
self.filename = self.filename.replace('.txt', '.xml')
filename_vulns = self.filename.replace('.xml', '_vulns.json')
output_file = self.output_path
output_file_xml = f'{self.results_dir}/{host}_{self.filename}'
vulns_file = f'{self.results_dir}/{host}_{filename_vulns}'
logger.warning(f'Running nmap on {host}:{ports}')
# Build cmd
nmap_cmd = get_nmap_cmd(
cmd=cmd,
ports=ports_str,
script=script,
script_args=script_args,
max_rate=max_rate,
host=host,
input_file=input_file,
output_file=output_file_xml)
# Run cmd
run_command(
nmap_cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Get nmap XML results and convert to JSON
vulns = parse_nmap_results(output_file_xml, output_file)
with open(vulns_file, 'w') as f:
json.dump(vulns, f, indent=4)
# Save vulnerabilities found by nmap
vulns_str = ''
for vuln_data in vulns:
# The URL is not necessarily an HTTP URL when running nmap (it can be
# any other vulnerable protocol). Look for an existing endpoint and use
# its URL as vulnerability.http_url if it exists.
url = vuln_data['http_url']
endpoint = EndPoint.objects.filter(http_url__contains=url).first()
if endpoint:
vuln_data['http_url'] = endpoint.http_url
vuln, created = save_vulnerability(
target_domain=self.domain,
subdomain=self.subdomain,
scan_history=self.scan,
subscan=self.subscan,
endpoint=endpoint,
**vuln_data)
vulns_str += f'• {str(vuln)}\n'
if created:
logger.warning(str(vuln))
# Send only 1 notif for all vulns to reduce number of notifs
if notif and notif.send_vuln_notif and vulns_str:
logger.warning(vulns_str)
self.notify(fields={'CVEs': vulns_str})
return vulns
@app.task(name='waf_detection', queue='main_scan_queue', base=RengineTask, bind=True)
def waf_detection(self, ctx={}, description=None):
"""
Uses wafw00f to check for the presence of a WAF.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of startScan.models.Waf objects.
"""
input_path = f'{self.results_dir}/input_endpoints_waf_detection.txt'
config = self.yaml_configuration.get(WAF_DETECTION) or {}
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
# Get alive endpoints from DB
get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
cmd = f'wafw00f -i {input_path} -o {self.output_path}'
run_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
if not os.path.isfile(self.output_path):
logger.error(f'Could not find {self.output_path}')
return
with open(self.output_path) as file:
wafs = file.readlines()
for line in wafs:
line = " ".join(line.split())
splitted = line.split(' ', 1)
waf_info = splitted[1].strip()
waf_name = waf_info[:waf_info.find('(')].strip()
waf_manufacturer = waf_info[waf_info.find('(')+1:waf_info.find(')')].strip().replace('.', '')
http_url = sanitize_url(splitted[0].strip())
if not waf_name or waf_name == 'None':
continue
# Add waf to db
waf, _ = Waf.objects.get_or_create(
name=waf_name,
manufacturer=waf_manufacturer
)
# Add waf info to Subdomain in DB
subdomain = get_subdomain_from_url(http_url)
logger.info(f'Wafw00f Subdomain : {subdomain}')
subdomain_query, _ = Subdomain.objects.get_or_create(scan_history=self.scan, name=subdomain)
subdomain_query.waf.add(waf)
subdomain_query.save()
return wafs
@app.task(name='dir_file_fuzz', queue='main_scan_queue', base=RengineTask, bind=True)
def dir_file_fuzz(self, ctx={}, description=None):
"""Perform directory scan, and currently uses `ffuf` as a default tool.
Args:
description (str, optional): Task description shown in UI.
Returns:
list: List of URLs discovered.
"""
# Config
cmd = 'ffuf'
config = self.yaml_configuration.get(DIR_FILE_FUZZ) or {}
custom_header = self.yaml_configuration.get(CUSTOM_HEADER)
auto_calibration = config.get(AUTO_CALIBRATION, True)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
extensions = config.get(EXTENSIONS, DEFAULT_DIR_FILE_FUZZ_EXTENSIONS)
# prepend . on extensions
extensions = [ext if ext.startswith('.') else '.' + ext for ext in extensions]
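# e.g. ['php', '.bak'] becomes ['.php', '.bak']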
extensions_str = ','.join(map(str, extensions))
follow_redirect = config.get(FOLLOW_REDIRECT, FFUF_DEFAULT_FOLLOW_REDIRECT)
max_time = config.get(MAX_TIME, 0)
match_http_status = config.get(MATCH_HTTP_STATUS, FFUF_DEFAULT_MATCH_HTTP_STATUS)
mc = ','.join([str(c) for c in match_http_status])
recursive_level = config.get(RECURSIVE_LEVEL, FFUF_DEFAULT_RECURSIVE_LEVEL)
stop_on_error = config.get(STOP_ON_ERROR, False)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
wordlist_name = config.get(WORDLIST, 'dicc')
delay = rate_limit / (threads * 100) # calculate request pause delay from rate_limit and number of threads
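# e.g. rate_limit=150 with threads=30 gives a 0.05s pause between requests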
input_path = f'{self.results_dir}/input_dir_file_fuzz.txt'
# Get wordlist
wordlist_name = 'dicc' if wordlist_name == 'default' else wordlist_name
wordlist_path = f'/usr/src/wordlist/{wordlist_name}.txt'
# Build command
cmd += f' -w {wordlist_path}'
cmd += f' -e {extensions_str}' if extensions else ''
cmd += f' -maxtime {max_time}' if max_time > 0 else ''
cmd += f' -p {delay}' if delay > 0 else ''
cmd += f' -recursion -recursion-depth {recursive_level} ' if recursive_level > 0 else ''
cmd += f' -t {threads}' if threads and threads > 0 else ''
cmd += f' -timeout {timeout}' if timeout and timeout > 0 else ''
cmd += ' -se' if stop_on_error else ''
cmd += ' -fr' if follow_redirect else ''
cmd += ' -ac' if auto_calibration else ''
cmd += f' -mc {mc}' if mc else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
# Grab URLs to fuzz
urls = get_http_urls(
is_alive=True,
ignore_files=False,
write_filepath=input_path,
get_only_default_urls=True,
ctx=ctx
)
logger.warning(urls)
# Loop through URLs and run command
results = []
for url in urls:
'''
When fetching URLs above, ignore_files is set to False because some
default URLs may redirect to a file such as https://example.com/login.php.
During fuzzing, however, only the base of the URL is needed: in the
example above it is still a good idea to ffuf the base URL
https://example.com rather than the file path.
'''
url_parse = urlparse(url)
url = url_parse.scheme + '://' + url_parse.netloc
url += '/FUZZ' # TODO: fuzz not only URL but also POST / PUT / headers
proxy = get_random_proxy()
# Build final cmd
fcmd = cmd
fcmd += f' -x {proxy}' if proxy else ''
fcmd += f' -u {url} -json'
# Initialize DirectoryScan object
dirscan = DirectoryScan()
dirscan.scanned_date = timezone.now()
dirscan.command_line = fcmd
dirscan.save()
# Loop through results and populate EndPoint and DirectoryFile in DB
results = []
for line in stream_command(
fcmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
name = line['input'].get('FUZZ')
length = line['length']
status = line['status']
words = line['words']
url = line['url']
lines = line['lines']
content_type = line['content-type']
duration = line['duration']
if not name:
logger.error(f'FUZZ not found for "{url}"')
continue
endpoint, created = save_endpoint(url, crawl=False, ctx=ctx)
# endpoint.is_default = False
endpoint.http_status = status
endpoint.content_length = length
endpoint.response_time = duration / 1000000000
endpoint.save()
if created:
urls.append(endpoint.http_url)
endpoint.status = status
endpoint.content_type = content_type
endpoint.content_length = length
dfile, created = DirectoryFile.objects.get_or_create(
name=name,
length=length,
words=words,
lines=lines,
content_type=content_type,
url=url)
dfile.http_status = status
dfile.save()
# if created:
# logger.warning(f'Found new directory or file {url}')
dirscan.directory_files.add(dfile)
dirscan.save()
if self.subscan:
dirscan.dir_subscan_ids.add(self.subscan)
subdomain_name = get_subdomain_from_url(endpoint.http_url)
subdomain = Subdomain.objects.get(name=subdomain_name, scan_history=self.scan)
subdomain.directories.add(dirscan)
subdomain.save()
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(urls, ctx=ctx)
return results
@app.task(name='fetch_url', queue='main_scan_queue', base=RengineTask, bind=True)
def fetch_url(self, urls=[], ctx={}, description=None):
"""Fetch URLs using different tools like gauplus, gau, gospider, waybackurls ...
Args:
urls (list): List of URLs to start from.
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/input_endpoints_fetch_url.txt'
proxy = get_random_proxy()
# Config
config = self.yaml_configuration.get(FETCH_URL) or {}
should_remove_duplicate_endpoints = config.get(REMOVE_DUPLICATE_ENDPOINTS, True)
duplicate_removal_fields = config.get(DUPLICATE_REMOVAL_FIELDS, ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS)
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
gf_patterns = config.get(GF_PATTERNS, DEFAULT_GF_PATTERNS)
ignore_file_extension = config.get(IGNORE_FILE_EXTENSION, DEFAULT_IGNORE_FILE_EXTENSIONS)
tools = config.get(USES_TOOLS, ENDPOINT_SCAN_DEFAULT_TOOLS)
threads = config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
domain_request_headers = self.domain.request_headers if self.domain else None
custom_header = domain_request_headers or self.yaml_configuration.get(CUSTOM_HEADER)
exclude_subdomains = config.get(EXCLUDED_SUBDOMAINS, False)
# Get URLs to scan and save to input file
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_alive=enable_http_crawl,
write_filepath=input_path,
exclude_subdomains=exclude_subdomains,
get_only_default_urls=True,
ctx=ctx
)
# Domain regex
host = self.domain.name if self.domain else urlparse(urls[0]).netloc
host_regex = f"\'https?://([a-z0-9]+[.])*{host}.*\'"
# Tools cmds
cmd_map = {
'gau': f'gau',
'gauplus': f'gauplus -random-agent',
'hakrawler': 'hakrawler -subs -u',
'waybackurls': 'waybackurls',
'gospider': f'gospider -S {input_path} --js -d 2 --sitemap --robots -w -r',
'katana': f'katana -list {input_path} -silent -jc -kf all -d 3 -fs rdn',
}
if proxy:
cmd_map['gau'] += f' --proxy "{proxy}"'
cmd_map['gauplus'] += f' -p "{proxy}"'
cmd_map['gospider'] += f' -p {proxy}'
cmd_map['hakrawler'] += f' -proxy {proxy}'
cmd_map['katana'] += f' -proxy {proxy}'
if threads > 0:
cmd_map['gau'] += f' --threads {threads}'
cmd_map['gauplus'] += f' -t {threads}'
cmd_map['gospider'] += f' -t {threads}'
cmd_map['katana'] += f' -c {threads}'
if custom_header:
header_string = ';;'.join([
f'{key}: {value}' for key, value in custom_header.items()
])
cmd_map['hakrawler'] += f' -h {header_string}'
cmd_map['katana'] += f' -H {header_string}'
header_flags = header_string.split(';;')
for flag in header_flags:
cmd_map['gospider'] += f' -H {flag}'
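# Pipe the input URLs through each tool, keep only URLs matching the target host regex, and write a per-tool urls_<tool>.txt file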
cat_input = f'cat {input_path}'
grep_output = f'grep -Eo {host_regex}'
cmd_map = {
tool: f'{cat_input} | {cmd} | {grep_output} > {self.results_dir}/urls_{tool}.txt'
for tool, cmd in cmd_map.items()
}
tasks = group(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for tool, cmd in cmd_map.items()
if tool in tools
)
# Cleanup task
sort_output = [
f'cat {self.results_dir}/urls_* > {self.output_path}',
f'cat {input_path} >> {self.output_path}',
f'sort -u {self.output_path} -o {self.output_path}',
]
if ignore_file_extension:
ignore_exts = '|'.join(ignore_file_extension)
grep_ext_filtered_output = [
f'cat {self.output_path} | grep -Eiv "\\.({ignore_exts}).*" > {self.results_dir}/urls_filtered.txt',
f'mv {self.results_dir}/urls_filtered.txt {self.output_path}'
]
sort_output.extend(grep_ext_filtered_output)
cleanup = chain(
run_command.si(
cmd,
shell=True,
scan_id=self.scan_id,
activity_id=self.activity_id)
for cmd in sort_output
)
# Run all commands
task = chord(tasks)(cleanup)
with allow_join_result():
task.get()
# Store all the endpoints and run httpx
with open(self.output_path) as f:
discovered_urls = f.readlines()
self.notify(fields={'Discovered URLs': len(discovered_urls)})
# Some tools can return a URL in the format <URL>] - <PATH> or <URL> - <PATH>; add them
# to the final URL list
all_urls = []
for url in discovered_urls:
url = url.strip()
urlpath = None
base_url = None
if '] ' in url: # found JS scraped endpoint e.g from gospider
split = tuple(url.split('] '))
if not len(split) == 2:
logger.warning(f'URL format not recognized for "{url}". Skipping.')
continue
base_url, urlpath = split
urlpath = urlpath.lstrip('- ')
elif ' - ' in url: # found JS scraped endpoint e.g from gospider
base_url, urlpath = tuple(url.split(' - '))
if base_url and urlpath:
subdomain = urlparse(base_url)
url = f'{subdomain.scheme}://{subdomain.netloc}{self.url_filter}'
if not validators.url(url):
logger.warning(f'Invalid URL "{url}". Skipping.')
continue
if url not in all_urls:
all_urls.append(url)
# Filter out URLs if a path filter was passed
if self.url_filter:
all_urls = [url for url in all_urls if self.url_filter in url]
# Write result to output path
with open(self.output_path, 'w') as f:
f.write('\n'.join(all_urls))
logger.warning(f'Found {len(all_urls)} usable URLs')
# Crawl discovered URLs
if enable_http_crawl:
ctx['track'] = False
http_crawl(
all_urls,
ctx=ctx,
should_remove_duplicate_endpoints=should_remove_duplicate_endpoints,
duplicate_removal_fields=duplicate_removal_fields
)
#-------------------#
# GF PATTERNS MATCH #
#-------------------#
# Combine old gf patterns with new ones
if gf_patterns:
self.scan.used_gf_patterns = ','.join(gf_patterns)
self.scan.save()
# Run gf patterns on saved endpoints
# TODO: refactor to Celery task
for gf_pattern in gf_patterns:
# TODO: js var is causing issues, removing for now
if gf_pattern == 'jsvar':
logger.info('Ignoring jsvar as it is causing issues.')
continue
# Run gf on current pattern
logger.warning(f'Running gf on pattern "{gf_pattern}"')
gf_output_file = f'{self.results_dir}/gf_patterns_{gf_pattern}.txt'
cmd = f'cat {self.output_path} | gf {gf_pattern} | grep -Eo {host_regex} >> {gf_output_file}'
run_command(
cmd,
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
# Check output file
if not os.path.exists(gf_output_file):
logger.error(f'Could not find GF output file {gf_output_file}. Skipping GF pattern "{gf_pattern}"')
continue
# Read output file line by line and
with open(gf_output_file, 'r') as f:
lines = f.readlines()
# Add endpoints / subdomains to DB
for url in lines:
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
endpoint, created = save_endpoint(
http_url,
crawl=False,
subdomain=subdomain,
ctx=ctx)
if not endpoint:
continue
earlier_pattern = None
if not created:
earlier_pattern = endpoint.matched_gf_patterns
pattern = f'{earlier_pattern},{gf_pattern}' if earlier_pattern else gf_pattern
endpoint.matched_gf_patterns = pattern
endpoint.save()
return all_urls
def parse_curl_output(response):
# TODO: Enrich from other cURL fields.
CURL_REGEX_HTTP_STATUS = r'HTTP\/(?:(?:\d\.?)+)\s(\d+)\s(?:\w+)'
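# e.g. a 'HTTP/1.1 200 OK' response line yields http_status 200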
http_status = 0
if response:
failed = False
regex = re.compile(CURL_REGEX_HTTP_STATUS, re.MULTILINE)
try:
http_status = int(regex.findall(response)[0])
except (KeyError, TypeError, IndexError):
pass
return {
'http_status': http_status,
}
@app.task(name='vulnerability_scan', queue='main_scan_queue', bind=True, base=RengineTask)
def vulnerability_scan(self, urls=[], ctx={}, description=None):
"""
This function will serve as an entrypoint to vulnerability scan.
All other vulnerability scan will be run from here including nuclei, crlfuzz, etc
"""
logger.info('Running Vulnerability Scan Queue')
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_run_nuclei = config.get(RUN_NUCLEI, True)
should_run_crlfuzz = config.get(RUN_CRLFUZZ, False)
should_run_dalfox = config.get(RUN_DALFOX, False)
should_run_s3scanner = config.get(RUN_S3SCANNER, True)
grouped_tasks = []
if should_run_nuclei:
_task = nuclei_scan.si(
urls=urls,
ctx=ctx,
description=f'Nuclei Scan'
)
grouped_tasks.append(_task)
if should_run_crlfuzz:
_task = crlfuzz_scan.si(
urls=urls,
ctx=ctx,
description=f'CRLFuzz Scan'
)
grouped_tasks.append(_task)
if should_run_dalfox:
_task = dalfox_xss_scan.si(
urls=urls,
ctx=ctx,
description=f'Dalfox XSS Scan'
)
grouped_tasks.append(_task)
if should_run_s3scanner:
_task = s3scanner.si(
ctx=ctx,
description=f'Misconfigured S3 Buckets Scanner'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan completed...')
# return results
return None
@app.task(name='nuclei_individual_severity_module', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_individual_severity_module(self, cmd, severity, enable_http_crawl, should_fetch_gpt_report, ctx={}, description=None):
'''
This celery task will run vulnerability scan in parallel.
All severities supplied should run in parallel as grouped tasks.
'''
results = []
logger.info(f'Running vulnerability scan with severity: {severity}')
cmd += f' -severity {severity}'
# Send start notification
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
results.append(line)
# Gather nuclei results
vuln_data = parse_nuclei_result(line)
# Get corresponding subdomain
http_url = sanitize_url(line.get('matched-at'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
# Look for duplicate vulnerabilities, excluding fields that may change between runs but are irrelevant for comparison.
object_comparison_exclude = ['response', 'curl_command', 'tags', 'references', 'cve_ids', 'cwe_ids']
# Add subdomain and target domain to the duplicate check
vuln_data_copy = vuln_data.copy()
vuln_data_copy['subdomain'] = subdomain
vuln_data_copy['target_domain'] = self.domain
# Check if record exists, if exists do not save it
if record_exists(Vulnerability, data=vuln_data_copy, exclude_keys=object_comparison_exclude):
logger.warning(f'Nuclei vulnerability of severity {severity} : {vuln_data_copy["name"]} for {subdomain_name} already exists')
continue
# Get or create EndPoint object
response = line.get('response')
httpx_crawl = False if response else enable_http_crawl # avoid yet another httpx crawl
endpoint, _ = save_endpoint(
http_url,
crawl=httpx_crawl,
subdomain=subdomain,
ctx=ctx)
if endpoint:
http_url = endpoint.http_url
if not httpx_crawl:
output = parse_curl_output(response)
endpoint.http_status = output['http_status']
endpoint.save()
# Get or create Vulnerability object
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
subdomain=subdomain,
**vuln_data)
if not vuln:
continue
# Print vuln
severity = line['info'].get('severity', 'unknown')
logger.warning(str(vuln))
# Send notification for all vulnerabilities except info
url = vuln.http_url or vuln.subdomain
send_vuln = (
notif and
notif.send_vuln_notif and
vuln and
severity in ['low', 'medium', 'high', 'critical'])
if send_vuln:
fields = {
'Severity': f'**{severity.upper()}**',
'URL': http_url,
'Subdomain': subdomain_name,
'Name': vuln.name,
'Type': vuln.type,
'Description': vuln.description,
'Template': vuln.template_url,
'Tags': vuln.get_tags_str(),
'CVEs': vuln.get_cve_str(),
'CWEs': vuln.get_cwe_str(),
'References': vuln.get_refs_str()
}
severity_map = {
'low': 'info',
'medium': 'warning',
'high': 'error',
'critical': 'error'
}
self.notify(
f'vulnerability_scan_#{vuln.id}',
severity_map[severity],
fields,
add_meta_info=False)
# Send report to hackerone
hackerone_query = Hackerone.objects.all()
send_report = (
hackerone_query.exists() and
severity not in ('info', 'low') and
vuln.target_domain.h1_team_handle
)
if send_report:
hackerone = hackerone_query.first()
if hackerone.send_critical and severity == 'critical':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_high and severity == 'high':
send_hackerone_report.delay(vuln.id)
elif hackerone.send_medium and severity == 'medium':
send_hackerone_report.delay(vuln.id)
# Write results to JSON file
with open(self.output_path, 'w') as f:
json.dump(results, f, indent=4)
# Send finish notif
if send_status:
vulns = Vulnerability.objects.filter(scan_history__id=self.scan_id)
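# Severity integers used below: 0=info, 1=low, 2=medium, 3=high, 4=critical, -1=unknown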
info_count = vulns.filter(severity=0).count()
low_count = vulns.filter(severity=1).count()
medium_count = vulns.filter(severity=2).count()
high_count = vulns.filter(severity=3).count()
critical_count = vulns.filter(severity=4).count()
unknown_count = vulns.filter(severity=-1).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count + unknown_count
fields = {
'Total': vulnerability_count,
'Critical': critical_count,
'High': high_count,
'Medium': medium_count,
'Low': low_count,
'Info': info_count,
'Unknown': unknown_count
}
self.notify(fields=fields)
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=NUCLEI
).exclude(
severity=0
)
# find all unique vulnerabilities based on path and title
# all unique vulnerability will go thru gpt function and get report
# once report is got, it will be matched with other vulnerabilities and saved
unique_vulns = set()
for vuln in vulns:
unique_vulns.add((vuln.name, vuln.get_path()))
unique_vulns = list(unique_vulns)
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in unique_vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return None
def get_vulnerability_gpt_report(vuln):
title = vuln[0]
path = vuln[1]
logger.info(f'Getting GPT Report for {title}, PATH: {path}')
# check if in db already exists
stored = GPTVulnerabilityReport.objects.filter(
url_path=path
).filter(
title=title
).first()
if stored:
response = {
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
report = GPTVulnerabilityReportGenerator()
vulnerability_description = get_gpt_vuln_input_description(
title,
path
)
response = report.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
title,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
for vuln in Vulnerability.objects.filter(name=title, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
def add_gpt_description_db(title, path, description, impact, remediation, references):
gpt_report = GPTVulnerabilityReport()
gpt_report.url_path = path
gpt_report.title = title
gpt_report.description = description
gpt_report.impact = impact
gpt_report.remediation = remediation
gpt_report.save()
for url in references:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
gpt_report.references.add(ref)
gpt_report.save()
@app.task(name='nuclei_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def nuclei_scan(self, urls=[], ctx={}, description=None):
"""HTTP vulnerability scan using Nuclei
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
Notes:
Unfurl the URLs to keep only domain and path; these are sent to the vuln
scan, ignoring certain file extensions. Thanks: https://github.com/six2dez/reconftw
"""
# Config
config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
input_path = f'{self.results_dir}/input_endpoints_vulnerability_scan.txt'
enable_http_crawl = config.get(ENABLE_HTTP_CRAWL, DEFAULT_ENABLE_HTTP_CRAWL)
concurrency = config.get(NUCLEI_CONCURRENCY) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
intensity = config.get(INTENSITY) or self.yaml_configuration.get(INTENSITY, DEFAULT_SCAN_INTENSITY)
rate_limit = config.get(RATE_LIMIT) or self.yaml_configuration.get(RATE_LIMIT, DEFAULT_RATE_LIMIT)
retries = config.get(RETRIES) or self.yaml_configuration.get(RETRIES, DEFAULT_RETRIES)
timeout = config.get(TIMEOUT) or self.yaml_configuration.get(TIMEOUT, DEFAULT_HTTP_TIMEOUT)
custom_header = config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
should_fetch_gpt_report = config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
proxy = get_random_proxy()
nuclei_specific_config = config.get('nuclei', {})
use_nuclei_conf = nuclei_specific_config.get(USE_CONFIG, False)
severities = nuclei_specific_config.get(NUCLEI_SEVERITY, NUCLEI_DEFAULT_SEVERITIES)
tags = nuclei_specific_config.get(NUCLEI_TAGS, [])
tags = ','.join(tags)
nuclei_templates = nuclei_specific_config.get(NUCLEI_TEMPLATE)
custom_nuclei_templates = nuclei_specific_config.get(NUCLEI_CUSTOM_TEMPLATE)
# severities_str = ','.join(severities)
# Get alive endpoints
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=enable_http_crawl,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
if intensity == 'normal': # reduce number of endpoints to scan
unfurl_filter = f'{self.results_dir}/urls_unfurled.txt'
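# 'unfurl format %s://%d%p' keeps only scheme://domain/path for each URL, and uro then removes similar/duplicate URLs, shrinking the target list for 'normal' intensity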
run_command(
f"cat {input_path} | unfurl -u format %s://%d%p |uro > {unfurl_filter}",
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
run_command(
f'sort -u {unfurl_filter} -o {unfurl_filter}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
input_path = unfurl_filter
# Build templates
# logger.info('Updating Nuclei templates ...')
run_command(
'nuclei -update-templates',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
templates = []
if not (nuclei_templates or custom_nuclei_templates):
templates.append(NUCLEI_DEFAULT_TEMPLATES_PATH)
if nuclei_templates:
if ALL in nuclei_templates:
template = NUCLEI_DEFAULT_TEMPLATES_PATH
templates.append(template)
else:
templates.extend(nuclei_templates)
if custom_nuclei_templates:
custom_nuclei_template_paths = [f'{str(elem)}.yaml' for elem in custom_nuclei_templates]
templates.extend(custom_nuclei_template_paths)
# Build CMD
cmd = 'nuclei -j'
cmd += ' -config /root/.config/nuclei/config.yaml' if use_nuclei_conf else ''
cmd += f' -irr'
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -l {input_path}'
cmd += f' -c {str(concurrency)}' if concurrency > 0 else ''
cmd += f' -proxy {proxy} ' if proxy else ''
cmd += f' -retries {retries}' if retries > 0 else ''
cmd += f' -rl {rate_limit}' if rate_limit > 0 else ''
# cmd += f' -severity {severities_str}'
cmd += f' -timeout {str(timeout)}' if timeout and timeout > 0 else ''
cmd += f' -tags {tags}' if tags else ''
cmd += f' -silent'
for tpl in templates:
cmd += f' -t {tpl}'
grouped_tasks = []
custom_ctx = ctx
for severity in severities:
custom_ctx['track'] = True
_task = nuclei_individual_severity_module.si(
cmd,
severity,
enable_http_crawl,
should_fetch_gpt_report,
ctx=custom_ctx,
description=f'Nuclei Scan with severity {severity}'
)
grouped_tasks.append(_task)
celery_group = group(grouped_tasks)
job = celery_group.apply_async()
while not job.ready():
# wait for all jobs to complete
time.sleep(5)
logger.info('Vulnerability scan with all severities completed...')
return None
@app.task(name='dalfox_xss_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def dalfox_xss_scan(self, urls=[], ctx={}, description=None):
"""XSS Scan using dalfox
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
dalfox_config = vuln_config.get(DALFOX) or {}
custom_header = dalfox_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
is_waf_evasion = dalfox_config.get(WAF_EVASION, False)
blind_xss_server = dalfox_config.get(BLIND_XSS_SERVER)
user_agent = dalfox_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
timeout = dalfox_config.get(TIMEOUT)
delay = dalfox_config.get(DELAY)
threads = dalfox_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_dalfox_xss.txt'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=False,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'dalfox --silence --no-color --no-spinner'
cmd += f' --only-poc r '
cmd += f' --ignore-return 302,404,403'
cmd += f' --skip-bav'
cmd += f' file {input_path}'
cmd += f' --proxy {proxy}' if proxy else ''
cmd += f' --waf-evasion' if is_waf_evasion else ''
cmd += f' -b {blind_xss_server}' if blind_xss_server else ''
cmd += f' --delay {delay}' if delay else ''
cmd += f' --timeout {timeout}' if timeout else ''
cmd += f' --user-agent {user_agent}' if user_agent else ''
cmd += f' --header {custom_header}' if custom_header else ''
cmd += f' --worker {threads}' if threads else ''
cmd += f' --format json'
results = []
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id,
trunc_char=','
):
if not isinstance(line, dict):
continue
results.append(line)
vuln_data = parse_dalfox_result(line)
http_url = sanitize_url(line.get('data'))
subdomain_name = get_subdomain_from_url(http_url)
# TODO: this should be get only
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting Dalfox Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=DALFOX
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='crlfuzz_scan', queue='main_scan_queue', base=RengineTask, bind=True)
def crlfuzz_scan(self, urls=[], ctx={}, description=None):
"""CRLF Fuzzing with CRLFuzz
Args:
urls (list, optional): If passed, filter on those URLs.
description (str, optional): Task description shown in UI.
"""
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
should_fetch_gpt_report = vuln_config.get(FETCH_GPT_REPORT, DEFAULT_GET_GPT_REPORT)
custom_header = vuln_config.get(CUSTOM_HEADER) or self.yaml_configuration.get(CUSTOM_HEADER)
proxy = get_random_proxy()
user_agent = vuln_config.get(USER_AGENT) or self.yaml_configuration.get(USER_AGENT)
threads = vuln_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
input_path = f'{self.results_dir}/input_endpoints_crlf.txt'
output_path = f'{self.results_dir}/{self.filename}'
if urls:
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
get_http_urls(
is_alive=False,
ignore_files=True,
write_filepath=input_path,
ctx=ctx
)
notif = Notification.objects.first()
send_status = notif.send_scan_status_notif if notif else False
# command builder
cmd = 'crlfuzz -s'
cmd += f' -l {input_path}'
cmd += f' -x {proxy}' if proxy else ''
cmd += f' --H {custom_header}' if custom_header else ''
cmd += f' -o {output_path}'
run_command(
cmd,
shell=False,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id
)
if not os.path.isfile(output_path):
logger.info('No Results from CRLFuzz')
return
crlfs = []
results = []
with open(output_path, 'r') as file:
crlfs = file.readlines()
for crlf in crlfs:
url = crlf.strip()
vuln_data = parse_crlfuzz_result(url)
http_url = sanitize_url(url)
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = Subdomain.objects.get_or_create(
name=subdomain_name,
scan_history=self.scan,
target_domain=self.domain
)
endpoint, _ = save_endpoint(
http_url,
crawl=True,
subdomain=subdomain,
ctx=ctx
)
if endpoint:
http_url = endpoint.http_url
endpoint.save()
vuln, _ = save_vulnerability(
target_domain=self.domain,
http_url=http_url,
scan_history=self.scan,
subscan=self.subscan,
**vuln_data
)
if not vuln:
continue
# after vulnerability scan is done, we need to run gpt if
# should_fetch_gpt_report and openapi key exists
if should_fetch_gpt_report and OpenAiAPIKey.objects.all().first():
logger.info('Getting CRLFuzz Vulnerability GPT Report')
vulns = Vulnerability.objects.filter(
scan_history__id=self.scan_id
).filter(
source=CRLFUZZ
).exclude(
severity=0
)
_vulns = []
for vuln in vulns:
_vulns.append((vuln.name, vuln.http_url))
with concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_THREADS) as executor:
future_to_gpt = {executor.submit(get_vulnerability_gpt_report, vuln): vuln for vuln in _vulns}
# Wait for all tasks to complete
for future in concurrent.futures.as_completed(future_to_gpt):
gpt = future_to_gpt[future]
try:
future.result()
except Exception as e:
logger.error(f"Exception for Vulnerability {vuln}: {e}")
return results
@app.task(name='s3scanner', queue='main_scan_queue', base=RengineTask, bind=True)
def s3scanner(self, ctx={}, description=None):
"""Bucket Scanner
Args:
ctx (dict): Context
description (str, optional): Task description shown in UI.
"""
input_path = f'{self.results_dir}/#{self.scan_id}_subdomain_discovery.txt'
vuln_config = self.yaml_configuration.get(VULNERABILITY_SCAN) or {}
s3_config = vuln_config.get(S3SCANNER) or {}
threads = s3_config.get(THREADS) or self.yaml_configuration.get(THREADS, DEFAULT_THREADS)
providers = s3_config.get(PROVIDERS, S3SCANNER_DEFAULT_PROVIDERS)
scan_history = ScanHistory.objects.filter(pk=self.scan_id).first()
for provider in providers:
cmd = f's3scanner -bucket-file {input_path} -enumerate -provider {provider} -threads {threads} -json'
for line in stream_command(
cmd,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not isinstance(line, dict):
continue
if line.get('bucket', {}).get('exists', 0) == 1:
result = parse_s3scanner_result(line)
s3bucket, created = S3Bucket.objects.get_or_create(**result)
scan_history.buckets.add(s3bucket)
logger.info(f"s3 bucket added {result['provider']}-{result['name']}-{result['region']}")
@app.task(name='http_crawl', queue='main_scan_queue', base=RengineTask, bind=True)
def http_crawl(
self,
urls=[],
method=None,
recrawl=False,
ctx={},
track=True,
description=None,
is_ran_from_subdomain_scan=False,
should_remove_duplicate_endpoints=True,
duplicate_removal_fields=[]):
"""Use httpx to query HTTP URLs for important info like page titles, http
status, etc...
Args:
urls (list, optional): A set of URLs to check. Overrides default
behavior which queries all endpoints related to this scan.
method (str): HTTP method to use (GET, HEAD, POST, PUT, DELETE).
recrawl (bool, optional): If False, filter out URLs that have already
been crawled.
should_remove_duplicate_endpoints (bool): Whether to remove duplicate endpoints
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
Returns:
list: httpx results.
"""
logger.info('Initiating HTTP Crawl')
if is_ran_from_subdomain_scan:
logger.info('Running From Subdomain Scan...')
cmd = '/go/bin/httpx'
cfg = self.yaml_configuration.get(HTTP_CRAWL) or {}
custom_header = cfg.get(CUSTOM_HEADER, '')
threads = cfg.get(THREADS, DEFAULT_THREADS)
follow_redirect = cfg.get(FOLLOW_REDIRECT, True)
self.output_path = None
input_path = f'{self.results_dir}/httpx_input.txt'
history_file = f'{self.results_dir}/commands.txt'
if urls: # direct passing URLs to check
if self.url_filter:
urls = [u for u in urls if self.url_filter in u]
with open(input_path, 'w') as f:
f.write('\n'.join(urls))
else:
urls = get_http_urls(
is_uncrawled=not recrawl,
write_filepath=input_path,
ctx=ctx
)
# logger.debug(urls)
# If no URLs found, skip it
if not urls:
return
# Re-adjust thread number if few URLs to avoid spinning up a monster to
# kill a fly.
if len(urls) < threads:
threads = len(urls)
# Get random proxy
proxy = get_random_proxy()
# Run command
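# httpx flags gather extra metadata: content length (-cl), content type (-ct), response time (-rt), redirect location, tech detection (-td), websocket/CNAME/ASN/CDN info and probe status, using a random user agent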
cmd += f' -cl -ct -rt -location -td -websocket -cname -asn -cdn -probe -random-agent'
cmd += f' -t {threads}' if threads > 0 else ''
cmd += f' --http-proxy {proxy}' if proxy else ''
cmd += f' -H "{custom_header}"' if custom_header else ''
cmd += f' -json'
cmd += f' -u {urls[0]}' if len(urls) == 1 else f' -l {input_path}'
cmd += f' -x {method}' if method else ''
cmd += f' -silent'
if follow_redirect:
cmd += ' -fr'
results = []
endpoint_ids = []
for line in stream_command(
cmd,
history_file=history_file,
scan_id=self.scan_id,
activity_id=self.activity_id):
if not line or not isinstance(line, dict):
continue
logger.debug(line)
# No response from endpoint
if line.get('failed', False):
continue
# Parse httpx output
host = line.get('host', '')
content_length = line.get('content_length', 0)
http_status = line.get('status_code')
http_url, is_redirect = extract_httpx_url(line)
page_title = line.get('title')
webserver = line.get('webserver')
cdn = line.get('cdn', False)
rt = line.get('time')
techs = line.get('tech', [])
cname = line.get('cname', '')
content_type = line.get('content_type', '')
response_time = -1
if rt:
response_time = float(''.join(ch for ch in rt if not ch.isalpha()))
if rt[-2:] == 'ms':
response_time = response_time / 1000
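# e.g. rt='250ms' -> 0.25 (seconds), rt='1.2s' -> 1.2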
# Create Subdomain object in DB
subdomain_name = get_subdomain_from_url(http_url)
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
if not subdomain:
continue
# Save default HTTP URL to endpoint object in DB
endpoint, created = save_endpoint(
http_url,
crawl=False,
ctx=ctx,
subdomain=subdomain,
is_default=is_ran_from_subdomain_scan
)
if not endpoint:
continue
endpoint.http_status = http_status
endpoint.page_title = page_title
endpoint.content_length = content_length
endpoint.webserver = webserver
endpoint.response_time = response_time
endpoint.content_type = content_type
endpoint.save()
endpoint_str = f'{http_url} [{http_status}] `{content_length}B` `{webserver}` `{rt}`'
logger.warning(endpoint_str)
if endpoint and endpoint.is_alive and endpoint.http_status != 403:
self.notify(
fields={'Alive endpoint': f'• {endpoint_str}'},
add_meta_info=False)
# Add endpoint to results
line['_cmd'] = cmd
line['final_url'] = http_url
line['endpoint_id'] = endpoint.id
line['endpoint_created'] = created
line['is_redirect'] = is_redirect
results.append(line)
# Add technology objects to DB
for technology in techs:
tech, _ = Technology.objects.get_or_create(name=technology)
endpoint.techs.add(tech)
if is_ran_from_subdomain_scan:
subdomain.technologies.add(tech)
subdomain.save()
endpoint.save()
techs_str = ', '.join([f'`{tech}`' for tech in techs])
self.notify(
fields={'Technologies': techs_str},
add_meta_info=False)
# Add IP objects for 'a' records to DB
a_records = line.get('a', [])
for ip_address in a_records:
ip, created = save_ip_address(
ip_address,
subdomain,
subscan=self.subscan,
cdn=cdn)
ips_str = '• ' + '\n• '.join([f'`{ip}`' for ip in a_records])
self.notify(
fields={'IPs': ips_str},
add_meta_info=False)
# Add IP object for host in DB
if host:
ip, created = save_ip_address(
host,
subdomain,
subscan=self.subscan,
cdn=cdn)
self.notify(
fields={'IPs': f'• `{ip.address}`'},
add_meta_info=False)
# Save subdomain and endpoint
if is_ran_from_subdomain_scan:
# save subdomain stuffs
subdomain.http_url = http_url
subdomain.http_status = http_status
subdomain.page_title = page_title
subdomain.content_length = content_length
subdomain.webserver = webserver
subdomain.response_time = response_time
subdomain.content_type = content_type
subdomain.cname = ','.join(cname)
subdomain.is_cdn = cdn
if cdn:
subdomain.cdn_name = line.get('cdn_name')
subdomain.save()
endpoint.save()
endpoint_ids.append(endpoint.id)
if should_remove_duplicate_endpoints:
# Remove 'fake' alive endpoints that are just redirects to the same page
remove_duplicate_endpoints(
self.scan_id,
self.domain_id,
self.subdomain_id,
filter_ids=endpoint_ids
)
# Remove input file
run_command(
f'rm {input_path}',
shell=True,
history_file=self.history_file,
scan_id=self.scan_id,
activity_id=self.activity_id)
return results
#---------------------#
# Notifications tasks #
#---------------------#
@app.task(name='send_notif', bind=False, queue='send_notif_queue')
def send_notif(
message,
scan_history_id=None,
subscan_id=None,
**options):
if 'title' not in options:
message = enrich_notification(message, scan_history_id, subscan_id)
send_discord_message(message, **options)
send_slack_message(message)
send_telegram_message(message)
@app.task(name='send_scan_notif', bind=False, queue='send_scan_notif_queue')
def send_scan_notif(
scan_history_id,
subscan_id=None,
engine_id=None,
status='RUNNING'):
"""Send scan status notification. Works for scan or a subscan if subscan_id
is passed.
Args:
scan_history_id (int, optional): ScanHistory id.
subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Get domain, engine, scan_history objects
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
tasks = ScanActivity.objects.filter(scan_of=scan) if scan else 0
# Build notif options
url = get_scan_url(scan_history_id, subscan_id)
title = get_scan_title(scan_history_id, subscan_id)
fields = get_scan_fields(engine, scan, subscan, status, tasks)
severity = None
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
if status:
severity = STATUS_TO_SEVERITIES.get(status)
opts = {
'title': title,
'url': url,
'fields': fields,
'severity': severity
}
logger.warning(f'Sending notification "{title}" [{severity}]')
# Send notification
send_notif(
msg,
scan_history_id,
subscan_id,
**opts)
@app.task(name='send_task_notif', bind=False, queue='send_task_notif_queue')
def send_task_notif(
task_name,
status=None,
result=None,
output_path=None,
traceback=None,
scan_history_id=None,
engine_id=None,
subscan_id=None,
severity=None,
add_meta_info=True,
update_fields={}):
"""Send task status notification.
Args:
task_name (str): Task name.
status (str, optional): Task status.
result (str, optional): Task result.
output_path (str, optional): Task output path.
traceback (str, optional): Task traceback.
scan_history_id (int, optional): ScanHistory id.
		subscan_id (int, optional): SubScan id.
engine_id (int, optional): EngineType id.
severity (str, optional): Severity (will be mapped to notif colors)
		add_meta_info (bool, optional): Whether to add scan / subscan info to notif.
update_fields (dict, optional): Fields key / value to update.
"""
# Skip send if notification settings are not configured
notif = Notification.objects.first()
if not (notif and notif.send_scan_status_notif):
return
# Build fields
url = None
fields = {}
if add_meta_info:
engine = EngineType.objects.filter(pk=engine_id).first()
scan = ScanHistory.objects.filter(pk=scan_history_id).first()
subscan = SubScan.objects.filter(pk=subscan_id).first()
url = get_scan_url(scan_history_id)
if status:
fields['Status'] = f'**{status}**'
if engine:
fields['Engine'] = engine.engine_name
if scan:
fields['Scan ID'] = f'[#{scan.id}]({url})'
if subscan:
url = get_scan_url(scan_history_id, subscan_id)
fields['Subscan ID'] = f'[#{subscan.id}]({url})'
title = get_task_title(task_name, scan_history_id, subscan_id)
if status:
severity = STATUS_TO_SEVERITIES.get(status)
msg = f'{title} {status}\n'
msg += '\n🡆 '.join(f'**{k}:** {v}' for k, v in fields.items())
# Add fields to update
for k, v in update_fields.items():
fields[k] = v
# Add traceback to notif
if traceback and notif.send_scan_tracebacks:
fields['Traceback'] = f'```\n{traceback}\n```'
# Add files to notif
files = []
attach_file = (
notif.send_scan_output_file and
output_path and
result and
not traceback
)
if attach_file:
output_title = output_path.split('/')[-1]
files = [(output_path, output_title)]
# Send notif
opts = {
'title': title,
'url': url,
'files': files,
'severity': severity,
'fields': fields,
'fields_append': update_fields.keys()
}
send_notif(
msg,
scan_history_id=scan_history_id,
subscan_id=subscan_id,
**opts)
@app.task(name='send_file_to_discord', bind=False, queue='send_file_to_discord_queue')
def send_file_to_discord(file_path, title=None):
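	# Upload a local file to the configured Discord webhook; returns False when
	# Discord notifications or the webhook URL are not configured.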
notif = Notification.objects.first()
do_send = notif and notif.send_to_discord and notif.discord_hook_url
if not do_send:
return False
webhook = DiscordWebhook(
url=notif.discord_hook_url,
rate_limit_retry=True,
username=title or "reNgine Discord Plugin"
)
with open(file_path, "rb") as f:
head, tail = os.path.split(file_path)
webhook.add_file(file=f.read(), filename=tail)
webhook.execute()
@app.task(name='send_hackerone_report', bind=False, queue='send_hackerone_report_queue')
def send_hackerone_report(vulnerability_id):
"""Send HackerOne vulnerability report.
Args:
vulnerability_id (int): Vulnerability id.
Returns:
int: HTTP response status code.
"""
vulnerability = Vulnerability.objects.get(id=vulnerability_id)
severities = {v: k for k,v in NUCLEI_SEVERITY_MAP.items()}
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
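	# Reports are filed through the HackerOne Hackers API using basic auth
	# (username / API token); a 201 response is treated as a successfully created report.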
# can only send vulnerability report if team_handle exists
	if len(vulnerability.target_domain.h1_team_handle) != 0:
hackerone_query = Hackerone.objects.all()
if hackerone_query.exists():
hackerone = Hackerone.objects.first()
severity_value = severities[vulnerability.severity]
tpl = hackerone.report_template
# Replace syntax of report template with actual content
tpl = tpl.replace('{vulnerability_name}', vulnerability.name)
tpl = tpl.replace('{vulnerable_url}', vulnerability.http_url)
tpl = tpl.replace('{vulnerability_severity}', severity_value)
tpl = tpl.replace('{vulnerability_description}', vulnerability.description if vulnerability.description else '')
tpl = tpl.replace('{vulnerability_extracted_results}', vulnerability.extracted_results if vulnerability.extracted_results else '')
tpl = tpl.replace('{vulnerability_reference}', vulnerability.reference if vulnerability.reference else '')
data = {
"data": {
"type": "report",
"attributes": {
"team_handle": vulnerability.target_domain.h1_team_handle,
"title": '{} found in {}'.format(vulnerability.name, vulnerability.http_url),
"vulnerability_information": tpl,
"severity_rating": severity_value,
"impact": "More information about the impact and vulnerability can be found here: \n" + vulnerability.reference if vulnerability.reference else "NA",
}
}
}
r = requests.post(
'https://api.hackerone.com/v1/hackers/reports',
auth=(hackerone.username, hackerone.api_key),
json=data,
headers=headers
)
response = r.json()
status_code = r.status_code
if status_code == 201:
vulnerability.hackerone_report_id = response['data']["id"]
vulnerability.open_status = False
vulnerability.save()
return status_code
else:
logger.error('No team handle found.')
status_code = 111
return status_code
#-------------#
# Utils tasks #
#-------------#
@app.task(name='parse_nmap_results', bind=False, queue='parse_nmap_results_queue')
def parse_nmap_results(xml_file, output_file=None):
"""Parse results from nmap output file.
Args:
xml_file (str): nmap XML report file path.
Returns:
list: List of vulnerabilities found from nmap results.
"""
with open(xml_file, encoding='utf8') as f:
content = f.read()
try:
nmap_results = xmltodict.parse(content) # parse XML to dict
except Exception as e:
logger.exception(e)
logger.error(f'Cannot parse {xml_file} to valid JSON. Skipping.')
return []
# Write JSON to output file
if output_file:
with open(output_file, 'w') as f:
json.dump(nmap_results, f, indent=4)
logger.warning(json.dumps(nmap_results, indent=4))
hosts = (
nmap_results
.get('nmaprun', {})
.get('host', {})
)
all_vulns = []
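	# xmltodict returns a dict for a single <host> element and a list for several;
	# normalise to a list so the loop below handles both cases.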
if isinstance(hosts, dict):
hosts = [hosts]
for host in hosts:
# Grab hostname / IP from output
hostnames_dict = host.get('hostnames', {})
if hostnames_dict:
# Ensure that hostnames['hostname'] is a list for consistency
hostnames_list = hostnames_dict['hostname'] if isinstance(hostnames_dict['hostname'], list) else [hostnames_dict['hostname']]
# Extract all the @name values from the list of dictionaries
hostnames = [entry.get('@name') for entry in hostnames_list]
else:
hostnames = [host.get('address')['@addr']]
# Iterate over each hostname for each port
for hostname in hostnames:
# Grab ports from output
ports = host.get('ports', {}).get('port', [])
if isinstance(ports, dict):
ports = [ports]
for port in ports:
url_vulns = []
port_number = port['@portid']
url = sanitize_url(f'{hostname}:{port_number}')
logger.info(f'Parsing nmap results for {hostname}:{port_number} ...')
if not port_number or not port_number.isdigit():
continue
port_protocol = port['@protocol']
scripts = port.get('script', [])
if isinstance(scripts, dict):
scripts = [scripts]
for script in scripts:
script_id = script['@id']
script_output = script['@output']
script_output_table = script.get('table', [])
logger.debug(f'Ran nmap script "{script_id}" on {port_number}/{port_protocol}:\n{script_output}\n')
if script_id == 'vulscan':
vulns = parse_nmap_vulscan_output(script_output)
url_vulns.extend(vulns)
elif script_id == 'vulners':
vulns = parse_nmap_vulners_output(script_output)
url_vulns.extend(vulns)
# elif script_id == 'http-server-header':
# TODO: nmap can help find technologies as well using the http-server-header script
# regex = r'(\w+)/([\d.]+)\s?(?:\((\w+)\))?'
# tech_name, tech_version, tech_os = re.match(regex, test_string).groups()
# Technology.objects.get_or_create(...)
# elif script_id == 'http_csrf':
# vulns = parse_nmap_http_csrf_output(script_output)
# url_vulns.extend(vulns)
else:
logger.warning(f'Script output parsing for script "{script_id}" is not supported yet.')
# Add URL to vuln
for vuln in url_vulns:
# TODO: This should extend to any URL, not just HTTP
vuln['http_url'] = url
if 'http_path' in vuln:
vuln['http_url'] += vuln['http_path']
all_vulns.append(vuln)
return all_vulns
def parse_nmap_http_csrf_output(script_output):
pass
def parse_nmap_vulscan_output(script_output):
"""Parse nmap vulscan script output.
Args:
script_output (str): Vulscan script output.
Returns:
list: List of Vulnerability dicts.
"""
data = {}
vulns = []
provider_name = ''
# Sort all vulns found by provider so that we can match each provider with
# a function that pulls from its API to get more info about the
# vulnerability.
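	# Typical vulscan output (illustrative):
	#   MITRE CVE - https://cve.mitre.org:
	#   [CVE-XXXX-YYYY] <finding title>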
for line in script_output.splitlines():
if not line:
continue
if not line.startswith('['): # provider line
if "No findings" in line:
logger.info(f"No findings: {line}")
continue
elif ' - ' in line:
provider_name, provider_url = tuple(line.split(' - '))
data[provider_name] = {'url': provider_url.rstrip(':'), 'entries': []}
continue
else:
# Log a warning
logger.warning(f"Unexpected line format: {line}")
continue
reg = r'\[(.*)\] (.*)'
matches = re.match(reg, line)
id, title = matches.groups()
entry = {'id': id, 'title': title}
data[provider_name]['entries'].append(entry)
logger.warning('Vulscan parsed output:')
logger.warning(pprint.pformat(data))
for provider_name in data:
if provider_name == 'Exploit-DB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'IBM X-Force':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'MITRE CVE':
logger.error(f'Provider {provider_name} is not supported YET.')
for entry in data[provider_name]['entries']:
cve_id = entry['id']
vuln = cve_to_vuln(cve_id)
vulns.append(vuln)
elif provider_name == 'OSVDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'OpenVAS (Nessus)':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'SecurityFocus':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
elif provider_name == 'VulDB':
logger.error(f'Provider {provider_name} is not supported YET.')
pass
else:
logger.error(f'Provider {provider_name} is not supported.')
return vulns
def parse_nmap_vulners_output(script_output, url=''):
"""Parse nmap vulners script output.
TODO: Rework this as it's currently matching all CVEs no matter the
confidence.
Args:
script_output (str): Script output.
Returns:
list: List of found vulnerabilities.
"""
vulns = []
# Check for CVE in script output
CVE_REGEX = re.compile(r'.*(CVE-\d\d\d\d-\d+).*')
matches = CVE_REGEX.findall(script_output)
matches = list(dict.fromkeys(matches))
for cve_id in matches: # get CVE info
vuln = cve_to_vuln(cve_id, vuln_type='nmap-vulners-nse')
if vuln:
vulns.append(vuln)
return vulns
def cve_to_vuln(cve_id, vuln_type=''):
"""Search for a CVE using CVESearch and return Vulnerability data.
Args:
cve_id (str): CVE ID in the form CVE-*
Returns:
dict: Vulnerability dict.
"""
cve_info = CVESearch('https://cve.circl.lu').id(cve_id)
if not cve_info:
logger.error(f'Could not fetch CVE info for cve {cve_id}. Skipping.')
return None
vuln_cve_id = cve_info['id']
vuln_name = vuln_cve_id
vuln_description = cve_info.get('summary', 'none').replace(vuln_cve_id, '').strip()
try:
vuln_cvss = float(cve_info.get('cvss', -1))
except (ValueError, TypeError):
vuln_cvss = -1
vuln_cwe_id = cve_info.get('cwe', '')
exploit_ids = cve_info.get('refmap', {}).get('exploit-db', [])
osvdb_ids = cve_info.get('refmap', {}).get('osvdb', [])
references = cve_info.get('references', [])
capec_objects = cve_info.get('capec', [])
# Parse ovals for a better vuln name / type
ovals = cve_info.get('oval', [])
if ovals:
vuln_name = ovals[0]['title']
vuln_type = ovals[0]['family']
# Set vulnerability severity based on CVSS score
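	# Bands roughly follow CVSS v3 ratings (<4 low, <7 medium, <9 high, otherwise critical);
	# a missing score (-1) therefore ends up as 'low'.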
vuln_severity = 'info'
if vuln_cvss < 4:
vuln_severity = 'low'
elif vuln_cvss < 7:
vuln_severity = 'medium'
elif vuln_cvss < 9:
vuln_severity = 'high'
else:
vuln_severity = 'critical'
# Build console warning message
msg = f'{vuln_name} | {vuln_severity.upper()} | {vuln_cve_id} | {vuln_cwe_id} | {vuln_cvss}'
for id in osvdb_ids:
msg += f'\n\tOSVDB: {id}'
for exploit_id in exploit_ids:
msg += f'\n\tEXPLOITDB: {exploit_id}'
logger.warning(msg)
vuln = {
'name': vuln_name,
'type': vuln_type,
'severity': NUCLEI_SEVERITY_MAP[vuln_severity],
'description': vuln_description,
'cvss_score': vuln_cvss,
'references': references,
'cve_ids': [vuln_cve_id],
'cwe_ids': [vuln_cwe_id]
}
return vuln
def parse_s3scanner_result(line):
'''
Parses and returns s3Scanner Data
'''
bucket = line['bucket']
return {
'name': bucket['name'],
'region': bucket['region'],
'provider': bucket['provider'],
'owner_display_name': bucket['owner_display_name'],
'owner_id': bucket['owner_id'],
'perm_auth_users_read': bucket['perm_auth_users_read'],
'perm_auth_users_write': bucket['perm_auth_users_write'],
'perm_auth_users_read_acl': bucket['perm_auth_users_read_acl'],
'perm_auth_users_write_acl': bucket['perm_auth_users_write_acl'],
'perm_auth_users_full_control': bucket['perm_auth_users_full_control'],
'perm_all_users_read': bucket['perm_all_users_read'],
'perm_all_users_write': bucket['perm_all_users_write'],
'perm_all_users_read_acl': bucket['perm_all_users_read_acl'],
'perm_all_users_write_acl': bucket['perm_all_users_write_acl'],
'perm_all_users_full_control': bucket['perm_all_users_full_control'],
'num_objects': bucket['num_objects'],
'size': bucket['bucket_size']
}
def parse_nuclei_result(line):
"""Parse results from nuclei JSON output.
Args:
line (dict): Nuclei JSON line output.
Returns:
dict: Vulnerability data.
"""
return {
'name': line['info'].get('name', ''),
'type': line['type'],
'severity': NUCLEI_SEVERITY_MAP[line['info'].get('severity', 'unknown')],
'template': line['template'],
'template_url': line['template-url'],
'template_id': line['template-id'],
'description': line['info'].get('description', ''),
'matcher_name': line.get('matcher-name', ''),
'curl_command': line.get('curl-command'),
'request': line.get('request'),
'response': line.get('response'),
'extracted_results': line.get('extracted-results', []),
'cvss_metrics': line['info'].get('classification', {}).get('cvss-metrics', ''),
'cvss_score': line['info'].get('classification', {}).get('cvss-score'),
'cve_ids': line['info'].get('classification', {}).get('cve_id', []) or [],
'cwe_ids': line['info'].get('classification', {}).get('cwe_id', []) or [],
'references': line['info'].get('reference', []) or [],
'tags': line['info'].get('tags', []),
'source': NUCLEI,
}
def parse_dalfox_result(line):
"""Parse results from nuclei JSON output.
Args:
		line (dict): Dalfox JSON line output.
Returns:
dict: Vulnerability data.
"""
description = ''
description += f" Evidence: {line.get('evidence')} <br>" if line.get('evidence') else ''
description += f" Message: {line.get('message')} <br>" if line.get('message') else ''
description += f" Payload: {line.get('message_str')} <br>" if line.get('message_str') else ''
description += f" Vulnerable Parameter: {line.get('param')} <br>" if line.get('param') else ''
return {
'name': 'XSS (Cross Site Scripting)',
'type': 'XSS',
'severity': DALFOX_SEVERITY_MAP[line.get('severity', 'unknown')],
'description': description,
'source': DALFOX,
'cwe_ids': [line.get('cwe')]
}
def parse_crlfuzz_result(url):
"""Parse CRLF results
Args:
url (str): CRLF Vulnerable URL
Returns:
dict: Vulnerability data.
"""
return {
'name': 'CRLF (HTTP Response Splitting)',
'type': 'CRLF',
'severity': 2,
'description': 'A CRLF (HTTP Response Splitting) vulnerability has been discovered.',
'source': CRLFUZZ,
}
def record_exists(model, data, exclude_keys=[]):
"""
Check if a record already exists in the database based on the given data.
Args:
model (django.db.models.Model): The Django model to check against.
data (dict): Data dictionary containing fields and values.
exclude_keys (list): List of keys to exclude from the lookup.
Returns:
bool: True if the record exists, False otherwise.
"""
# Extract the keys that will be used for the lookup
lookup_fields = {key: data[key] for key in data if key not in exclude_keys}
# Return True if a record exists based on the lookup fields, False otherwise
return model.objects.filter(**lookup_fields).exists()
@app.task(name='geo_localize', bind=False, queue='geo_localize_queue')
def geo_localize(host, ip_id=None):
"""Uses geoiplookup to find location associated with host.
Args:
host (str): Hostname.
ip_id (int): IpAddress object id.
Returns:
startScan.models.CountryISO: CountryISO object from DB or None.
"""
if validators.ipv6(host):
		logger.info(f'IPv6 "{host}" is not supported by geoiplookup. Skipping.')
return None
cmd = f'geoiplookup {host}'
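	# geoiplookup typically prints something like 'GeoIP Country Edition: US, United States';
	# the ISO code and country name are parsed from the text after the first colon.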
_, out = run_command(cmd)
if 'IP Address not found' not in out and "can't resolve hostname" not in out:
country_iso = out.split(':')[1].strip().split(',')[0]
country_name = out.split(':')[1].strip().split(',')[1].strip()
geo_object, _ = CountryISO.objects.get_or_create(
iso=country_iso,
name=country_name
)
geo_json = {
'iso': country_iso,
'name': country_name
}
if ip_id:
ip = IpAddress.objects.get(pk=ip_id)
ip.geo_iso = geo_object
ip.save()
return geo_json
logger.info(f'Geo IP lookup failed for host "{host}"')
return None
@app.task(name='query_whois', bind=False, queue='query_whois_queue')
def query_whois(ip_domain, force_reload_whois=False):
"""Query WHOIS information for an IP or a domain name.
Args:
ip_domain (str): IP address or domain name.
		force_reload_whois (bool): Whether to re-query WHOIS even if cached data exists. Default: False.
Returns:
dict: WHOIS information.
"""
if not force_reload_whois and Domain.objects.filter(name=ip_domain).exists() and Domain.objects.get(name=ip_domain).domain_info:
domain = Domain.objects.get(name=ip_domain)
if not domain.insert_date:
domain.insert_date = timezone.now()
domain.save()
domain_info_db = domain.domain_info
domain_info = DottedDict(
dnssec=domain_info_db.dnssec,
created=domain_info_db.created,
updated=domain_info_db.updated,
expires=domain_info_db.expires,
geolocation_iso=domain_info_db.geolocation_iso,
status=[status['name'] for status in DomainWhoisStatusSerializer(domain_info_db.status, many=True).data],
whois_server=domain_info_db.whois_server,
ns_records=[ns['name'] for ns in NameServersSerializer(domain_info_db.name_servers, many=True).data],
registrar_name=domain_info_db.registrar.name,
registrar_phone=domain_info_db.registrar.phone,
registrar_email=domain_info_db.registrar.email,
registrar_url=domain_info_db.registrar.url,
registrant_name=domain_info_db.registrant.name,
registrant_id=domain_info_db.registrant.id_str,
registrant_organization=domain_info_db.registrant.organization,
registrant_city=domain_info_db.registrant.city,
registrant_state=domain_info_db.registrant.state,
registrant_zip_code=domain_info_db.registrant.zip_code,
registrant_country=domain_info_db.registrant.country,
registrant_phone=domain_info_db.registrant.phone,
registrant_fax=domain_info_db.registrant.fax,
registrant_email=domain_info_db.registrant.email,
registrant_address=domain_info_db.registrant.address,
admin_name=domain_info_db.admin.name,
admin_id=domain_info_db.admin.id_str,
admin_organization=domain_info_db.admin.organization,
admin_city=domain_info_db.admin.city,
admin_state=domain_info_db.admin.state,
admin_zip_code=domain_info_db.admin.zip_code,
admin_country=domain_info_db.admin.country,
admin_phone=domain_info_db.admin.phone,
admin_fax=domain_info_db.admin.fax,
admin_email=domain_info_db.admin.email,
admin_address=domain_info_db.admin.address,
tech_name=domain_info_db.tech.name,
tech_id=domain_info_db.tech.id_str,
tech_organization=domain_info_db.tech.organization,
tech_city=domain_info_db.tech.city,
tech_state=domain_info_db.tech.state,
tech_zip_code=domain_info_db.tech.zip_code,
tech_country=domain_info_db.tech.country,
tech_phone=domain_info_db.tech.phone,
tech_fax=domain_info_db.tech.fax,
tech_email=domain_info_db.tech.email,
tech_address=domain_info_db.tech.address,
related_tlds=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_tlds, many=True).data],
related_domains=[domain['name'] for domain in RelatedDomainSerializer(domain_info_db.related_domains, many=True).data],
historical_ips=[ip for ip in HistoricalIPSerializer(domain_info_db.historical_ips, many=True).data],
)
if domain_info_db.dns_records:
a_records = []
txt_records = []
mx_records = []
dns_records = [{'name': dns['name'], 'type': dns['type']} for dns in DomainDNSRecordSerializer(domain_info_db.dns_records, many=True).data]
for dns in dns_records:
if dns['type'] == 'a':
a_records.append(dns['name'])
elif dns['type'] == 'txt':
txt_records.append(dns['name'])
elif dns['type'] == 'mx':
mx_records.append(dns['name'])
domain_info.a_records = a_records
domain_info.txt_records = txt_records
domain_info.mx_records = mx_records
else:
logger.info(f'Domain info for "{ip_domain}" not found in DB, querying whois')
domain_info = DottedDict()
# find domain historical ip
try:
historical_ips = get_domain_historical_ip_address(ip_domain)
domain_info.historical_ips = historical_ips
except Exception as e:
logger.error(f'HistoricalIP for {ip_domain} not found!\nError: {str(e)}')
historical_ips = []
# find associated domains using ip_domain
try:
related_domains = reverse_whois(ip_domain.split('.')[0])
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
			related_domains = []
# find related tlds using TLSx
try:
related_tlds = []
output_path = '/tmp/ip_domain_tlsx.txt'
tlsx_command = f'tlsx -san -cn -silent -ro -host {ip_domain} -o {output_path}'
run_command(
tlsx_command,
shell=True,
)
tlsx_output = []
with open(output_path) as f:
tlsx_output = f.readlines()
tldextract_target = tldextract.extract(ip_domain)
for doms in tlsx_output:
doms = doms.strip()
tldextract_res = tldextract.extract(doms)
if ip_domain != doms and tldextract_res.domain == tldextract_target.domain and tldextract_res.subdomain == '':
related_tlds.append(doms)
related_tlds = list(set(related_tlds))
domain_info.related_tlds = related_tlds
except Exception as e:
logger.error(f'Associated domain not found for {ip_domain}\nError: {str(e)}')
similar_domains = []
related_domains_list = []
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_related)
related_domains_list.append(_domain['name'])
for _domain in related_tlds:
domain_related = RelatedDomain.objects.get_or_create(
name=_domain,
)[0]
db_domain_info.related_tlds.add(domain_related)
for _ip in historical_ips:
historical_ip = HistoricalIP.objects.get_or_create(
ip=_ip['ip'],
owner=_ip['owner'],
location=_ip['location'],
last_seen=_ip['last_seen'],
)[0]
db_domain_info.historical_ips.add(historical_ip)
domain.domain_info = db_domain_info
domain.save()
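		# No cached WHOIS data: query netlas.io for host / WHOIS / DNS details as JSON.
		# The API key is appended when configured; a 'Failed to parse response data' reply
		# is treated as the Netlas limit being exceeded (see below).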
command = f'netlas host {ip_domain} -f json'
# check if netlas key is provided
netlas_key = get_netlas_key()
command += f' -a {netlas_key}' if netlas_key else ''
result = subprocess.check_output(command.split()).decode('utf-8')
if 'Failed to parse response data' in result:
# do fallback
return {
'status': False,
'ip_domain': ip_domain,
'result': "Netlas limit exceeded.",
'message': 'Netlas limit exceeded.'
}
try:
result = json.loads(result)
logger.info(result)
whois = result.get('whois') if result.get('whois') else {}
domain_info.created = whois.get('created_date')
domain_info.expires = whois.get('expiration_date')
domain_info.updated = whois.get('updated_date')
domain_info.whois_server = whois.get('whois_server')
if 'registrant' in whois:
registrant = whois.get('registrant')
domain_info.registrant_name = registrant.get('name')
domain_info.registrant_country = registrant.get('country')
domain_info.registrant_id = registrant.get('id')
domain_info.registrant_state = registrant.get('province')
domain_info.registrant_city = registrant.get('city')
domain_info.registrant_phone = registrant.get('phone')
domain_info.registrant_address = registrant.get('street')
domain_info.registrant_organization = registrant.get('organization')
domain_info.registrant_fax = registrant.get('fax')
domain_info.registrant_zip_code = registrant.get('postal_code')
email_search = EMAIL_REGEX.search(str(registrant.get('email')))
field_content = email_search.group(0) if email_search else None
domain_info.registrant_email = field_content
if 'administrative' in whois:
administrative = whois.get('administrative')
domain_info.admin_name = administrative.get('name')
domain_info.admin_country = administrative.get('country')
domain_info.admin_id = administrative.get('id')
domain_info.admin_state = administrative.get('province')
domain_info.admin_city = administrative.get('city')
domain_info.admin_phone = administrative.get('phone')
domain_info.admin_address = administrative.get('street')
domain_info.admin_organization = administrative.get('organization')
domain_info.admin_fax = administrative.get('fax')
domain_info.admin_zip_code = administrative.get('postal_code')
mail_search = EMAIL_REGEX.search(str(administrative.get('email')))
				field_content = mail_search.group(0) if mail_search else None
domain_info.admin_email = field_content
if 'technical' in whois:
technical = whois.get('technical')
domain_info.tech_name = technical.get('name')
domain_info.tech_country = technical.get('country')
domain_info.tech_state = technical.get('province')
domain_info.tech_id = technical.get('id')
domain_info.tech_city = technical.get('city')
domain_info.tech_phone = technical.get('phone')
domain_info.tech_address = technical.get('street')
domain_info.tech_organization = technical.get('organization')
domain_info.tech_fax = technical.get('fax')
domain_info.tech_zip_code = technical.get('postal_code')
mail_search = EMAIL_REGEX.search(str(technical.get('email')))
				field_content = mail_search.group(0) if mail_search else None
domain_info.tech_email = field_content
if 'dns' in result:
dns = result.get('dns')
domain_info.mx_records = dns.get('mx')
domain_info.txt_records = dns.get('txt')
domain_info.a_records = dns.get('a')
domain_info.ns_records = whois.get('name_servers')
domain_info.dnssec = True if whois.get('dnssec') else False
domain_info.status = whois.get('status')
if 'registrar' in whois:
registrar = whois.get('registrar')
domain_info.registrar_name = registrar.get('name')
domain_info.registrar_email = registrar.get('email')
domain_info.registrar_phone = registrar.get('phone')
domain_info.registrar_url = registrar.get('url')
# find associated domains if registrant email is found
related_domains = reverse_whois(domain_info.get('registrant_email')) if domain_info.get('registrant_email') else []
for _domain in related_domains:
related_domains_list.append(_domain['name'])
# remove duplicate domains from related domains list
related_domains_list = list(set(related_domains_list))
domain_info.related_domains = related_domains_list
# save to db if domain exists
if Domain.objects.filter(name=ip_domain).exists():
domain = Domain.objects.get(name=ip_domain)
db_domain_info = domain.domain_info if domain.domain_info else DomainInfo()
db_domain_info.save()
for _domain in related_domains:
domain_rel = RelatedDomain.objects.get_or_create(
name=_domain['name'],
)[0]
db_domain_info.related_domains.add(domain_rel)
db_domain_info.dnssec = domain_info.get('dnssec')
#dates
db_domain_info.created = domain_info.get('created')
db_domain_info.updated = domain_info.get('updated')
db_domain_info.expires = domain_info.get('expires')
#registrar
db_domain_info.registrar = Registrar.objects.get_or_create(
name=domain_info.get('registrar_name'),
email=domain_info.get('registrar_email'),
phone=domain_info.get('registrar_phone'),
url=domain_info.get('registrar_url'),
)[0]
db_domain_info.registrant = DomainRegistration.objects.get_or_create(
name=domain_info.get('registrant_name'),
organization=domain_info.get('registrant_organization'),
address=domain_info.get('registrant_address'),
city=domain_info.get('registrant_city'),
state=domain_info.get('registrant_state'),
zip_code=domain_info.get('registrant_zip_code'),
country=domain_info.get('registrant_country'),
email=domain_info.get('registrant_email'),
phone=domain_info.get('registrant_phone'),
fax=domain_info.get('registrant_fax'),
id_str=domain_info.get('registrant_id'),
)[0]
db_domain_info.admin = DomainRegistration.objects.get_or_create(
name=domain_info.get('admin_name'),
organization=domain_info.get('admin_organization'),
address=domain_info.get('admin_address'),
city=domain_info.get('admin_city'),
state=domain_info.get('admin_state'),
zip_code=domain_info.get('admin_zip_code'),
country=domain_info.get('admin_country'),
email=domain_info.get('admin_email'),
phone=domain_info.get('admin_phone'),
fax=domain_info.get('admin_fax'),
id_str=domain_info.get('admin_id'),
)[0]
db_domain_info.tech = DomainRegistration.objects.get_or_create(
name=domain_info.get('tech_name'),
organization=domain_info.get('tech_organization'),
address=domain_info.get('tech_address'),
city=domain_info.get('tech_city'),
state=domain_info.get('tech_state'),
zip_code=domain_info.get('tech_zip_code'),
country=domain_info.get('tech_country'),
email=domain_info.get('tech_email'),
phone=domain_info.get('tech_phone'),
fax=domain_info.get('tech_fax'),
id_str=domain_info.get('tech_id'),
)[0]
for status in domain_info.get('status') or []:
_status = WhoisStatus.objects.get_or_create(
name=status
)[0]
_status.save()
db_domain_info.status.add(_status)
for ns in domain_info.get('ns_records') or []:
_ns = NameServer.objects.get_or_create(
name=ns
)[0]
_ns.save()
db_domain_info.name_servers.add(_ns)
for a in domain_info.get('a_records') or []:
_a = DNSRecord.objects.get_or_create(
name=a,
type='a'
)[0]
_a.save()
db_domain_info.dns_records.add(_a)
for mx in domain_info.get('mx_records') or []:
_mx = DNSRecord.objects.get_or_create(
name=mx,
type='mx'
)[0]
_mx.save()
db_domain_info.dns_records.add(_mx)
for txt in domain_info.get('txt_records') or []:
_txt = DNSRecord.objects.get_or_create(
name=txt,
type='txt'
)[0]
_txt.save()
db_domain_info.dns_records.add(_txt)
db_domain_info.geolocation_iso = domain_info.get('registrant_country')
db_domain_info.whois_server = domain_info.get('whois_server')
db_domain_info.save()
domain.domain_info = db_domain_info
domain.save()
except Exception as e:
return {
'status': False,
'ip_domain': ip_domain,
'result': "unable to fetch records from WHOIS database.",
'message': str(e)
}
return {
'status': True,
'ip_domain': ip_domain,
'dnssec': domain_info.get('dnssec'),
'created': domain_info.get('created'),
'updated': domain_info.get('updated'),
'expires': domain_info.get('expires'),
'geolocation_iso': domain_info.get('registrant_country'),
'domain_statuses': domain_info.get('status'),
'whois_server': domain_info.get('whois_server'),
'dns': {
'a': domain_info.get('a_records'),
'mx': domain_info.get('mx_records'),
'txt': domain_info.get('txt_records'),
},
'registrar': {
'name': domain_info.get('registrar_name'),
'phone': domain_info.get('registrar_phone'),
'email': domain_info.get('registrar_email'),
'url': domain_info.get('registrar_url'),
},
'registrant': {
'name': domain_info.get('registrant_name'),
'id': domain_info.get('registrant_id'),
'organization': domain_info.get('registrant_organization'),
'address': domain_info.get('registrant_address'),
'city': domain_info.get('registrant_city'),
'state': domain_info.get('registrant_state'),
'zipcode': domain_info.get('registrant_zip_code'),
'country': domain_info.get('registrant_country'),
'phone': domain_info.get('registrant_phone'),
'fax': domain_info.get('registrant_fax'),
'email': domain_info.get('registrant_email'),
},
'admin': {
'name': domain_info.get('admin_name'),
'id': domain_info.get('admin_id'),
'organization': domain_info.get('admin_organization'),
'address':domain_info.get('admin_address'),
'city': domain_info.get('admin_city'),
'state': domain_info.get('admin_state'),
'zipcode': domain_info.get('admin_zip_code'),
'country': domain_info.get('admin_country'),
'phone': domain_info.get('admin_phone'),
'fax': domain_info.get('admin_fax'),
'email': domain_info.get('admin_email'),
},
'technical_contact': {
'name': domain_info.get('tech_name'),
'id': domain_info.get('tech_id'),
'organization': domain_info.get('tech_organization'),
'address': domain_info.get('tech_address'),
'city': domain_info.get('tech_city'),
'state': domain_info.get('tech_state'),
'zipcode': domain_info.get('tech_zip_code'),
'country': domain_info.get('tech_country'),
'phone': domain_info.get('tech_phone'),
'fax': domain_info.get('tech_fax'),
'email': domain_info.get('tech_email'),
},
'nameservers': domain_info.get('ns_records'),
# 'similar_domains': domain_info.get('similar_domains'),
'related_domains': domain_info.get('related_domains'),
'related_tlds': domain_info.get('related_tlds'),
'historical_ips': domain_info.get('historical_ips'),
}
@app.task(name='remove_duplicate_endpoints', bind=False, queue='remove_duplicate_endpoints_queue')
def remove_duplicate_endpoints(
scan_history_id,
domain_id,
subdomain_id=None,
filter_ids=[],
filter_status=[200, 301, 404],
duplicate_removal_fields=ENDPOINT_SCAN_DEFAULT_DUPLICATE_FIELDS
):
"""Remove duplicate endpoints.
Check for implicit redirections by comparing endpoints:
- [x] `content_length` similarities indicating redirections
- [x] `page_title` (check for same page title)
- [ ] Sign-in / login page (check for endpoints with the same words)
Args:
scan_history_id: ScanHistory id.
domain_id (int): Domain id.
subdomain_id (int, optional): Subdomain id.
filter_ids (list): List of endpoint ids to filter on.
filter_status (list): List of HTTP status codes to filter on.
duplicate_removal_fields (list): List of Endpoint model fields to check for duplicates
"""
logger.info(f'Removing duplicate endpoints based on {duplicate_removal_fields}')
endpoints = (
EndPoint.objects
.filter(scan_history__id=scan_history_id)
.filter(target_domain__id=domain_id)
)
if filter_status:
endpoints = endpoints.filter(http_status__in=filter_status)
if subdomain_id:
endpoints = endpoints.filter(subdomain__id=subdomain_id)
if filter_ids:
endpoints = endpoints.filter(id__in=filter_ids)
for field_name in duplicate_removal_fields:
cl_query = (
endpoints
.values_list(field_name)
.annotate(mc=Count(field_name))
.order_by('-mc')
)
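		# If more than DELETE_DUPLICATES_THRESHOLD endpoints share the same value for this
		# field, keep the earliest-discovered one and delete the rest (most likely the same
		# page reached through redirects).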
for (field_value, count) in cl_query:
if count > DELETE_DUPLICATES_THRESHOLD:
eps_to_delete = (
endpoints
.filter(**{field_name: field_value})
.order_by('discovered_date')
.all()[1:]
)
msg = f'Deleting {len(eps_to_delete)} endpoints [reason: same {field_name} {field_value}]'
for ep in eps_to_delete:
url = urlparse(ep.http_url)
if url.path in ['', '/', '/login']: # try do not delete the original page that other pages redirect to
continue
msg += f'\n\t {ep.http_url} [{ep.http_status}] [{field_name}={field_value}]'
ep.delete()
logger.warning(msg)
@app.task(name='run_command', bind=False, queue='run_command_queue')
def run_command(cmd, cwd=None, shell=False, history_file=None, scan_id=None, activity_id=None):
"""Run a given command using subprocess module.
Args:
cmd (str): Command to run.
cwd (str): Current working directory.
		scan_id (int): ScanHistory id.
		activity_id (int): ScanActivity id.
shell (bool): Run within separate shell if True.
history_file (str): Write command + output to history file.
Returns:
tuple: Tuple with return_code, output.
"""
logger.info(cmd)
logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Run the command using subprocess
popen = subprocess.Popen(
cmd if shell else cmd.split(),
shell=shell,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
cwd=cwd,
universal_newlines=True)
output = ''
for stdout_line in iter(popen.stdout.readline, ""):
item = stdout_line.strip()
output += '\n' + item
logger.debug(item)
popen.stdout.close()
popen.wait()
return_code = popen.returncode
command_obj.output = output
command_obj.return_code = return_code
command_obj.save()
if history_file:
mode = 'a'
if not os.path.exists(history_file):
mode = 'w'
with open(history_file, mode) as f:
f.write(f'\n{cmd}\n{return_code}\n{output}\n------------------\n')
return return_code, output
#-------------#
# Other utils #
#-------------#
def stream_command(cmd, cwd=None, shell=False, history_file=None, encoding='utf-8', scan_id=None, activity_id=None, trunc_char=None):
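	# Like run_command, but yields each output line as it arrives (parsed as JSON when
	# possible) so callers can consume tool results as a stream; the full output is still
	# persisted to the Command record.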
# Log cmd
logger.info(cmd)
# logger.warning(activity_id)
# Create a command record in the database
command_obj = Command.objects.create(
command=cmd,
time=timezone.now(),
scan_history_id=scan_id,
activity_id=activity_id)
# Sanitize the cmd
command = cmd if shell else cmd.split()
# Run the command using subprocess
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
shell=shell)
# Log the output in real-time to the database
output = ""
# Process the output
	for line in iter(process.stdout.readline, ''):  # text-mode stream, so EOF yields ''
if not line:
break
line = line.strip()
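		# Strip ANSI colour/control sequences; the [0-?]* part intentionally spans the CSI
		# parameter bytes 0-9:;<=>?, a range static analysers sometimes flag as overly permissive.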
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
line = ansi_escape.sub('', line)
line = line.replace('\\x0d\\x0a', '\n')
if trunc_char and line.endswith(trunc_char):
line = line[:-1]
item = line
# Try to parse the line as JSON
try:
item = json.loads(line)
except json.JSONDecodeError:
pass
# Yield the line
#logger.debug(item)
yield item
# Add the log line to the output
output += line + "\n"
# Update the command record in the database
command_obj.output = output
command_obj.save()
# Retrieve the return code and output
process.wait()
return_code = process.returncode
# Update the return code and final output in the database
command_obj.return_code = return_code
command_obj.save()
# Append the command, return code and output to the history file
if history_file is not None:
with open(history_file, "a") as f:
f.write(f"{cmd}\n{return_code}\n{output}\n")
def process_httpx_response(line):
"""TODO: implement this"""
def extract_httpx_url(line):
"""Extract final URL from httpx results. Always follow redirects to find
the last URL.
Args:
line (dict): URL data output by httpx.
Returns:
tuple: (final_url, redirect_bool) tuple.
"""
status_code = line.get('status_code', 0)
final_url = line.get('final_url')
location = line.get('location')
chain_status_codes = line.get('chain_status_codes', [])
# Final URL is already looking nice, if it exists return it
if final_url:
return final_url, False
http_url = line['url'] # fallback to url field
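	# A redirect 'location' may be absolute or relative; relative targets are re-joined onto the original URL below.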
# Handle redirects manually
REDIRECT_STATUS_CODES = [301, 302]
is_redirect = (
status_code in REDIRECT_STATUS_CODES
or
any(x in REDIRECT_STATUS_CODES for x in chain_status_codes)
)
if is_redirect and location:
if location.startswith(('http', 'https')):
http_url = location
else:
http_url = f'{http_url}/{location.lstrip("/")}'
# Sanitize URL
http_url = sanitize_url(http_url)
return http_url, is_redirect
#-------------#
# OSInt utils #
#-------------#
def get_and_save_dork_results(lookup_target, results_dir, type, lookup_keywords=None, lookup_extensions=None, delay=3, page_count=2, scan_history=None):
"""
Uses gofuzz to dork and store information
Args:
lookup_target (str): target to look into such as stackoverflow or even the target itself
results_dir (str): Results directory
type (str): Dork Type Title
lookup_keywords (str): comma separated keywords or paths to look for
lookup_extensions (str): comma separated extensions to look for
delay (int): delay between each requests
page_count (int): pages in google to extract information
scan_history (startScan.ScanHistory): Scan History Object
"""
results = []
gofuzz_command = f'{GOFUZZ_EXEC_PATH} -t {lookup_target} -d {delay} -p {page_count}'
if lookup_extensions:
gofuzz_command += f' -e {lookup_extensions}'
elif lookup_keywords:
gofuzz_command += f' -w {lookup_keywords}'
output_file = f'{results_dir}/gofuzz.txt'
gofuzz_command += f' -o {output_file}'
history_file = f'{results_dir}/commands.txt'
try:
run_command(
gofuzz_command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
)
if not os.path.isfile(output_file):
return
with open(output_file) as f:
for line in f.readlines():
url = line.strip()
if url:
results.append(url)
dork, created = Dork.objects.get_or_create(
type=type,
url=url
)
if scan_history:
scan_history.dorks.add(dork)
# remove output file
os.remove(output_file)
except Exception as e:
logger.exception(e)
return results
def get_and_save_emails(scan_history, activity_id, results_dir):
"""Get and save emails from Google, Bing and Baidu.
Args:
scan_history (startScan.ScanHistory): Scan history object.
activity_id: ScanActivity Object
results_dir (str): Results directory.
Returns:
list: List of emails found.
"""
emails = []
# Proxy settings
# get_random_proxy()
# Gather emails from Google, Bing and Baidu
output_file = f'{results_dir}/emails_tmp.txt'
history_file = f'{results_dir}/commands.txt'
command = f'python3 /usr/src/github/Infoga/infoga.py --domain {scan_history.domain.name} --source all --report {output_file}'
try:
run_command(
command,
shell=False,
history_file=history_file,
scan_id=scan_history.id,
activity_id=activity_id)
if not os.path.isfile(output_file):
logger.info('No Email results')
return []
with open(output_file) as f:
for line in f.readlines():
if 'Email' in line:
split_email = line.split(' ')[2]
emails.append(split_email)
output_path = f'{results_dir}/emails.txt'
with open(output_path, 'w') as output_file:
for email_address in emails:
save_email(email_address, scan_history)
output_file.write(f'{email_address}\n')
except Exception as e:
logger.exception(e)
return emails
def save_metadata_info(meta_dict):
"""Extract metadata from Google Search.
Args:
meta_dict (dict): Info dict.
Returns:
list: List of startScan.MetaFinderDocument objects.
"""
logger.warning(f'Getting metadata for {meta_dict.osint_target}')
scan_history = ScanHistory.objects.get(id=meta_dict.scan_id)
# Proxy settings
get_random_proxy()
# Get metadata
result = extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if not result:
logger.error(f'No metadata result from Google Search for {meta_dict.osint_target}.')
return []
# Add metadata info to DB
results = []
for metadata_name, data in result.get_metadata().items():
subdomain = Subdomain.objects.get(
scan_history=meta_dict.scan_id,
name=meta_dict.osint_target)
metadata = DottedDict({k: v for k, v in data.items()})
meta_finder_document = MetaFinderDocument(
subdomain=subdomain,
target_domain=meta_dict.domain,
scan_history=scan_history,
url=metadata.url,
doc_name=metadata_name,
http_status=metadata.status_code,
producer=metadata.metadata.get('Producer'),
creator=metadata.metadata.get('Creator'),
creation_date=metadata.metadata.get('CreationDate'),
modified_date=metadata.metadata.get('ModDate'),
author=metadata.metadata.get('Author'),
title=metadata.metadata.get('Title'),
os=metadata.metadata.get('OSInfo'))
meta_finder_document.save()
results.append(data)
return results
#-----------------#
# Utils functions #
#-----------------#
def create_scan_activity(scan_history_id, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = ScanHistory.objects.get(pk=scan_history_id)
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
#--------------------#
# Database functions #
#--------------------#
def save_vulnerability(**vuln_data):
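	# Persist a Vulnerability plus its tags, CVE/CWE ids and reference URLs;
	# returns (vulnerability, created) in the style of get_or_create.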
references = vuln_data.pop('references', [])
cve_ids = vuln_data.pop('cve_ids', [])
cwe_ids = vuln_data.pop('cwe_ids', [])
tags = vuln_data.pop('tags', [])
subscan = vuln_data.pop('subscan', None)
# remove nulls
vuln_data = replace_nulls(vuln_data)
# Create vulnerability
vuln, created = Vulnerability.objects.get_or_create(**vuln_data)
if created:
vuln.discovered_date = timezone.now()
vuln.open_status = True
vuln.save()
# Save vuln tags
for tag_name in tags or []:
tag, created = VulnerabilityTags.objects.get_or_create(name=tag_name)
if tag:
vuln.tags.add(tag)
vuln.save()
# Save CVEs
for cve_id in cve_ids or []:
cve, created = CveId.objects.get_or_create(name=cve_id)
if cve:
vuln.cve_ids.add(cve)
vuln.save()
# Save CWEs
for cve_id in cwe_ids or []:
cwe, created = CweId.objects.get_or_create(name=cve_id)
if cwe:
vuln.cwe_ids.add(cwe)
vuln.save()
# Save vuln reference
for url in references or []:
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
if created:
vuln.references.add(ref)
vuln.save()
# Save subscan id in vuln object
if subscan:
vuln.vuln_subscan_ids.add(subscan)
vuln.save()
return vuln, created
def save_endpoint(
http_url,
ctx={},
crawl=False,
is_default=False,
**endpoint_data):
"""Get or create EndPoint object. If crawl is True, also crawl the endpoint
HTTP URL with httpx.
Args:
http_url (str): Input HTTP URL.
is_default (bool): If the url is a default url for SubDomains.
scan_history (startScan.models.ScanHistory): ScanHistory object.
domain (startScan.models.Domain): Domain object.
subdomain (starScan.models.Subdomain): Subdomain object.
results_dir (str, optional): Results directory.
crawl (bool, optional): Run httpx on endpoint if True. Default: False.
force (bool, optional): Force crawl even if ENABLE_HTTP_CRAWL mode is on.
subscan (startScan.models.SubScan, optional): SubScan object.
Returns:
tuple: (startScan.models.EndPoint, created) where `created` is a boolean
indicating if the object is new or already existed.
"""
# remove nulls
endpoint_data = replace_nulls(endpoint_data)
scheme = urlparse(http_url).scheme
endpoint = None
created = False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in http_url:
logger.error(f"{http_url} is not a URL of domain {domain.name}. Skipping.")
return None, False
if crawl:
ctx['track'] = False
results = http_crawl(
urls=[http_url],
method='HEAD',
ctx=ctx)
if results:
endpoint_data = results[0]
endpoint_id = endpoint_data['endpoint_id']
created = endpoint_data['endpoint_created']
endpoint = EndPoint.objects.get(pk=endpoint_id)
elif not scheme:
return None, False
else: # add dumb endpoint without probing it
scan = ScanHistory.objects.filter(pk=ctx.get('scan_history_id')).first()
domain = Domain.objects.filter(pk=ctx.get('domain_id')).first()
if not validators.url(http_url):
return None, False
http_url = sanitize_url(http_url)
endpoint, created = EndPoint.objects.get_or_create(
scan_history=scan,
target_domain=domain,
http_url=http_url,
**endpoint_data)
if created:
endpoint.is_default = is_default
endpoint.discovered_date = timezone.now()
endpoint.save()
subscan_id = ctx.get('subscan_id')
if subscan_id:
endpoint.endpoint_subscan_ids.add(subscan_id)
endpoint.save()
return endpoint, created
def save_subdomain(subdomain_name, ctx={}):
"""Get or create Subdomain object.
Args:
subdomain_name (str): Subdomain name.
scan_history (startScan.models.ScanHistory): ScanHistory object.
Returns:
tuple: (startScan.models.Subdomain, created) where `created` is a
boolean indicating if the object has been created in DB.
"""
scan_id = ctx.get('scan_history_id')
subscan_id = ctx.get('subscan_id')
out_of_scope_subdomains = ctx.get('out_of_scope_subdomains', [])
valid_domain = (
validators.domain(subdomain_name) or
validators.ipv4(subdomain_name) or
validators.ipv6(subdomain_name)
)
if not valid_domain:
		logger.error(f'{subdomain_name} is not a valid domain. Skipping.')
return None, False
if subdomain_name in out_of_scope_subdomains:
logger.error(f'{subdomain_name} is out-of-scope. Skipping.')
return None, False
if ctx.get('domain_id'):
domain = Domain.objects.get(id=ctx.get('domain_id'))
if domain.name not in subdomain_name:
logger.error(f"{subdomain_name} is not a subdomain of domain {domain.name}. Skipping.")
return None, False
scan = ScanHistory.objects.filter(pk=scan_id).first()
domain = scan.domain if scan else None
subdomain, created = Subdomain.objects.get_or_create(
scan_history=scan,
target_domain=domain,
name=subdomain_name)
if created:
# logger.warning(f'Found new subdomain {subdomain_name}')
subdomain.discovered_date = timezone.now()
if subscan_id:
subdomain.subdomain_subscan_ids.add(subscan_id)
subdomain.save()
return subdomain, created
def save_email(email_address, scan_history=None):
if not validators.email(email_address):
logger.info(f'Email {email_address} is invalid. Skipping.')
return None, False
email, created = Email.objects.get_or_create(address=email_address)
# if created:
# logger.warning(f'Found new email address {email_address}')
# Add email to ScanHistory
if scan_history:
scan_history.emails.add(email)
scan_history.save()
return email, created
def save_employee(name, designation, scan_history=None):
employee, created = Employee.objects.get_or_create(
name=name,
designation=designation)
# if created:
# logger.warning(f'Found new employee {name}')
# Add employee to ScanHistory
if scan_history:
scan_history.employees.add(employee)
scan_history.save()
return employee, created
def save_ip_address(ip_address, subdomain=None, subscan=None, **kwargs):
if not (validators.ipv4(ip_address) or validators.ipv6(ip_address)):
logger.info(f'IP {ip_address} is not a valid IP. Skipping.')
return None, False
ip, created = IpAddress.objects.get_or_create(address=ip_address)
# if created:
# logger.warning(f'Found new IP {ip_address}')
# Set extra attributes
for key, value in kwargs.items():
setattr(ip, key, value)
ip.save()
# Add IP to subdomain
if subdomain:
subdomain.ip_addresses.add(ip)
subdomain.save()
# Add subscan to IP
if subscan:
ip.ip_subscan_ids.add(subscan)
# Geo-localize IP asynchronously
if created:
geo_localize.delay(ip_address, ip.id)
return ip, created
def save_imported_subdomains(subdomains, ctx={}):
"""Take a list of subdomains imported and write them to from_imported.txt.
Args:
subdomains (list): List of subdomain names.
scan_history (startScan.models.ScanHistory): ScanHistory instance.
domain (startScan.models.Domain): Domain instance.
results_dir (str): Results directory.
"""
domain_id = ctx['domain_id']
domain = Domain.objects.get(pk=domain_id)
results_dir = ctx.get('results_dir', RENGINE_RESULTS)
# Validate each subdomain and de-duplicate entries
subdomains = list(set([
subdomain for subdomain in subdomains
if validators.domain(subdomain) and domain.name == get_domain_from_subdomain(subdomain)
]))
if not subdomains:
return
logger.warning(f'Found {len(subdomains)} imported subdomains.')
with open(f'{results_dir}/from_imported.txt', 'w+') as output_file:
for name in subdomains:
subdomain_name = name.strip()
subdomain, _ = save_subdomain(subdomain_name, ctx=ctx)
subdomain.is_imported_subdomain = True
subdomain.save()
output_file.write(f'{subdomain}\n')
@app.task(name='query_reverse_whois', bind=False, queue='query_reverse_whois_queue')
def query_reverse_whois(lookup_keyword):
"""Queries Reverse WHOIS information for an organization or email address.
Args:
lookup_keyword (str): Registrar Name or email
Returns:
dict: Reverse WHOIS information.
"""
return get_associated_domains(lookup_keyword)
@app.task(name='query_ip_history', bind=False, queue='query_ip_history_queue')
def query_ip_history(domain):
"""Queries the IP history for a domain
Args:
domain (str): domain_name
Returns:
list: list of historical ip addresses
"""
return get_domain_historical_ip_address(domain)
@app.task(name='gpt_vulnerability_description', bind=False, queue='gpt_queue')
def gpt_vulnerability_description(vulnerability_id):
"""Generate and store Vulnerability Description using GPT.
Args:
vulnerability_id (Vulnerability Model ID): Vulnerability ID to fetch Description.
"""
logger.info('Getting GPT Vulnerability Description')
try:
lookup_vulnerability = Vulnerability.objects.get(id=vulnerability_id)
lookup_url = urlparse(lookup_vulnerability.http_url)
path = lookup_url.path
except Exception as e:
return {
'status': False,
'error': str(e)
}
# check in db GPTVulnerabilityReport model if vulnerability description and path matches
stored = GPTVulnerabilityReport.objects.filter(url_path=path).filter(title=lookup_vulnerability.name).first()
if stored:
response = {
'status': True,
'description': stored.description,
'impact': stored.impact,
'remediation': stored.remediation,
'references': [url.url for url in stored.references.all()]
}
else:
vulnerability_description = get_gpt_vuln_input_description(
lookup_vulnerability.name,
path
)
# one can add more description here later
gpt_generator = GPTVulnerabilityReportGenerator()
response = gpt_generator.get_vulnerability_description(vulnerability_description)
add_gpt_description_db(
lookup_vulnerability.name,
path,
response.get('description'),
response.get('impact'),
response.get('remediation'),
response.get('references', [])
)
		# Store this description for every vulnerability with the same name,
		# provided its URL contains the same path.
for vuln in Vulnerability.objects.filter(name=lookup_vulnerability.name, http_url__icontains=path):
vuln.description = response.get('description', vuln.description)
vuln.impact = response.get('impact')
vuln.remediation = response.get('remediation')
vuln.is_gpt_used = True
vuln.save()
for url in response.get('references', []):
ref, created = VulnerabilityReference.objects.get_or_create(url=url)
vuln.references.add(ref)
vuln.save()
return response
| ocervell | b557c6b8b70ea554c232095bf2fbb213e6d3648f | 0ded32c1bee7852e7fc5daea0fb6de999097400b | ## Overly permissive regular expression range
Suspicious character range that is equivalent to \[0-9:;<=>?\].
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/168) | github-advanced-security[bot] | 12 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | Makefile | .DEFAULT_GOAL:=help
# Credits: https://github.com/sherifabdlnaby/elastdocker/
# This for future release of Compose that will use Docker Buildkit, which is much efficient.
COMPOSE_PREFIX_CMD := COMPOSE_DOCKER_CLI_BUILD=1
COMPOSE_ALL_FILES := -f docker-compose.yml
SERVICES := db web proxy redis celery celery-beat
# --------------------------
.PHONY: setup certs up build username pull down stop restart rm logs
certs: ## Generate certificates.
@${COMPOSE_PREFIX_CMD} docker-compose -f docker-compose.setup.yml run --rm certs
setup: ## Generate certificates.
@make certs
up: ## Build and start all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} up -d --build ${SERVICES}
build: ## Build all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} build ${SERVICES}
username: ## Generate Username (Use only after make up).
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser
pull: ## Pull Docker images.
docker login docker.pkg.github.com
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} pull
down: ## Down all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} down
stop: ## Stop all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} stop ${SERVICES}
restart: ## Restart all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} restart ${SERVICES}
rm: ## Remove all services containers.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) rm -f ${SERVICES}
test:
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) exec celery python3 -m unittest tests/test_scan.py
logs: ## Tail all logs with -n 1000.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) logs --follow --tail=1000 ${SERVICES}
images: ## Show all Docker images.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) images ${SERVICES}
prune: ## Remove containers and delete volume data.
@make stop && make rm && docker volume prune -f
help: ## Show this help.
@echo "Make application docker images and manage containers using docker-compose files."
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m (default: help)\n\nTargets:\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-12s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
| include .env
.DEFAULT_GOAL:=help
# Credits: https://github.com/sherifabdlnaby/elastdocker/
# This is for a future release of Compose that will use Docker BuildKit, which is much more efficient.
COMPOSE_PREFIX_CMD := COMPOSE_DOCKER_CLI_BUILD=1
COMPOSE_ALL_FILES := -f docker-compose.yml
SERVICES := db web proxy redis celery celery-beat
# --------------------------
.PHONY: setup certs up build username pull down stop restart rm logs
certs: ## Generate certificates.
@${COMPOSE_PREFIX_CMD} docker-compose -f docker-compose.setup.yml run --rm certs
setup: ## Generate certificates.
@make certs
up: ## Build and start all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} up -d --build ${SERVICES}
build: ## Build all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} build ${SERVICES}
username: ## Generate Username (Use only after make up).
ifeq ($(isNonInteractive), true)
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser --username ${DJANGO_SUPERUSER_USERNAME} --email ${DJANGO_SUPERUSER_EMAIL} --noinput
else
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser
endif
pull: ## Pull Docker images.
docker login docker.pkg.github.com
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} pull
down: ## Down all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} down
stop: ## Stop all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} stop ${SERVICES}
restart: ## Restart all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} restart ${SERVICES}
rm: ## Remove all services containers.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) rm -f ${SERVICES}
test:
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) exec celery python3 -m unittest tests/test_scan.py
logs: ## Tail all logs with -n 1000.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) logs --follow --tail=1000 ${SERVICES}
images: ## Show all Docker images.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) images ${SERVICES}
prune: ## Remove containers and delete volume data.
@make stop && make rm && docker volume prune -f
help: ## Show this help.
@echo "Make application docker images and manage containers using docker-compose files."
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m (default: help)\n\nTargets:\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-12s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | What's the addition/benefit of adding the .env file to `Makefile`? | AnonymousWP | 13 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | Makefile | .DEFAULT_GOAL:=help
# Credits: https://github.com/sherifabdlnaby/elastdocker/
# This is for a future release of Compose that will use Docker BuildKit, which is much more efficient.
COMPOSE_PREFIX_CMD := COMPOSE_DOCKER_CLI_BUILD=1
COMPOSE_ALL_FILES := -f docker-compose.yml
SERVICES := db web proxy redis celery celery-beat
# --------------------------
.PHONY: setup certs up build username pull down stop restart rm logs
certs: ## Generate certificates.
@${COMPOSE_PREFIX_CMD} docker-compose -f docker-compose.setup.yml run --rm certs
setup: ## Generate certificates.
@make certs
up: ## Build and start all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} up -d --build ${SERVICES}
build: ## Build all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} build ${SERVICES}
username: ## Generate Username (Use only after make up).
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser
pull: ## Pull Docker images.
docker login docker.pkg.github.com
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} pull
down: ## Down all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} down
stop: ## Stop all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} stop ${SERVICES}
restart: ## Restart all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} restart ${SERVICES}
rm: ## Remove all services containers.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) rm -f ${SERVICES}
test:
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) exec celery python3 -m unittest tests/test_scan.py
logs: ## Tail all logs with -n 1000.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) logs --follow --tail=1000 ${SERVICES}
images: ## Show all Docker images.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) images ${SERVICES}
prune: ## Remove containers and delete volume data.
@make stop && make rm && docker volume prune -f
help: ## Show this help.
@echo "Make application docker images and manage containers using docker-compose files."
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m (default: help)\n\nTargets:\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-12s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
| include .env
.DEFAULT_GOAL:=help
# Credits: https://github.com/sherifabdlnaby/elastdocker/
# This is for a future release of Compose that will use Docker BuildKit, which is much more efficient.
COMPOSE_PREFIX_CMD := COMPOSE_DOCKER_CLI_BUILD=1
COMPOSE_ALL_FILES := -f docker-compose.yml
SERVICES := db web proxy redis celery celery-beat
# --------------------------
.PHONY: setup certs up build username pull down stop restart rm logs
certs: ## Generate certificates.
@${COMPOSE_PREFIX_CMD} docker-compose -f docker-compose.setup.yml run --rm certs
setup: ## Generate certificates.
@make certs
up: ## Build and start all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} up -d --build ${SERVICES}
build: ## Build all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} build ${SERVICES}
username: ## Generate Username (Use only after make up).
ifeq ($(isNonInteractive), true)
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser --username ${DJANGO_SUPERUSER_USERNAME} --email ${DJANGO_SUPERUSER_EMAIL} --noinput
else
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} exec web python3 manage.py createsuperuser
endif
pull: ## Pull Docker images.
docker login docker.pkg.github.com
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} pull
down: ## Down all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} down
stop: ## Stop all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} stop ${SERVICES}
restart: ## Restart all services.
${COMPOSE_PREFIX_CMD} docker-compose ${COMPOSE_ALL_FILES} restart ${SERVICES}
rm: ## Remove all services containers.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) rm -f ${SERVICES}
test:
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) exec celery python3 -m unittest tests/test_scan.py
logs: ## Tail all logs with -n 1000.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) logs --follow --tail=1000 ${SERVICES}
images: ## Show all Docker images.
${COMPOSE_PREFIX_CMD} docker-compose $(COMPOSE_ALL_FILES) images ${SERVICES}
prune: ## Remove containers and delete volume data.
@make stop && make rm && docker volume prune -f
help: ## Show this help.
@echo "Make application docker images and manage containers using docker-compose files."
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m (default: help)\n\nTargets:\n"} /^[a-zA-Z_-]+:.*?##/ { printf " \033[36m%-12s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | To give it access to variables located in the .env and avoid writing the username/email directly in the Makefile. This way, all configuration elements remain centralized in the .env file.
Following the Django documentation, for a non-interactive createsuperuser process you need to pass the --username and --email arguments (with the desired values) in the docker-compose command. | C0wnuts | 14
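A short sketch of the mechanism described above (a hypothetical helper, not reNgine code): with `--noinput`, Django's `createsuperuser` takes the username and email from the flags built in the Makefile and reads the password from the `DJANGO_SUPERUSER_PASSWORD` environment variable, so a non-interactive run (selected with `make username isNonInteractive=true`, per the `ifeq` guard in the Makefile) only works if all three values are present in `.env`:

```python
import os

# Hypothetical pre-flight check before an unattended `createsuperuser --noinput`.
required = ("DJANGO_SUPERUSER_USERNAME", "DJANGO_SUPERUSER_EMAIL", "DJANGO_SUPERUSER_PASSWORD")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    raise SystemExit(f"Non-interactive install needs these .env values: {', '.join(missing)}")
print("Superuser variables present; createsuperuser can run without prompts.")
```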
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered Vulnerability Reports, Project Management, role-based access control, etc.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive User Interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through json, txt or csv files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all subdomains that are alive and have admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports; with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have a separate dashboard and all scan results will be kept separate per project, while scan engines and configuration will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookup using natural language alike and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/- hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
MAX_CONCURRENCY: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you will need to increase this for maximised performance.
MIN_CONCURRENCY: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, please change it: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on development branch, please do not access reNgine via any ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execution permissions, please change it, `sudo chmod +x update.sh`
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc) your profit is **$100** and I get $25 DO credit. This will help me test reNgine on VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered Vulnerability Reports, Project Management, role-based access control, etc.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive User Interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through json, txt or csv files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all subdomains that are alive and have admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports; with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have a separate dashboard and all scan results will be kept separate per project, while scan engines and configuration will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookup using natural language alike and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/- hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set up the login, email and password of the web interface admin directly from the `.env` file (instead of manually setting them from prompts during the installation process). This option is useful for automated installations (via Ansible, Vagrant, etc.).
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to login to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to login to the web interface).
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you will need to increase this for maximised performance.
`MIN_CONCURRENCY`: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
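As a rough illustration (not an official recommendation), you could derive a starting value for `MAX_CONCURRENCY` from the number of CPU cores before editing the `.env` file. The cores-to-workers ratio below is an assumption; tune it to your own workload:
```bash
# Hypothetical sizing helper: assumes `nproc` is available (default on Ubuntu)
# and that roughly 10 workers per core is acceptable; adjust to your workload.
CORES=$(nproc)
echo "MAX_CONCURRENCY=$(( CORES * 10 ))"
echo "MIN_CONCURRENCY=10"
```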
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you've modified the `.env` file before launching the installation).
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execution permissions, please add them: `chmod +x install.sh`
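For fully automated deployments (the Ansible/Vagrant style use case mentioned above), a minimal sketch could look like the following. The `sed` edits and all credential values are illustrative assumptions; only the `.env` variable names and `install.sh -n` come from this README:
```bash
# Illustrative non-interactive provisioning sketch; placeholder values only.
git clone https://github.com/yogeshojha/rengine && cd rengine
# Set the PostgreSQL and web admin credentials in .env (assumes these lines already exist there).
sed -i 's/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=ChangeMeStrong/' .env
sed -i 's/^DJANGO_SUPERUSER_USERNAME=.*/DJANGO_SUPERUSER_USERNAME=admin/' .env
sed -i 's/^DJANGO_SUPERUSER_EMAIL=.*/DJANGO_SUPERUSER_EMAIL=admin@example.com/' .env
sed -i 's/^DJANGO_SUPERUSER_PASSWORD=.*/DJANGO_SUPERUSER_PASSWORD=ChangeMeToo/' .env
chmod +x install.sh
sudo ./install.sh -n
```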
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execution permissions, please add them: `sudo chmod +x update.sh`
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): your profit is **$100** and I get $25 DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most parts of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
1. Edit the `.env` file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
``` | AnonymousWP | 15 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive User Interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name using `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and all scan results are kept separate per project, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects: create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (Run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
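For reference, here is a minimal sketch of what the relevant entry in the `.env` file might look like. `POSTGRES_PASSWORD` is the variable called out above; the value shown is only a placeholder that you should replace with your own strong password:
```bash
# Hypothetical .env excerpt: replace the placeholder with a strong password of your own
POSTGRES_PASSWORD=replace_me_with_a_strong_password
```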
1. In the `.env` file, you may also modify the scaling configuration
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
MAX_CONCURRENCY: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you will need to increase this for maximised performance.
MIN_CONCURRENCY: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
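As a rough illustration (not an official recommendation), you could derive a starting value for `MAX_CONCURRENCY` from the number of CPU cores before editing the `.env` file. The cores-to-workers ratio below is an assumption; tune it to your own workload:
```bash
# Hypothetical sizing helper: assumes `nproc` is available (default on Ubuntu)
# and that roughly 10 workers per core is acceptable; adjust to your workload.
CORES=$(nproc)
echo "MAX_CONCURRENCY=$(( CORES * 10 ))"
echo "MIN_CONCURRENCY=10"
```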
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execution permissions, please add them: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execution permissions, please add them: `sudo chmod +x update.sh`
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): your profit is **$100** and I get $25 DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most parts of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive User Interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name using `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and all scan results are kept separate per project, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects: create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (Run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
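For reference, here is a minimal sketch of what the relevant entry in the `.env` file might look like. `POSTGRES_PASSWORD` is the variable called out above; the value shown is only a placeholder that you should replace with your own strong password:
```bash
# Hypothetical .env excerpt: replace the placeholder with a strong password of your own
POSTGRES_PASSWORD=replace_me_with_a_strong_password
```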
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set up the login, email and password of the web interface admin directly from the `.env` file (instead of manually setting them from prompts during the installation process). This option is useful for automated installations (via Ansible, Vagrant, etc.).
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to login to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to login to the web interface).
1. In the `.env` file, you may also modify the scaling configuration
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers to the workload's size and complexity.
1. Run the installation script. Keep an eye out for any prompts; unless you use the non-interactive install, you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you've modified the `.env` file before launching the installation); a complete end-to-end sketch follows this list.
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`.
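For fully unattended setups, the whole flow above can be scripted. The sketch below is illustrative only: the `sed` edits, the placeholder credentials and the clone location are assumptions, while the `.env` variable names and the `-n` flag are the ones described in the steps above.
```bash
# Illustrative unattended install (placeholder values, fresh Ubuntu host assumed)
git clone https://github.com/yogeshojha/rengine && cd rengine

# Set the database and web-admin credentials that install.sh -n reads from .env
sed -i 's/^POSTGRES_PASSWORD=.*/POSTGRES_PASSWORD=ChangeMeToAStrongPassword/' .env
sed -i 's/^DJANGO_SUPERUSER_USERNAME=.*/DJANGO_SUPERUSER_USERNAME=yourUsername/' .env
sed -i 's/^DJANGO_SUPERUSER_EMAIL=.*/DJANGO_SUPERUSER_EMAIL=YourMail@example.com/' .env
sed -i 's/^DJANGO_SUPERUSER_PASSWORD=.*/DJANGO_SUPERUSER_PASSWORD=yourStrongPassword/' .env

# Run the installer without prompts
chmod +x install.sh
sudo ./install.sh -n
```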
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other port.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post about it?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): your profit is **$100** and I get $25 DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer of reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
``` | AnonymousWP | 16 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it is developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators: for example, filter all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports: with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide you the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operators
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers to the workload's size and complexity. A short sizing sketch follows this list.
1. Run the installation script. Keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`.
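Since `MAX_CONCURRENCY` scales with available CPU cores, the snippet below is one rough, illustrative way to size it before running the installer. The ten-workers-per-core ratio and the use of `sed` are assumptions rather than official guidance; only the variable names come from the `.env` file described above.
```bash
# Rough sizing sketch: scale MAX_CONCURRENCY with the CPU core count (the ratio is an assumption)
CORES=$(nproc)
sed -i "s/^MAX_CONCURRENCY=.*/MAX_CONCURRENCY=$((CORES * 10))/" .env
sed -i "s/^MIN_CONCURRENCY=.*/MIN_CONCURRENCY=10/" .env
grep -E '^(MAX|MIN)_CONCURRENCY' .env   # verify the values before installing
```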
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other port.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post about it?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): your profit is **$100** and I get $25 DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer of reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it is developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators: for example, filter all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports: with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide you the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operators
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set up the login, email and password of the web interface admin directly from the `.env` file (instead of entering them at prompts during the installation process). This option is useful for automated installations (via Ansible, Vagrant, etc.); a minimal provisioning sketch follows this list.
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to login to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to login to the web interface).
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers to the workload's size and complexity.
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you've modified the `.env` file before launching the installation).
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`.
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports.**
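For context on the scaling variables above, here is a minimal sketch of how `MAX_CONCURRENCY` and `MIN_CONCURRENCY` style values are typically consumed by Celery's autoscaler. The worker invocation and the app name `reNgine` are assumptions made for illustration only, not necessarily the exact command reNgine runs internally:
```bash
# Hedged sketch: Celery's --autoscale flag takes "max,min" worker counts.
# The app/module name "reNgine" is an assumption for this illustration.
celery -A reNgine worker --autoscale=${MAX_CONCURRENCY},${MIN_CONCURRENCY}
```
In other words, the worker pool grows toward `MAX_CONCURRENCY` under load and shrinks back toward `MIN_CONCURRENCY` when idle.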
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc); you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most parts of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to login to the web interface).
``` | AnonymousWP | 17 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now; you can [watch the reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords, helping you identify your next target and making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable, YAML-based scan engines. It offers the freedom to create and customize recon scan engines for any kind of requirement; users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, since everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports: with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor cannot change any system or scan-related configuration, nor initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0, revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive, with detailed analysis of impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects: create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater to a team's needs
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance at exactly X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with the most commonly used tools during penetration testing, such as WHOIS lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file and **make sure to change the PostgreSQL password (`POSTGRES_PASSWORD`)!**
```bash
nano .env
```
1. In the `.env` file, you may also modify the scaling configuration:
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of concurrent Celery worker processes that reNgine can spawn. Here it is set to 80, meaning the application can use up to 80 worker processes to execute tasks in parallel. This is useful for handling a high volume of scans or for scaling up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance (a quick way to check your core count is shown after the installation steps below).
`MIN_CONCURRENCY`: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. Here it is set to 10, which means that even when there are few tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application adjusts the number of concurrent workers to the size and complexity of its workload.
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`.
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports.**
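As mentioned in the scaling step above, `MAX_CONCURRENCY` should be sized to your hardware. If you are unsure how many CPU cores your Ubuntu machine or VPS has, a quick check with standard Linux tooling (shown purely as a convenience) is:
```bash
# Print the number of available CPU cores to help pick a sensible MAX_CONCURRENCY
nproc
# Alternative if nproc is unavailable
grep -c ^processor /proc/cpuinfo
```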
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc); you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most parts of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now; you can [watch the reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords, helping you identify your next target and making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable, YAML-based scan engines. It offers the freedom to create and customize recon scan engines for any kind of requirement; users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, since everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just reports: with remediation steps and impact assessments, you get a 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor cannot change any system or scan-related configuration, nor initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0, revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive, with detailed analysis of impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects: create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater to a team's needs
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance at exactly X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with the most commonly used tools during penetration testing, such as WHOIS lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file and **make sure to change the PostgreSQL password (`POSTGRES_PASSWORD`)!**
```bash
nano .env
```
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set up the login, email and password of the web interface admin directly from the `.env` file (instead of setting them manually from prompts during the installation process). This option is useful for automated installations (via Ansible, Vagrant, etc.); a small scripted example is shown after the installation steps below.
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to log in to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to log in to the web interface).
1. In the `.env` file, you may also modify the scaling configuration:
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
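Conceptually, these two values are the kind of bounds Celery's worker autoscaler expects. A minimal sketch of an equivalent worker invocation is shown below; the application name `reNgine` and the direct sourcing of `.env` are assumptions for illustration, not the exact command reNgine runs internally.
```bash
# Illustrative only: Celery's --autoscale flag takes "max,min" worker counts,
# which is what MAX_CONCURRENCY and MIN_CONCURRENCY correspond to conceptually.
source .env
celery -A reNgine worker --autoscale="${MAX_CONCURRENCY},${MIN_CONCURRENCY}" --loglevel=INFO
```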
1. Run the installation script. Keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you have modified the `.env` file before launching the installation).
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execute permission, add it with `chmod +x install.sh`.
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports.**
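A quick sanity check that the web interface is reachable (assuming the default self-signed certificate, hence `-k`; replace the address with your VPS IP if applicable):
```bash
# Prints the HTTP status code returned by the reNgine web interface.
curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1/
```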
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, add it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
``` | AnonymousWP | 18 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords, helping you identify your next target and making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has a separate dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports and subdomain name, reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater to a team's needs
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file; **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the `.env` file, you may also modify the scaling configuration:
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
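For example, to raise these limits on a larger machine without opening an editor, the values can be rewritten in place; the numbers below are illustrative, not recommendations:
```bash
# Illustrative only: bump the Celery worker bounds in .env for a bigger host.
sed -i 's/^MAX_CONCURRENCY=.*/MAX_CONCURRENCY=120/' .env
sed -i 's/^MIN_CONCURRENCY=.*/MIN_CONCURRENCY=20/' .env
```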
1. Run the installation script. Keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, add it with `chmod +x install.sh`.
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, add it with `sudo chmod +x update.sh`.
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords, helping you identify your next target and making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains with admin in the name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has a separate dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports and subdomain name, reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater to a team's needs
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file; **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set the username, email and password of the web interface admin directly in the `.env` file (instead of entering them at the prompts during the installation process). This is useful for automated installations (via Ansible, Vagrant, etc.).
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to log in to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to log in to the web interface).
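If you script the non-interactive install, one option is to generate the admin password rather than hard-coding it. A small sketch, illustrative only and assuming the `.env` keys shown above:
```bash
# Illustrative only: generate a random admin password and store it in .env.
ADMIN_PASS="$(openssl rand -base64 24)"
sed -i "s|^DJANGO_SUPERUSER_PASSWORD=.*|DJANGO_SUPERUSER_PASSWORD=${ADMIN_PASS}|" .env
echo "reNgine admin password: ${ADMIN_PASS}"
```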
1. In the `.env` file, you may also modify the scaling configuration:
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. Here it is set to 80, meaning the application can use up to 80 worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or for scaling up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example it is set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes are kept running. This ensures that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application manages its workload efficiently by adjusting the number of concurrent workers to the workload's size and complexity.
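As a rough, illustrative heuristic (an assumption, not an official recommendation), you might allow about ten workers per CPU core. On a GNU/Linux host with `nproc` available, the values could be adjusted like this:
```bash
# Illustrative heuristic only: roughly 10 Celery workers per CPU core
CORES=$(nproc)
sed -i "s/^MAX_CONCURRENCY=.*/MAX_CONCURRENCY=$((CORES * 10))/" .env
sed -i "s/^MIN_CONCURRENCY=.*/MIN_CONCURRENCY=10/" .env
```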
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you've modified the `.env` file before launching the installation).
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never contributed to open source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346` |
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to login to the web interface).
``` | AnonymousWP | 19 |
yogeshojha/rengine | 973 | Add non-interactive installation parameter | Add a non-interactive installation method via a new parameter to be passed to the install.sh script.
Essential for automated/industrialized systems (e.g. via Ansible or another automated environment creation system). | null | 2023-10-12 01:09:15+00:00 | 2023-11-21 12:49:22+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it is developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators: for example, filter all alive subdomains with `http_status=200`, or all alive subdomains with admin in their name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have its own dashboard, and scan results will be kept separate per project, while scan engines and configuration will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operators
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the `.env` file, you may also modify the scaling configuration
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. Here it is set to 80, meaning the application can use up to 80 worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or for scaling up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example it is set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes are kept running. This ensures that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application manages its workload efficiently by adjusting the number of concurrent workers to the workload's size and complexity.
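For example, one rough heuristic (an assumption on our part, not an official recommendation) is to scale `MAX_CONCURRENCY` with the number of CPU cores; on a GNU/Linux host with `nproc` available:
```bash
# Illustrative heuristic only: size worker concurrency to the machine
CORES=$(nproc)
sed -i "s/^MAX_CONCURRENCY=.*/MAX_CONCURRENCY=$((CORES * 10))/" .env
sed -i "s/^MIN_CONCURRENCY=.*/MIN_CONCURRENCY=10/" .env
```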
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, grant it with `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, grant it with `sudo chmod +x update.sh`
**NOTE:** if you're updating from 1.3.6 and you're getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then install 2.x.x as you'd normally do.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never contributed to open source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346` |
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it is developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators: for example, filter all alive subdomains with `http_status=200`, or all alive subdomains with admin in their name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have its own dashboard, and scan results will be kept separate per project, while scan engines and configuration will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide the rationale for why a specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operators
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (run reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to cleanup the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. **Optional, only for non-interactive install**: In the `.env` file, **please make sure to change the super admin values!**
```bash
DJANGO_SUPERUSER_USERNAME=yourUsername
DJANGO_SUPERUSER_EMAIL=YourMail@example.com
DJANGO_SUPERUSER_PASSWORD=yourStrongPassword
```
If you need to carry out a non-interactive installation, you can set up the username, email and password of the web interface admin directly in the `.env` file (instead of entering them at the prompts during the installation process). This option is useful for automated installations (via Ansible, Vagrant, etc.).
`DJANGO_SUPERUSER_USERNAME`: web interface admin username (used to login to the web interface).
`DJANGO_SUPERUSER_EMAIL`: web interface admin email.
`DJANGO_SUPERUSER_PASSWORD`: web interface admin password (used to login to the web interface).
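As an illustrative sketch (the placeholder values are assumptions to replace with your own), an automation tool could drop these admin values into `.env` without an interactive editor:
```bash
# Illustrative only: set the documented admin values in .env from a script
sed -i 's/^DJANGO_SUPERUSER_USERNAME=.*/DJANGO_SUPERUSER_USERNAME=yourUsername/' .env
sed -i 's/^DJANGO_SUPERUSER_EMAIL=.*/DJANGO_SUPERUSER_EMAIL=YourMail@example.com/' .env
sed -i 's/^DJANGO_SUPERUSER_PASSWORD=.*/DJANGO_SUPERUSER_PASSWORD=yourStrongPassword/' .env
```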
1. In the `.env` file, you may also modify the scaling configuration
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. Here it is set to 80, meaning the application can use up to 80 worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or for scaling up processing power during periods of high demand. If you have more CPU cores, increase this value for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example it is set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes are kept running. This ensures that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application manages its workload efficiently by adjusting the number of concurrent workers to the workload's size and complexity.
1. Run the installation script, Please keep an eye for any prompt, you will also be asked for username and password for reNgine.
```bash
sudo ./install.sh
```
Or, for a non-interactive installation, use the `-n` argument (make sure you've modified the `.env` file before launching the installation).
```bash
sudo ./install.sh -n
```
If `install.sh` does not have execute permission, make it executable: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, make it executable: `sudo chmod +x update.sh`
**NOTE:** If you're updating from 1.3.6 and getting a 'password authentication failed' error, consider uninstalling 1.3.6 first, then installing 2.x.x as you normally would.
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 DO credit. This will help me test reNgine on VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| C0wnuts | 3dd700357a4bd5701b07ede4511f66042655be00 | 64b7f291240b3b8853e3cec7ee6230827c97b907 | ```suggestion
Or for a non-interactive installation, use `-n` argument (make sure you've modified the `.env` file before launching the installation).
``` | AnonymousWP | 20 |
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses, such as those having the same content_length or page_title. Custom duplicate fields can also be set from the scan engine configuration.
- URL Path filtering while initiating scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass the /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLS, etc as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom Celery workflow to be able to run several tasks in parallel that are not dependent on each other; for example, the OSINT task and subdomain discovery will run in parallel, and directory and file fuzzing, vulnerability scanning, screenshot gathering, etc. will run in parallel after the port scan or URL fetching is completed. This increases the efficiency of scans: instead of having one long flow of tasks, they can run independently on their own. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: this means the UI is updated with results while the command runs and does not have to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, and also vulnerability grouping based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 allows running Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration (see the YAML sketch after this list):
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results; now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add/remove/deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
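As a rough illustration of how these shared scan variables fit into an engine configuration (a sketch only: the option names come from the list above and from the scan engine examples in the README, the values are arbitrary, and whether a given option is set per task or at the engine level may differ between versions):
```yaml
# Illustrative only: shared scan variables described above, with example values.
enable_http_crawl: true        # disable for a stealthier, non-HTTP-focused scan
timeout: 5                     # timeout applied to all tasks
rate_limit: 150                # rate limit applied to all tasks
retries: 1                     # retries applied to all tasks
custom_header: "Cookie: Test"  # custom header applied to all tasks
```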
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v1.2.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="https://github.com/yogeshojha/rengine/issues" target="_blank"><img src="https://img.shields.io/github/issues/yogeshojha/rengine?color=red&logo=none" alt="reNgine Issues" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-USA--2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-Europe--2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 1.1<br>More than just recon!</h3>
<h4>The only web application recon tool you will ever need!</h4>
<p>Quickly discover the attack surface, and identify vulnerabilities using highly customizable and powerful scan engines.
Enjoy peace of mind with reNgine's continuous monitoring, deeper reconnaissance, and open-source powered Vulnerability Scanner.</p>
<h4>What is reNgine?</h4>
<p align="left">reNgine is a web application reconnaissance suite that focuses on a highly configurable streamlined reconnaissance process via engines, reconnaissance data correlation, continuous monitoring, database backed reconnaissance data and a simple yet intuitive user interface. With features such as sub-scan, deeper co-relation, report generation, etc., reNgine aims to fill the gap in traditional reconnaissance tools and is likely to be a better alternative to existing commercial tools.
reNgine makes it easy for penetration testers and security auditors to gather reconnaissance data with minimal configuration.
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="center">
⭐<a href="https://rengine.wiki">reNgine Documentation</a>
·
<a href="https://rengine.wiki/changelog/">What's new</a>
·
<a href="https://github.com/yogeshojha/rengine/blob/master/.github/CONTRIBUTING.md">Contribute</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Report Bug</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Request Feature</a>⭐
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Table of Contents
* [About reNgine](#about-rengine)
* [Features](#features)
* [Documentation](#documentation)
* [Quick Installation](#quick-installation)
* [What's new in reNgine](#changelog)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Related Projects](#related-projects)
* [Support and Sponsoring](#support-and-sponsoring)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine 2.0 codenamed Jasper
I am currently working on reNgine 2.0, which will probably be announced sometime between May and August 2023. reNgine 2.0 will be the most advanced reNgine ever; a lot of work will be done on how scans are performed, with things such as Pause and Resume Scan, Axiom Integration, deeper correlation, Project Options, Multiple Tenants, etc.
Please submit your feature requests via GitHub issues.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## About reNgine
You can watch [reNgine 1.1 release trailer here.](https://www.youtube.com/watch?v=iy_6F7Vq8Lo) (Recommended)
<img src="https://user-images.githubusercontent.com/17223002/164993688-50eb95f2-3653-4ef7-bd3b-ef7a096824ea.jpeg">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
reNgine is a web application reconnaissance suite with a focus on a highly configurable, streamlined reconnaissance process. reNgine is backed by a database, with data correlation and organisation and a custom "like" query language for filtering reconnaissance data. reNgine aims to address the shortcomings of the traditional reconnaissance workflow.
The developers behind reNgine understand that reconnaissance data can be huge and manually searching for records to attack can be tedious, so features such as identifying subdomains of interest help penetration testers focus on attack rather than reconnaissance.
reNgine also focuses on continuous monitoring. Penetration testers can choose to schedule the scan at regular intervals and be notified via notification channels such as Discord, Slack and Telegram of any new subdomains or vulnerabilities identified, or any changes to the recon data.
Interoperability is something every reconnaissance tool needs, and reNgine is no different. Starting with reNgine 1.0, we have added features such as import and export of subdomains, endpoints, GF pattern matched endpoints, etc. This allows you to use your favourite reconnaissance workflow in conjunction with reNgine.
PDF reports are something every individual or team needs. From reNgine 1.1, reNgine also comes with the option to download PDF reports. You can also choose the type of report, a full scan report or just a reconnaissance report. We also understand that PDF reports need to be customisable. Choose the colour of the report you want, customise the executive summary, etc. You choose how your PDF report looks!
reNgine features highly configurable scan engines based on YAML, allowing penetration testers to create as many reconnaissance engines of their choice as they like, configure them as they like, and use them against any targets for scanning. These engines allow penetration testers to use the tools of their choice, with the configuration of their choice. Out of the box, reNgine comes with several scan engines such as Full Scan, Passive Scan, Screenshot Gathering, OSINT Engine, etc.
Our focus has always been on finding the right reconnaissance data with the least amount of effort. After several discussions with fellow hackers/pentesters, a screenshot gallery was a must, so reNgine also comes with a screenshot gallery. And what's more exciting than a screenshot gallery with filters? Filter screenshots by HTTP status, technology, ports and services.
We also want our fellow hackers to stay ahead of the game, so reNgine also comes with automatic vulnerability reporting (ATM only Hackerone is supported, other platforms may come soon). This allows hackers to define their vulnerability reporting template and reNgine will do the rest of the work to report the vulnerability as soon as it is identified.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<img src="https://user-images.githubusercontent.com/17223002/164993945-aabdbb4a-2b9d-4951-ba27-5f2f5abd1d8b.gif">
## Features
* Reconnaissance: Subdomain Discovery, IP and Open Ports Identification, Endpoints Discovery, Directory and Files fuzzing, Screenshot gathering, Vulnerability scan using Nuclei, WHOIS Identification, WAF Detection etc.
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans and Subscans
* Automatically report Vulnerabilities to HackerOne
* Recon Data visualization
* OSINT Capabilities (Meta info Gathering, Employees Gathering, Email Addresses with an option to look up passwords in leaked databases, dorks, etc.)
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Perform Advanced Query lookups using natural-language-like and, or, not operations
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Documentation
You can find reNgine documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, make it executable: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
A detailed installation guide can also be found [here](https://www.rffuste.com/2022/05/23/rengine-a-brief-overview/). Thanks to Rubén!
## Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/install/detailed/)
## Updating
1. Updating is as simple as running the following command:
```bash
sudo ./update.sh
```
If `update.sh` does not have execute permission, make it executable: `sudo chmod +x update.sh`
## Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Screenshots
### Scan Results
![](.github/screenshots/scan_results.gif)
### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Related Projects
There are many other great reconnaissance frameworks out there; you can use reNgine in conjunction with those tools. They are great in their own right, and can sometimes produce better results than reNgine.
* [ReconFTW](https://github.com/six2dez/reconftw#sample-video)
* [Reconmap](https://github.com/reconmap/reconmap)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 DO credit. This will help me test reNgine on VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now; you can [watch the reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control, etc.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords, to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all subdomains that are alive and have admin in their name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable, YAML-based scan engines. It offers the freedom to create and customize recon scan engines for any kind of requirement: users can tailor them to their specific objectives and preferences, and from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to access the right reconnaissance data with minimal effort.
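For a concrete feel of what such an engine looks like, here is a minimal sketch only: the option names are taken from the full scan engine example in the Scan Engine section below, and the values are arbitrary examples rather than recommended defaults.
```yaml
# Illustrative custom engine: passive-leaning subdomain discovery plus a light port scan.
subdomain_discovery: {
  'uses_tools': ['subfinder', 'ctfr'],  # run only a couple of the supported tools
  'enable_http_crawl': true,
  'threads': 30,
  'timeout': 5
}
port_scan: {
  'ports': ['top-100'],
  'rate_limit': 150,
  'threads': 30,
  'passive': false
}
# custom_header: "Cookie: Test"
```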
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have a separate dashboard and all scan results will be kept separate per project, while scan engines and configuration will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan-related configurations and scan engines, create new users, add new tools, etc. The super user can also initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download reports. An auditor cannot change any system or scan-related configurations, nor initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide you the rationale on why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'gospider',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the `.env` file, **please make sure to change the PostgreSQL password `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the `.env` file, you may also modify the scaling configuration:
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
`MAX_CONCURRENCY`: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you may want to increase this for maximum performance.
`MIN_CONCURRENCY`: On the other hand, `MIN_CONCURRENCY` specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, make it executable: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any other ports.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execute permission, make it executable: `sudo chmod +x update.sh`
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | The hyperlinks seem incorrect or I am mentioned by accident. :p | AnonymousWP | 21 |
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses, such as those having the same content_length or page_title. Custom duplicate fields can also be set from the scan engine configuration.
- URL path filtering while initiating a scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom celery workflow to be able to run several tasks in parallel that are not dependent on each other; for example, the OSINT task and subdomain discovery will run in parallel, and directory and file fuzzing, vulnerability scan, screenshot gathering, etc. will run in parallel after port scan or URL fetching is completed. This will increase the efficiency of scans; instead of having one long flow of tasks, they can run independently on their own. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: this means the UI is updated with results while the command runs, instead of having to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, as well as vulnerability grouping based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 allows running Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results, now detecting admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add/remove/deactivate additional users
- Added option to preview the scan report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and the Tor service related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v1.2.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="https://github.com/yogeshojha/rengine/issues" target="_blank"><img src="https://img.shields.io/github/issues/yogeshojha/rengine?color=red&logo=none" alt="reNgine Issues" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-USA--2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-Europe--2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 1.1<br>More than just recon!</h3>
<h4>The only web application recon tool you will ever need!</h4>
<p>Quickly discover the attack surface, and identify vulnerabilities using highly customizable and powerful scan engines.
Enjoy peace of mind with reNgine's continuous monitoring, deeper reconnaissance, and open-source powered Vulnerability Scanner.</p>
<h4>What is reNgine?</h4>
<p align="left">reNgine is a web application reconnaissance suite that focuses on a highly configurable streamlined reconnaissance process via engines, reconnaissance data correlation, continuous monitoring, database backed reconnaissance data and a simple yet intuitive user interface. With features such as sub-scan, deeper co-relation, report generation, etc., reNgine aims to fill the gap in traditional reconnaissance tools and is likely to be a better alternative to existing commercial tools.
reNgine makes it easy for penetration testers and security auditors to gather reconnaissance data with minimal configuration.
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="center">
⭐<a href="https://rengine.wiki">reNgine Documentation</a>
·
<a href="https://rengine.wiki/changelog/">What's new</a>
·
<a href="https://github.com/yogeshojha/rengine/blob/master/.github/CONTRIBUTING.md">Contribute</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Report Bug</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Request Feature</a>⭐
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Table of Contents
* [About reNgine](#about-rengine)
* [Features](#features)
* [Documentation](#documentation)
* [Quick Installation](#quick-installation)
* [What's new in reNgine](#changelog)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Related Projects](#related-projects)
* [Support and Sponsoring](#support-and-sponsoring)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine 2.0 codenamed Jasper
I am currently working on reNgine 2.0, which will probably be announced sometime between May and August 2023. reNgine 2.0 will be the most advanced reNgine ever; a lot of work will go into how scans are performed, with things such as Pause and Resume Scan, Axiom integration, deeper correlation, project options, multiple tenants, etc.
Please submit your feature requests via GitHub issues.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## About reNgine
You can watch [reNgine 1.1 release trailer here.](https://www.youtube.com/watch?v=iy_6F7Vq8Lo) (Recommended)
<img src="https://user-images.githubusercontent.com/17223002/164993688-50eb95f2-3653-4ef7-bd3b-ef7a096824ea.jpeg">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
reNgine is a web application reconnaissance suite with a focus on a highly configurable, streamlined reconnaissance process. Backed by a database, with data correlation and organisation and a custom query "like" language for filtering reconnaissance data, reNgine aims to address the shortcomings of the traditional reconnaissance workflow.
The developers behind reNgine understand that reconnaissance data can be huge and manually searching for records to attack can be tedious, so features such as identifying subdomains of interest help penetration testers focus on attack rather than reconnaissance.
reNgine also focuses on continuous monitoring. Penetration testers can choose to schedule the scan at regular intervals and be notified via notification channels such as Discord, Slack and Telegram of any new subdomains or vulnerabilities identified, or any changes to the recon data.
Interoperability is something every reconnaissance tool needs, and reNgine is no different. Starting with reNgine 1.0, we have added features such as import and export of subdomains, endpoints, GF pattern matched endpoints, etc. This allows you to use your favourite reconnaissance workflow in conjunction with reNgine.
PDF reports are something every individual or team needs. From reNgine 1.1, reNgine also comes with the option to download PDF reports. You can also choose the type of report, a full scan report or just a reconnaissance report. We also understand that PDF reports need to be customisable. Choose the colour of the report you want, customise the executive summary, etc. You choose how your PDF report looks!
reNgine features highly configurable scan engines based on YAML, allowing penetration testers to create as many reconnaissance engines of their choice as they like, configure them as they like, and use them against any targets for scanning. These engines allow penetration testers to use the tools of their choice, with the configuration of their choice. Out of the box, reNgine comes with several scan engines such as Full Scan, Passive Scan, Screenshot Gathering, OSINT Engine, etc.
Our focus has always been on finding the right reconnaissance data with the least amount of effort. After several discussions with fellow hackers/pentesters, a screenshot gallery was a must, so reNgine also comes with a screenshot gallery with filters: filter screenshots by HTTP status, technology, ports and services.
We also want our fellow hackers to stay ahead of the game, so reNgine also comes with automatic vulnerability reporting (at the moment only HackerOne is supported; other platforms may come soon). This allows hackers to define their vulnerability reporting template, and reNgine will do the rest of the work to report the vulnerability as soon as it is identified.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<img src="https://user-images.githubusercontent.com/17223002/164993945-aabdbb4a-2b9d-4951-ba27-5f2f5abd1d8b.gif">
## Features
* Reconnaissance: Subdomain Discovery, IP and Open Ports Identification, Endpoints Discovery, Directory and Files fuzzing, Screenshot gathering, Vulnerability scan using Nuclei, WHOIS Identification, WAF Detection etc.
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans and Subscans
* Automatically report Vulnerabilities to HackerOne
* Recon Data visualization
* OSINT Capabilities (Meta info gathering, employees gathering, email addresses with an option to look up passwords in leaked databases, dorks, etc.)
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Perform advanced query lookups using natural-language-like and, or, not operations
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Documentation
You can find reNgine documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, please change it: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
A detailed installation guide can also be found [here](https://www.rffuste.com/2022/05/23/rengine-a-brief-overview/). Thanks to Rubén!
## Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/install/detailed/)
## Updating
1. Updating is as simple as running the following command:
```bash
sudo ./update.sh
```
If `update.sh` does not have execution permissions, please change it, `sudo chmod +x update.sh`
## Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Screenshots
### Scan Results
![](.github/screenshots/scan_results.gif)
### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Related Projects
There are many other great reconnaissance frameworks out there, and you can use reNgine in conjunction with them. They are great in their own right and can sometimes produce better results than reNgine.
* [ReconFTW](https://github.com/six2dez/reconftw#sample-video)
* [Reconmap](https://github.com/reconmap/reconmap)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now, you can [watch reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to up your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine also identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators: for example, filter all alive subdomains with `http_status=200`, or all alive subdomains that have admin in their name with `http_status=200&name=admin`.
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines for any kind of requirement: users can tailor them to their specific objectives and preferences, and everything is customizable, from thread management to timeout settings and rate-limit configurations. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project will have a separate dashboard and all the scan results will be separated from other projects, while scan engines and configurations will be shared across all the projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members—Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan-related configurations and scan engines, create new users, add new tools, etc. The super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name, etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide you with the rationale on why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects, create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform advanced query lookups using natural-language-like and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/weeks)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart duplicate endpoint removal based on page title and content length to clean up the reconnaissance data
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
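# Example scan engine configuration: each top-level key below configures one task of the scan
# (subdomain discovery, port scan, OSINT, fuzzing, vulnerability scan, etc.);
# lines starting with '#' are optional settings left commented out.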
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
MAX_CONCURRENCY: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you may want to increase this for maximum performance.
MIN_CONCURRENCY: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
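As a rough rule of thumb (this sizing is only an assumption, not an official recommendation), you can base these values on the number of CPU cores available on the host before running the installer:

```bash
# Check how many CPU cores the host has
nproc

# Then adjust the scaling values in the .env file accordingly, e.g. on a
# larger machine you might raise the ceiling (illustrative values only):
#   MAX_CONCURRENCY=120
#   MIN_CONCURRENCY=10
nano .env
```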
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, please change it: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports.**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/2.0/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execution permissions, please change it, `sudo chmod +x update.sh`
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe write a blog post?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc): you get **$100** in credit and I get $25 in DO credit. This will help me test reNgine on a VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | Ditto. | AnonymousWP | 22 |
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses, such as those having the same content_length or page_title. Custom duplicate fields can also be set from the scan engine configuration.
- URL path filtering while initiating a scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom celery workflow to be able to run several tasks in parallel that are not dependent on each other; for example, the OSINT task and subdomain discovery will run in parallel, and directory and file fuzzing, vulnerability scan, screenshot gathering, etc. will run in parallel after port scan or URL fetching is completed. This will increase the efficiency of scans; instead of having one long flow of tasks, they can run independently on their own. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: this means the UI is updated with results while the command runs, instead of having to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, as well as vulnerability grouping based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 allows to run Nmap scripts and vuln scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results, now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add/remove/deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
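The stream_command bullet above describes live streaming of command output. As a rough illustration only (this is not reNgine's actual implementation; the function body and the example command below are assumptions), the idea can be sketched like this:
```python
import subprocess

def stream_command(cmd):
    """Yield a command's output line by line, as soon as it is produced."""
    process = subprocess.Popen(
        cmd,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    # Reading line by line lets the caller update the UI/database while the
    # command is still running, instead of waiting for it to finish.
    for line in iter(process.stdout.readline, ""):
        yield line.rstrip("\n")
    process.wait()

# Hypothetical usage: print results as they arrive.
for line in stream_command("ping -c 3 127.0.0.1"):
    print(line)
```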
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | README.md | <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v1.2.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="https://github.com/yogeshojha/rengine/issues" target="_blank"><img src="https://img.shields.io/github/issues/yogeshojha/rengine?color=red&logo=none" alt="reNgine Issues" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-USA--2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/Black--Hat--Arsenal-Europe--2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 1.1<br>More than just recon!</h3>
<h4>The only web application recon tool you will ever need!</h4>
<p>Quickly discover the attack surface, and identify vulnerabilities using highly customizable and powerful scan engines.
Enjoy peace of mind with reNgine's continuous monitoring, deeper reconnaissance, and open-source powered Vulnerability Scanner.</p>
<h4>What is reNgine?</h4>
<p align="left">reNgine is a web application reconnaissance suite that focuses on a highly configurable streamlined reconnaissance process via engines, reconnaissance data correlation, continuous monitoring, database backed reconnaissance data and a simple yet intuitive user interface. With features such as sub-scan, deeper co-relation, report generation, etc., reNgine aims to fill the gap in traditional reconnaissance tools and is likely to be a better alternative to existing commercial tools.
reNgine makes it easy for penetration testers and security auditors to gather reconnaissance data with minimal configuration.
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="center">
⭐<a href="https://rengine.wiki">reNgine Documentation</a>
·
<a href="https://rengine.wiki/changelog/">What's new</a>
·
<a href="https://github.com/yogeshojha/rengine/blob/master/.github/CONTRIBUTING.md">Contribute</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Report Bug</a>
·
<a href="https://github.com/yogeshojha/rengine/issues">Request Feature</a>⭐
</p>
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Table of Contents
* [About reNgine](#about-rengine)
* [Features](#features)
* [Documentation](#documentation)
* [Quick Installation](#quick-installation)
* [What's new in reNgine](#changelog)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Related Projects](#related-projects)
* [Support and Sponsoring](#support-and-sponsoring)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine 2.0 codenamed Jasper
I am currently working on reNgine 2.0, which will probably be announced sometime between May and August 2023. reNgine 2.0 will be the most advanced reNgine ever, a lot of work will be done in how scans are performed, things such as Pause and Resume Scan, Axiom Integration, more deeper correlation, Project Options, Multiple Tenants, etc.
Please submit your feature requests via GitHub issues.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## About reNgine
You can watch [reNgine 1.1 release trailer here.](https://www.youtube.com/watch?v=iy_6F7Vq8Lo) (Recommended)
<img src="https://user-images.githubusercontent.com/17223002/164993688-50eb95f2-3653-4ef7-bd3b-ef7a096824ea.jpeg">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
reNgine is a web application reconnaissance suite with a focus on a highly configurable, streamlined reconnaissance process. Backed by a database, with data correlation and organisation and a custom query "like" language for filtering reconnaissance data, reNgine aims to address the shortcomings of the traditional reconnaissance workflow.
The developers behind reNgine understand that reconnaissance data can be huge and manually searching for records to attack can be tedious, so features such as identifying subdomains of interest help penetration testers focus on attack rather than reconnaissance.
reNgine also focuses on continuous monitoring. Penetration testers can choose to schedule the scan at regular intervals and be notified via notification channels such as Discord, Slack and Telegram of any new subdomains or vulnerabilities identified, or any changes to the recon data.
Interoperability is something every reconnaissance tool needs, and reNgine is no different. Starting with reNgine 1.0, we have added features such as import and export of subdomains, endpoints, GF pattern matched endpoints, etc. This allows you to use your favourite reconnaissance workflow in conjunction with reNgine.
PDF reports are something every individual or team needs. From reNgine 1.1, reNgine also comes with the option to download PDF reports. You can also choose the type of report, a full scan report or just a reconnaissance report. We also understand that PDF reports need to be customisable. Choose the colour of the report you want, customise the executive summary, etc. You choose how your PDF report looks!
reNgine features highly configurable scan engines based on YAML, allowing penetration testers to create as many reconnaissance engines of their choice as they like, configure them as they like, and use them against any targets for scanning. These engines allow penetration testers to use the tools of their choice, with the configuration of their choice. Out of the box, reNgine comes with several scan engines such as Full Scan, Passive Scan, Screenshot Gathering, OSINT Engine, etc.
Our focus has always been on finding the right reconnaissance data with the least amount of effort. After several discussions with fellow hackers/pentesters, a screenshot gallery was a must, reNgine also comes with a screenshot gallery, and what's more exciting than having a screenshot gallery with filters, filter screenshots with HTTP status, technology, ports and services.
We also want our fellow hackers to stay ahead of the game, so reNgine also comes with automatic vulnerability reporting (ATM only Hackerone is supported, other platforms may come soon). This allows hackers to define their vulnerability reporting template and reNgine will do the rest of the work to report the vulnerability as soon as it is identified.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<img src="https://user-images.githubusercontent.com/17223002/164993945-aabdbb4a-2b9d-4951-ba27-5f2f5abd1d8b.gif">
## Features
* Reconnaissance: Subdomain Discovery, IP and Open Ports Identification, Endpoints Discovery, Directory and Files fuzzing, Screenshot gathering, Vulnerability scan using Nuclei, WHOIS Identification, WAF Detection etc.
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans and Subscans
* Automatically report Vulnerabilities to HackerOne
* Recon Data visualization
* OSINT Capabilities (Meta info Gathering, Employees Gathering, Email Address gathering with an option to look up passwords in leaked databases, dorks, etc.)
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Perform Advanced Query lookup using natural language alike and, or, not operations
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Documentation
You can find reNgine documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, please change it: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
A detailed installation guide can also be found [here](https://www.rffuste.com/2022/05/23/rengine-a-brief-overview/). Thanks to Rubén!
## Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/install/detailed/)
## Updating
1. Updating is as simple as running the following command:
```bash
sudo ./update.sh
```
If `update.sh` does not have execution permissions, please change it, `sudo chmod +x update.sh`
## Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Screenshots
### Scan Results
![](.github/screenshots/scan_results.gif)
### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Related Projects
There are many other great reconnaissance frameworks out there, you can use reNgine in conjunction with those tools. But they are great in their own right, and can sometimes produce better results than reNgine.
* [ReconFTW](https://github.com/six2dez/reconftw#sample-video)
* [Reconmap](https://github.com/reconmap/reconmap)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc); you get **$100** in credit and I get $25 DO credit. This will help me test reNgine on VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
## License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
| <p align="center">
<a href="https://rengine.wiki"><img src=".github/screenshots/banner.gif" alt=""/></a>
</p>
<p align="center"><a href="https://github.com/yogeshojha/rengine/releases" target="_blank"><img src="https://img.shields.io/badge/version-v2.0.0-informational?&logo=none" alt="reNgine Latest Version" /></a> <a href="https://www.gnu.org/licenses/gpl-3.0" target="_blank"><img src="https://img.shields.io/badge/License-GPLv3-red.svg?&logo=none" alt="License" /></a> <a href="#" target="_blank"><img src="https://img.shields.io/badge/first--timers--only-friendly-blue.svg?&logo=none" alt="" /></a> <a href="https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine" target="_blank"><img src="https://cdn.huntr.dev/huntr_security_badge_mono.svg" alt="" /></a> </p>
<p align="center">
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Asia-2023-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=Xk_YH83IQgg" target="_blank"><img src="https://img.shields.io/badge/Open--Source--Summit-2022-blue.svg?logo=none" alt="" /></a>
<a href="https://cyberweek.ae/2021/hitb-armory/" target="_blank"><img src="https://img.shields.io/badge/HITB--Armory-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=7uvP6MaQOX0" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--USA-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://drive.google.com/file/d/1Bh8lbf-Dztt5ViHJVACyrXMiglyICPQ2/view?usp=sharing" target="_blank"><img src="https://img.shields.io/badge/Defcon--Demolabs--29-2021-blue.svg?logo=none" alt="" /></a>
<a href="https://www.youtube.com/watch?v=A1oNOIc0h5A" target="_blank"><img src="https://img.shields.io/badge/BlackHat--Arsenal--Europe-2020-blue.svg?&logo=none" alt="" /></a>
</p>
<p align="center">
<a href="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/codeql-analysis.yml/badge.svg" alt="" /></a> <a href="https://github.com/yogeshojha/rengine/actions/workflows/build.yml" target="_blank"><img src="https://github.com/yogeshojha/rengine/actions/workflows/build.yml/badge.svg" alt="" /></a>
</p>
<p align="center">
<a href="https://discord.gg/H6WzebwX3H" target="_blank"><img src="https://img.shields.io/discord/880363103689277461" alt="" /></a>
</p>
<p align="center">
<a href="https://opensourcesecurityindex.io/" target="_blank" rel="noopener">
<img style="width: 282px; height: 56px" src="https://opensourcesecurityindex.io/badge.svg" alt="Open Source Security Index - Fastest Growing Open Source Security Projects" width="282" height="56" /> </a>
</p>
<h3>reNgine 2.0-jasper<br>Redefining the future of reconnaissance!</h3>
<h4>What is reNgine?</h4>
<p align="left">reNgine is your go-to web application reconnaissance suite that's designed to simplify and streamline the reconnaissance process for security professionals, penetration testers, and bug bounty hunters. With its highly configurable engines, data correlation capabilities, continuous monitoring, database-backed reconnaissance data, and an intuitive user interface, reNgine redefines how you gather critical information about your target web applications.
Traditional reconnaissance tools often fall short in terms of configurability and efficiency. reNgine addresses these shortcomings and emerges as an excellent alternative to existing commercial tools.
reNgine was created to address the limitations of traditional reconnaissance tools and provide a better alternative, even surpassing some commercial offerings. Whether you're a bug bounty hunter, a penetration tester, or a corporate security team, reNgine is your go-to solution for automating and enhancing your information-gathering efforts.
</p>
reNgine 2.0-jasper is out now; you can [watch the reNgine 2.0-jasper release trailer here!](https://youtu.be/VwkOWqiWW5g)
reNgine 2.0-Jasper would not have been possible without [@ocervell](https://github.com/ocervell)'s valuable contributions. [@ocervell](https://github.com/ocervell) did the majority of the refactoring, if not all of it, and also added a ton of features. Together, we wish to shape the future of web application reconnaissance, and it's developers like [@ocervell](https://github.com/ocervell) and a [ton of other developers and hackers from our community](https://github.com/yogeshojha/rengine/graphs/contributors) who inspire and drive us forward.
Thank you, [@ocervell](https://github.com/ocervell), for your outstanding work and unwavering commitment to reNgine.
Check out our contributors here: [Contributors](https://github.com/yogeshojha/rengine/graphs/contributors)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Documentation
You can find detailed documentation at [https://rengine.wiki](https://rengine.wiki)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Table of Contents
* [About reNgine](#about-rengine)
* [Workflow](#workflow)
* [Features](#features)
* [Scan Engine](#scan-engine)
* [Quick Installation](#quick-installation)
* [What's new in reNgine 2.0](#changelog)
* [Screenshots](#screenshots)
* [Contributing](#contributing)
* [reNgine Support](#rengine-support)
* [Support and Sponsoring](#support-and-sponsoring)
* [reNgine Bug Bounty Program](#rengine-bug-bounty-program)
* [License](#license)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### About reNgine
reNgine is not an ordinary reconnaissance suite; it's a game-changer! We've turbocharged the traditional workflow with groundbreaking features that are sure to ease your reconnaissance game. reNgine redefines the art of reconnaissance with highly configurable scan engines, recon data correlation, continuous monitoring, GPT-powered vulnerability reports, project management, role-based access control and more.
🦾 reNgine has advanced reconnaissance capabilities, harnessing a range of open-source tools to deliver a comprehensive web application reconnaissance experience. With its intuitive user interface, it excels in subdomain discovery, pinpointing IP addresses and open ports, collecting endpoints, conducting directory and file fuzzing, capturing screenshots, and performing vulnerability scans. To summarize, it does end-to-end reconnaissance. With WHOIS identification and WAF detection, it offers deep insights into target domains. Additionally, reNgine identifies misconfigured S3 buckets and finds interesting subdomains and URLs based on specific keywords to help you identify your next target, making it a go-to tool for efficient reconnaissance.
🗃️ Say goodbye to recon data chaos! reNgine seamlessly integrates with a database, providing you with unmatched data correlation and organization. Forget the hassle of grepping through JSON, TXT or CSV files. Plus, our custom query language lets you filter reconnaissance data effortlessly using natural-language-like operators, such as filtering all alive subdomains with `http_status=200`, or all alive subdomains that have admin in their name with `http_status=200&name=admin`.
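For illustration only, here is roughly what a filter such as `http_status=200&name=admin` boils down to conceptually, expressed as a Django ORM query against the `Subdomain` model used elsewhere in this codebase. reNgine's real query parser is more general, and the exact lookup types below are assumptions:
```python
from startScan.models import Subdomain

# Conceptual equivalent of the UI filter `http_status=200&name=admin`:
# each key=value pair contributes one condition to the queryset.
alive_admin_subdomains = Subdomain.objects.filter(
    http_status=200,
    name__icontains="admin",
)
```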
🔧 reNgine offers unparalleled flexibility through its highly configurable scan engines, based on a YAML-based configuration. It offers the freedom to create and customize recon scan engines based on any kind of requirement, users can tailor them to their specific objectives and preferences, from thread management to timeout settings and rate-limit configurations, everything is customizable. Additionally, reNgine offers a range of pre-configured scan engines right out of the box, including Full Scan, Passive Scan, Screenshot Gathering, and the OSINT Scan Engine. These ready-to-use engines eliminate the need for extensive manual setup, aligning perfectly with reNgine's core mission of simplifying the reconnaissance process and enabling users to effortlessly access the right reconnaissance data with minimal effort.
💎 Subscans: Subscan is a game-changing feature in reNgine, setting it apart as the only open-source tool of its kind to offer this capability. With Subscan, waiting for the entire pipeline to complete is a thing of the past. Now, users can swiftly respond to newfound discoveries during reconnaissance. Whether you've stumbled upon an intriguing subdomain and wish to conduct a focused port scan or want to delve deeper with a vulnerability assessment, reNgine has you covered.
📃 PDF Reports: In addition to its robust reconnaissance capabilities, reNgine goes the extra mile by simplifying the report generation process, recognizing the crucial role that PDF reports play in the realm of end-to-end reconnaissance. Users can effortlessly generate and customize PDF reports to suit their exact needs. Whether it's a Full Scan Report, Vulnerability Report, or a concise reconnaissance report, reNgine provides the flexibility to choose the report type that best communicates your findings. Moreover, the level of customization is unparalleled, allowing users to select report colors, fine-tune executive summaries, and even add personalized touches like company names and footers. With GPT integration, your reports aren't just a report, with remediation steps, and impacts, you get 360-degree view of the vulnerabilities you've uncovered.
🔖 Say Hello to Projects! reNgine 2.0 introduces a powerful addition that enables you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task. Each project has its own dashboard and its scan results are kept separate from other projects, while scan engines and configuration are shared across all projects.
⚙️ Roles and Permissions! Beginning with reNgine 2.0, we've taken your web application reconnaissance to a whole new level of control and security. Now, you can assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor, each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- 🔐 Sys Admin: Sys Admin is a super user that has permission to modify system and scan related configurations, scan engines, create new users, add new tools etc. Super user can initiate scans and subscans effortlessly.
- 🔍 Penetration Tester: Penetration Tester will be allowed to modify and initiate scans and subscans, add or update targets, etc. A penetration tester will not be allowed to modify system configurations.
- 📊 Auditor: Auditor can only view and download the report. An auditor can not change any system or scan related configurations nor can initiate any scans or subscans.
🚀 GPT Vulnerability Report Generation: Get ready for the future of penetration testing reports with reNgine's groundbreaking feature: "GPT-Powered Report Generation"! With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments that read like they were written by a human security expert! **But that's not all!** Our GPT-driven reports go the extra mile by scouring the web for related news articles, blogs, and references, so you have a 360-degree view of the vulnerabilities you've uncovered. With reNgine 2.0 revolutionize your penetration testing game and impress your clients with reports that are not just informative but engaging and comprehensive with detailed analysis on impact assessment and remediation strategies.
🥷 GPT-Powered Attack Surface Generation: With reNgine 2.0, reNgine seamlessly integrates with GPT to identify the attacks that you can likely perform on a subdomain. By making use of reconnaissance data such as page title, open ports, subdomain name etc., reNgine can advise you on the attacks you could perform on a target. reNgine will also provide you with the rationale for why the specific attack is likely to be successful.
🧭 Continuous monitoring: Continuous monitoring is at the core of reNgine's mission, and its robust continuous monitoring feature ensures that your targets are under constant scrutiny. With the flexibility to schedule scans at regular intervals, penetration testers can effortlessly stay informed about their targets. What sets reNgine apart is its seamless integration with popular notification channels such as Discord, Slack, and Telegram, delivering real-time alerts for newly discovered subdomains, vulnerabilities, or any changes in reconnaissance data.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Workflow
<img src="https://github.com/yogeshojha/rengine/assets/17223002/10c475b8-b4a8-440d-9126-77fe2038a386">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Features
* Reconnaissance:
* Subdomain Discovery
* IP and Open Ports Identification
* Endpoints Discovery
* Directory/Files fuzzing
* Screenshot Gathering
* Vulnerability Scan
* Nuclei
* Dalfox XSS Scanner
* CRLFuzzer
* Misconfigured S3 Scanner
* WHOIS Identification
* WAF Detection
* OSINT Capabilities
* Meta info Gathering
* Employees Gathering
* Email Address gathering
* Google Dorking for sensitive info and urls
* Projects: create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
* Perform Advanced Query lookup using natural language alike and, or, not operations
* Highly configurable YAML-based Scan Engines
* Support for Parallel Scans
* Support for Subscans
* Recon Data visualization
* GPT Vulnerability Description, Impact and Remediation generation
* GPT Attack Surface Generator
* Multiple Roles and Permissions to cater a team's need
* Customizable Alerts/Notifications on Slack, Discord, and Telegram
* Automatically report Vulnerabilities to HackerOne
* Recon Notes and Todos
* Clocked Scans (Run reconnaissance exactly at X Hours and Y minutes) and Periodic Scans (Runs reconnaissance every X minutes/hours/days/week)
* Proxy Support
* Screenshot Gallery with Filters
* Powerful recon data filtering with autosuggestions
* Recon Data changes, find new/removed subdomains/endpoints
* Tag targets into the Organization
* Smart Duplicate endpoint removal based on page title and content length to clean up the reconnaissance data (see the sketch after this list)
* Identify Interesting Subdomains
* Custom GF patterns and custom Nuclei Templates
* Edit tool-related configuration files (Nuclei, Subfinder, Naabu, amass)
* Add external tools from Github/Go
* Interoperable with other tools, Import/Export Subdomains/Endpoints
* Import Targets via IP and/or CIDRs
* Report Generation
* Toolbox: Comes bundled with most commonly used tools during penetration testing such as whois lookup, CMS detector, CVE lookup, etc.
* Identification of related domains and related TLDs for targets
* Find actionable insights such as Most Common Vulnerability, Most Common CVE ID, Most Vulnerable Target/Subdomain, etc.
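As referenced in the duplicate endpoint removal bullet above, the core idea is simply to key endpoints on the configured duplicate fields and keep the first occurrence. The following is a minimal sketch under that assumption (function name, data shape and defaults are illustrative, not reNgine's actual task code):
```python
# Minimal sketch: keep only the first endpoint seen for each
# (content_length, page_title) combination.
def remove_duplicate_endpoints(endpoints, duplicate_fields=("content_length", "page_title")):
    seen = set()
    unique = []
    for endpoint in endpoints:  # each endpoint is assumed to be a dict
        key = tuple(endpoint.get(field) for field in duplicate_fields)
        if key in seen:
            continue
        seen.add(key)
        unique.append(endpoint)
    return unique
```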
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Scan Engine
```yaml
subdomain_discovery: {
'uses_tools': [
'subfinder',
'ctfr',
'sublist3r',
'tlsx',
'oneforall',
'netlas'
],
'enable_http_crawl': true,
'threads': 30,
'timeout': 5,
}
http_crawl: {}
port_scan: {
'enable_http_crawl': true,
'timeout': 5,
# 'exclude_ports': [],
# 'exclude_subdomains': true,
'ports': ['top-100'],
'rate_limit': 150,
'threads': 30,
'passive': false,
# 'use_naabu_config': false,
# 'enable_nmap': true,
# 'nmap_cmd': '',
# 'nmap_script': '',
# 'nmap_script_args': ''
}
osint: {
'discover': [
'emails',
'metainfo',
'employees'
],
'dorks': [
'login_pages',
'admin_panels',
'dashboard_pages',
'stackoverflow',
'social_media',
'project_management',
'code_sharing',
'config_files',
'jenkins',
'wordpress_files',
'php_error',
'exposed_documents',
'db_files',
'git_exposed'
],
'custom_dorks': [
{
'lookup_site': 'google.com',
'lookup_keywords': '/home/'
},
{
'lookup_site': '_target_',
'lookup_extensions': 'jpg,png'
}
],
'intensity': 'normal',
'documents_limit': 50
}
dir_file_fuzz: {
'auto_calibration': true,
'enable_http_crawl': true,
'rate_limit': 150,
'extensions': ['html', 'php','git','yaml','conf','cnf','config','gz','env','log','db','mysql','bak','asp','aspx','txt','conf','sql','json','yml','pdf'],
'follow_redirect': false,
'max_time': 0,
'match_http_status': [200, 204],
'recursive_level': 2,
'stop_on_error': false,
'timeout': 5,
'threads': 30,
'wordlist_name': 'dicc'
}
fetch_url: {
'uses_tools': [
'gospider',
'hakrawler',
'waybackurls',
'katana'
],
'remove_duplicate_endpoints': true,
'duplicate_fields': [
'content_length',
'page_title'
],
'enable_http_crawl': true,
'gf_patterns': ['debug_logic', 'idor', 'interestingEXT', 'interestingparams', 'interestingsubs', 'lfi', 'rce', 'redirect', 'sqli', 'ssrf', 'ssti', 'xss'],
'ignore_file_extensions': ['png', 'jpg', 'jpeg', 'gif', 'mp4', 'mpeg', 'mp3']
# 'exclude_subdomains': true
}
vulnerability_scan: {
'run_nuclei': false,
'run_dalfox': false,
'run_crlfuzz': false,
'run_s3scanner': true,
'enable_http_crawl': true,
'concurrency': 50,
'intensity': 'normal',
'rate_limit': 150,
'retries': 1,
'timeout': 5,
'fetch_gpt_report': true,
'nuclei': {
'use_conf': false,
'severities': [
'unknown',
'info',
'low',
'medium',
'high',
'critical'
],
# 'tags': [],
# 'templates': [],
# 'custom_templates': [],
},
's3scanner': {
'threads': 100,
'providers': [
'aws',
'gcp',
'digitalocean',
'dreamhost',
'linode'
]
}
}
waf_detection: {}
screenshot: {
'enable_http_crawl': true,
'intensity': 'normal',
'timeout': 10,
'threads': 40
}
# custom_header: "Cookie: Test"
```
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Quick Installation
**Note:** Only Ubuntu/VPS
1. Clone this repo
```bash
git clone https://github.com/yogeshojha/rengine && cd rengine
```
1. Edit the dotenv file, **please make sure to change the password for postgresql `POSTGRES_PASSWORD`!**
```bash
nano .env
```
1. In the dotenv file, you may also modify the Scaling Configurations
```bash
MAX_CONCURRENCY=80
MIN_CONCURRENCY=10
```
MAX_CONCURRENCY: This parameter specifies the maximum number of reNgine's concurrent Celery worker processes that can be spawned. In this case, it's set to 80, meaning that the application can utilize up to 80 concurrent worker processes to execute tasks concurrently. This is useful for handling a high volume of scans or when you want to scale up processing power during periods of high demand. If you have more CPU cores, you may want to increase this value for maximum performance.
MIN_CONCURRENCY: On the other hand, MIN_CONCURRENCY specifies the minimum number of concurrent worker processes that should be maintained, even during periods of lower demand. In this example, it's set to 10, which means that even when there are fewer tasks to process, at least 10 worker processes will be kept running. This helps ensure that the application can respond promptly to incoming tasks without the overhead of repeatedly starting and stopping worker processes.
These settings allow for dynamic scaling of Celery workers, ensuring that the application efficiently manages its workload by adjusting the number of concurrent workers based on the workload's size and complexity.
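For illustration, here is one common way such bounds are wired into a Celery worker via its `--autoscale=max,min` option. This is only a sketch: the application name (`reNgine`) and the exact entrypoint used by reNgine's containers are assumptions, and the real startup scripts may differ:
```python
import os
import subprocess

# Scaling bounds from the .env file shown above; defaults mirror the example values.
max_concurrency = int(os.environ.get("MAX_CONCURRENCY", "80"))
min_concurrency = int(os.environ.get("MIN_CONCURRENCY", "10"))

# Celery's --autoscale flag takes "max,min": the worker pool grows up to
# max_concurrency under load and shrinks back toward min_concurrency when idle.
subprocess.run([
    "celery", "-A", "reNgine", "worker",
    f"--autoscale={max_concurrency},{min_concurrency}",
    "--loglevel=info",
])
```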
1. Run the installation script. Please keep an eye out for any prompts; you will also be asked for a username and password for reNgine.
```bash
sudo ./install.sh
```
If `install.sh` does not have execute permission, please change it: `chmod +x install.sh`
**reNgine can now be accessed from <https://127.0.0.1> or if you're on the VPS <https://your_vps_ip_address>**
**Unless you are on the development branch, please do not access reNgine via any ports**
### Installation (Mac/Windows/Other)
Installation instructions can be found at [https://reNgine.wiki/install/detailed/](https://reNgine.wiki/2.0/install/detailed/)
### Updating
1. Updating is as simple as running the following command:
```bash
cd rengine && sudo ./update.sh
```
If `update.sh` does not have execution permissions, please change it, `sudo chmod +x update.sh`
### Changelog
[Please find the latest release notes and changelog here.](https://rengine.wiki/changelog/)
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Screenshots
#### Scan Results
![](.github/screenshots/scan_results.gif)
#### General Usage
<img src="https://user-images.githubusercontent.com/17223002/164993781-b6012995-522b-480a-a8bf-911193d35894.gif">
#### Initiating Subscan
<img src="https://user-images.githubusercontent.com/17223002/164993749-1ad343d6-8ce7-43d6-aee7-b3add0321da7.gif">
#### Recon Data filtering
<img src="https://user-images.githubusercontent.com/17223002/164993687-b63f3de8-e033-4ac0-808e-a2aa377d3cf8.gif">
#### Report Generation
<img src="https://user-images.githubusercontent.com/17223002/164993689-c796c6cd-eb61-43f4-800d-08aba9740088.gif">
#### Toolbox
<img src="https://user-images.githubusercontent.com/17223002/164993751-d687e88a-eb79-440f-9dc0-0ad006901620.gif">
#### Adding Custom tool in Tools Arsenal
<img src="https://user-images.githubusercontent.com/17223002/164993670-466f6459-9499-498b-a9bd-526476d735a7.gif">
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire and create. Every contribution you make is **greatly appreciated**. Your contributions can be as simple as fixing the indentation or UI, or as complex as adding new modules and features.
See the [Contributing Guide](.github/CONTRIBUTING.md) to get started.
You can also [join our Discord channel #development](https://discord.gg/JuhHdHTtwd) for any development related questions.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### First-time Open Source contributors
Please note that reNgine is beginner friendly. If you have never done open-source before, we encourage you to do so. **We will be happy and proud of your first PR ever.**
You can start by resolving any [open issues](https://github.com/yogeshojha/rengine/issues).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Support
Please do not use GitHub for support requests. Instead, [join our Discord channel #support](https://discord.gg/azv6fzhNCE).
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### Support and Sponsoring
Over the past few years, I have been working hard on reNgine to add new features with the sole aim of making it the de facto standard for reconnaissance. I spend most of my free time and weekends working on reNgine. I do this in addition to my day job. I am happy to have received such overwhelming support from the community. But to keep this project alive, I am looking for financial support.
| Paypal | Bitcoin | Ethereum |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------: |
|[https://www.paypal.com/paypalme/yogeshojha11](https://www.paypal.com/paypalme/yogeshojha11) | `35AiKyNswNZ4TZUSdriHopSCjNMPi63BCX` | `0xe7A337Da6ff98A28513C26A7Fec8C9b42A63d346`
OR
* Add a [GitHub Star](https://github.com/yogeshojha/rengine) to the project.
* Tweet about this project, or maybe blogs?
* Maybe nominate me for [GitHub Stars?](https://stars.github.com/nominate/)
* Join DigitalOcean using my [referral link](https://m.do.co/c/e353502d19fc); you get **$100** in credit and I get $25 DO credit. This will help me test reNgine on VPS before I release any major features.
It takes a considerable amount of time to add new features and make sure everything works. Donating is your way of saying: **reNgine is awesome**.
Any support is greatly appreciated! Thank you!
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### reNgine Bug Bounty Program
[![huntr](https://cdn.huntr.dev/huntr_security_badge_mono.svg)](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine)
Security researchers, welcome aboard! I'm excited to announce the reNgine bug bounty programme in collaboration with [huntr.dev](https://huntr.dev), which means that you will be rewarded for any vulnerabilities you find in reNgine.
Thank you for your interest in reporting reNgine vulnerabilities! If you are aware of any potential security vulnerabilities in reNgine, we encourage you to report them immediately via [huntr.dev](https://huntr.dev/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Fyogeshojha%2Frengine).
**Please do not disclose vulnerabilities via Github issues/blogs/tweets after/before reporting to huntr.dev as this is explicitly against the disclosure policy of huntr.dev and reNgine and will not be considered for monetary rewards.**
Please note that the reNgine maintainer does not set the bounty amount.
The bounty reward is determined by an industry-first equation developed by huntr.dev to understand the popularity, impact and value of repositories to the open-source community.
**What do I expect from security researchers?**
* Patience: Please note that I am currently the only maintainer in reNgine and it will take some time to validate your report. I ask for your patience during this process.
* Respect for privacy and security reports: Please do not publicly disclose any vulnerabilities (including GitHub issues) before or after reporting them on huntr.dev! This is against the disclosure policy and will not be rewarded.
* Respect the rules
**What do you get in return?**
* Thanks from the maintainer
* Monetary rewards
* CVE ID(s)
Please find the [FAQ](https://www.huntr.dev/faq) and [Responsible disclosure policy](https://www.huntr.dev/policy/) from huntr.dev.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
### License
Distributed under the GNU GPL v3 License. See [LICENSE](LICENSE) for more information.
![-----------------------------------------------------](https://raw.githubusercontent.com/andreasbm/readme/master/assets/lines/aqua.png)
<p align="right">(ChatGPT was used to write some or most part of this README section.)</p>
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | Thank you for pointing out ;p | yogeshojha | 23 |
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses (having the same content_length or page_title). Custom duplicate fields can also be set from the scan engine configuration.
- URL Path filtering while initiating scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass the /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLS, etc as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom celery workflow to be able to run several tasks in parallel that are not dependent on each other, such as the OSINT task and subdomain discovery, which will run in parallel, while directory and file fuzzing, vulnerability scan, screenshot gathering etc. will run in parallel after port scan or url fetching is completed. This will increase the efficiency of scans; instead of having one long flow of tasks, they can run independently on their own. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: this means the UI is updated with results while the command runs and does not have to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, as well as vulnerability grouping based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 allows running Nmap scripts and vuln scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results, now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add/remove/deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | web/dashboard/views.py | from datetime import timedelta
from targetApp.models import Domain
from startScan.models import *
from django.utils import timezone
from django.shortcuts import render, redirect
from django.http import HttpResponse
from django.db.models.functions import TruncDay
from django.contrib.auth.decorators import login_required
from django.dispatch import receiver
from django.contrib.auth.signals import user_logged_out, user_logged_in
from django.contrib import messages
from django.contrib.auth import update_session_auth_hash
from django.contrib.auth.forms import PasswordChangeForm
from django.db.models import Count, Value, CharField, Q
def index(request):
domain_count = Domain.objects.all().count()
endpoint_count = EndPoint.objects.all().count()
scan_count = ScanHistory.objects.all().count()
subdomain_count = Subdomain.objects.all().count()
subdomain_with_ip_count = Subdomain.objects.filter(ip_addresses__isnull=False).count()
alive_count = \
Subdomain.objects.all().exclude(http_status__exact=0).count()
endpoint_alive_count = \
EndPoint.objects.filter(http_status__exact=200).count()
vulnerabilities = Vulnerability.objects.all()
info_count = vulnerabilities.filter(severity=0).count()
low_count = vulnerabilities.filter(severity=1).count()
medium_count = vulnerabilities.filter(severity=2).count()
high_count = vulnerabilities.filter(severity=3).count()
critical_count = vulnerabilities.filter(severity=4).count()
unknown_count = vulnerabilities.filter(severity=-1).count()
vulnerability_feed = Vulnerability.objects.all().order_by(
'-discovered_date')[:20]
activity_feed = ScanActivity.objects.all().order_by('-time')[:20]
total_vul_count = info_count + low_count + \
medium_count + high_count + critical_count + unknown_count
total_vul_ignore_info_count = low_count + \
medium_count + high_count + critical_count
most_common_vulnerability = Vulnerability.objects.values("name", "severity").annotate(count=Count('name')).order_by("-count")[:10]
last_week = timezone.now() - timedelta(days=7)
count_targets_by_date = Domain.objects.filter(
insert_date__gte=last_week).annotate(
date=TruncDay('insert_date')).values("date").annotate(
created_count=Count('id')).order_by("-date")
count_subdomains_by_date = Subdomain.objects.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_vulns_by_date = Vulnerability.objects.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_scans_by_date = ScanHistory.objects.filter(
start_scan_date__gte=last_week).annotate(
date=TruncDay('start_scan_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_endpoints_by_date = EndPoint.objects.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
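    # Build per-day series aligned to the last 7 calendar days; days with no
    # matching records fall back to 0 so the dashboard charts stay aligned.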
last_7_dates = [(timezone.now() - timedelta(days=i)).date()
for i in range(0, 7)]
targets_in_last_week = []
subdomains_in_last_week = []
vulns_in_last_week = []
scans_in_last_week = []
endpoints_in_last_week = []
for date in last_7_dates:
_target = count_targets_by_date.filter(date=date)
_subdomain = count_subdomains_by_date.filter(date=date)
_vuln = count_vulns_by_date.filter(date=date)
_scan = count_scans_by_date.filter(date=date)
_endpoint = count_endpoints_by_date.filter(date=date)
if _target:
targets_in_last_week.append(_target[0]['created_count'])
else:
targets_in_last_week.append(0)
if _subdomain:
subdomains_in_last_week.append(_subdomain[0]['count'])
else:
subdomains_in_last_week.append(0)
if _vuln:
vulns_in_last_week.append(_vuln[0]['count'])
else:
vulns_in_last_week.append(0)
if _scan:
scans_in_last_week.append(_scan[0]['count'])
else:
scans_in_last_week.append(0)
if _endpoint:
endpoints_in_last_week.append(_endpoint[0]['count'])
else:
endpoints_in_last_week.append(0)
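    # Flip each series so the dashboard charts read oldest to newest.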
targets_in_last_week.reverse()
subdomains_in_last_week.reverse()
vulns_in_last_week.reverse()
scans_in_last_week.reverse()
endpoints_in_last_week.reverse()
context = {
'dashboard_data_active': 'active',
'domain_count': domain_count,
'endpoint_count': endpoint_count,
'scan_count': scan_count,
'subdomain_count': subdomain_count,
'subdomain_with_ip_count': subdomain_with_ip_count,
'alive_count': alive_count,
'endpoint_alive_count': endpoint_alive_count,
'info_count': info_count,
'low_count': low_count,
'medium_count': medium_count,
'high_count': high_count,
'critical_count': critical_count,
'unknown_count': unknown_count,
'most_common_vulnerability': most_common_vulnerability,
'total_vul_count': total_vul_count,
'total_vul_ignore_info_count': total_vul_ignore_info_count,
'vulnerability_feed': vulnerability_feed,
'activity_feed': activity_feed,
'targets_in_last_week': targets_in_last_week,
'subdomains_in_last_week': subdomains_in_last_week,
'vulns_in_last_week': vulns_in_last_week,
'scans_in_last_week': scans_in_last_week,
'endpoints_in_last_week': endpoints_in_last_week,
'last_7_dates': last_7_dates,
}
context['total_ips'] = IpAddress.objects.all().count()
context['most_used_port'] = Port.objects.annotate(count=Count('ports')).order_by('-count')[:7]
context['most_used_ip'] = IpAddress.objects.annotate(count=Count('ip_addresses')).order_by('-count').exclude(ip_addresses__isnull=True)[:7]
context['most_used_tech'] = Technology.objects.annotate(count=Count('technologies')).order_by('-count')[:7]
context['most_common_cve'] = CveId.objects.annotate(nused=Count('cve_ids')).order_by('-nused').values('name', 'nused')[:7]
context['most_common_cwe'] = CweId.objects.annotate(nused=Count('cwe_ids')).order_by('-nused').values('name', 'nused')[:7]
context['most_common_tags'] = VulnerabilityTags.objects.annotate(nused=Count('vuln_tags')).order_by('-nused').values('name', 'nused')[:7]
context['asset_countries'] = CountryISO.objects.annotate(count=Count('ipaddress')).order_by('-count')
return render(request, 'dashboard/index.html', context)
def profile(request):
if request.method == 'POST':
form = PasswordChangeForm(request.user, request.POST)
if form.is_valid():
user = form.save()
update_session_auth_hash(request, user)
messages.success(
request,
'Your password was successfully changed!')
return redirect('profile')
else:
messages.error(request, 'Please correct the error below.')
else:
form = PasswordChangeForm(request.user)
return render(request, 'dashboard/profile.html', {
'form': form
})
@receiver(user_logged_out)
def on_user_logged_out(sender, request, **kwargs):
messages.add_message(
request,
messages.INFO,
'You have been successfully logged out. Thank you ' +
'for using reNgine.')
@receiver(user_logged_in)
def on_user_logged_in(sender, request, **kwargs):
messages.add_message(
request,
messages.INFO,
'Hi @' +
request.user.username +
' welcome back!')
def search(request):
return render(request, 'dashboard/search.html')
| import json
import logging
from datetime import timedelta
from django.contrib.auth import get_user_model
from django.contrib import messages
from django.contrib.auth import update_session_auth_hash
from django.contrib.auth.forms import PasswordChangeForm
from django.contrib.auth.signals import user_logged_in, user_logged_out
from django.contrib import messages
from django.db.models import Count
from django.db.models.functions import TruncDay
from django.dispatch import receiver
from django.shortcuts import redirect, render, get_object_or_404
from django.utils import timezone
from django.http import HttpResponseRedirect, JsonResponse
from django.urls import reverse
from rolepermissions.roles import assign_role, clear_roles
from rolepermissions.decorators import has_permission_decorator
from django.template.defaultfilters import slugify
from startScan.models import *
from targetApp.models import Domain
from dashboard.models import *
from reNgine.definitions import *
logger = logging.getLogger(__name__)
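# Dashboard index view: all statistics below are scoped to the project resolved
# from the URL slug; unknown slugs redirect to the 404 page.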
def index(request, slug):
try:
project = Project.objects.get(slug=slug)
except Exception as e:
# if project not found redirect to 404
return HttpResponseRedirect(reverse('four_oh_four'))
domains = Domain.objects.filter(project=project)
subdomains = Subdomain.objects.filter(target_domain__project=project)
endpoints = EndPoint.objects.filter(target_domain__project=project)
scan_histories = ScanHistory.objects.filter(domain__project=project)
vulnerabilities = Vulnerability.objects.filter(target_domain__project=project)
scan_activities = ScanActivity.objects.filter(scan_of__in=scan_histories)
domain_count = domains.count()
endpoint_count = endpoints.count()
scan_count = scan_histories.count()
subdomain_count = subdomains.count()
subdomain_with_ip_count = subdomains.filter(ip_addresses__isnull=False).count()
alive_count = subdomains.exclude(http_status__exact=0).count()
endpoint_alive_count = endpoints.filter(http_status__exact=200).count()
info_count = vulnerabilities.filter(severity=0).count()
low_count = vulnerabilities.filter(severity=1).count()
medium_count = vulnerabilities.filter(severity=2).count()
high_count = vulnerabilities.filter(severity=3).count()
critical_count = vulnerabilities.filter(severity=4).count()
unknown_count = vulnerabilities.filter(severity=-1).count()
vulnerability_feed = vulnerabilities.order_by('-discovered_date')[:50]
activity_feed = scan_activities.order_by('-time')[:50]
total_vul_count = info_count + low_count + \
medium_count + high_count + critical_count + unknown_count
total_vul_ignore_info_count = low_count + \
medium_count + high_count + critical_count
last_week = timezone.now() - timedelta(days=7)
count_targets_by_date = domains.filter(
insert_date__gte=last_week).annotate(
date=TruncDay('insert_date')).values("date").annotate(
created_count=Count('id')).order_by("-date")
count_subdomains_by_date = subdomains.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_vulns_by_date = vulnerabilities.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_scans_by_date = scan_histories.filter(
start_scan_date__gte=last_week).annotate(
date=TruncDay('start_scan_date')).values("date").annotate(
count=Count('id')).order_by("-date")
count_endpoints_by_date = endpoints.filter(
discovered_date__gte=last_week).annotate(
date=TruncDay('discovered_date')).values("date").annotate(
count=Count('id')).order_by("-date")
last_7_dates = [(timezone.now() - timedelta(days=i)).date()
for i in range(0, 7)]
targets_in_last_week = []
subdomains_in_last_week = []
vulns_in_last_week = []
scans_in_last_week = []
endpoints_in_last_week = []
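    # Build one data point per day for the last 7 days, filling days without
    # records with 0, then reverse so the charts read oldest to newest.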
for date in last_7_dates:
_target = count_targets_by_date.filter(date=date)
_subdomain = count_subdomains_by_date.filter(date=date)
_vuln = count_vulns_by_date.filter(date=date)
_scan = count_scans_by_date.filter(date=date)
_endpoint = count_endpoints_by_date.filter(date=date)
if _target:
targets_in_last_week.append(_target[0]['created_count'])
else:
targets_in_last_week.append(0)
if _subdomain:
subdomains_in_last_week.append(_subdomain[0]['count'])
else:
subdomains_in_last_week.append(0)
if _vuln:
vulns_in_last_week.append(_vuln[0]['count'])
else:
vulns_in_last_week.append(0)
if _scan:
scans_in_last_week.append(_scan[0]['count'])
else:
scans_in_last_week.append(0)
if _endpoint:
endpoints_in_last_week.append(_endpoint[0]['count'])
else:
endpoints_in_last_week.append(0)
targets_in_last_week.reverse()
subdomains_in_last_week.reverse()
vulns_in_last_week.reverse()
scans_in_last_week.reverse()
endpoints_in_last_week.reverse()
context = {
'dashboard_data_active': 'active',
'domain_count': domain_count,
'endpoint_count': endpoint_count,
'scan_count': scan_count,
'subdomain_count': subdomain_count,
'subdomain_with_ip_count': subdomain_with_ip_count,
'alive_count': alive_count,
'endpoint_alive_count': endpoint_alive_count,
'info_count': info_count,
'low_count': low_count,
'medium_count': medium_count,
'high_count': high_count,
'critical_count': critical_count,
'unknown_count': unknown_count,
'total_vul_count': total_vul_count,
'total_vul_ignore_info_count': total_vul_ignore_info_count,
'vulnerability_feed': vulnerability_feed,
'activity_feed': activity_feed,
'targets_in_last_week': targets_in_last_week,
'subdomains_in_last_week': subdomains_in_last_week,
'vulns_in_last_week': vulns_in_last_week,
'scans_in_last_week': scans_in_last_week,
'endpoints_in_last_week': endpoints_in_last_week,
'last_7_dates': last_7_dates,
'project': project
}
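    # Top-N statistics (IPs, ports, technologies, CVE/CWE IDs, tags, countries),
    # restricted to assets discovered under this project's subdomains.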
ip_addresses = IpAddress.objects.filter(ip_addresses__in=subdomains)
context['total_ips'] = ip_addresses.count()
context['most_used_port'] = Port.objects.filter(ports__in=ip_addresses).annotate(count=Count('ports')).order_by('-count')[:7]
context['most_used_ip'] = ip_addresses.annotate(count=Count('ip_addresses')).order_by('-count').exclude(ip_addresses__isnull=True)[:7]
context['most_used_tech'] = Technology.objects.filter(technologies__in=subdomains).annotate(count=Count('technologies')).order_by('-count')[:7]
context['most_common_cve'] = CveId.objects.filter(cve_ids__in=vulnerabilities).annotate(nused=Count('cve_ids')).order_by('-nused').values('name', 'nused')[:7]
context['most_common_cwe'] = CweId.objects.filter(cwe_ids__in=vulnerabilities).annotate(nused=Count('cwe_ids')).order_by('-nused').values('name', 'nused')[:7]
context['most_common_tags'] = VulnerabilityTags.objects.filter(vuln_tags__in=vulnerabilities).annotate(nused=Count('vuln_tags')).order_by('-nused').values('name', 'nused')[:7]
context['asset_countries'] = CountryISO.objects.filter(ipaddress__in=ip_addresses).annotate(count=Count('ipaddress')).order_by('-count')
return render(request, 'dashboard/index.html', context)
def profile(request, slug):
if request.method == 'POST':
form = PasswordChangeForm(request.user, request.POST)
if form.is_valid():
user = form.save()
update_session_auth_hash(request, user)
messages.success(
request,
'Your password was successfully changed!')
return redirect('profile')
else:
messages.error(request, 'Please correct the error below.')
else:
form = PasswordChangeForm(request.user)
return render(request, 'dashboard/profile.html', {
'form': form
})
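# Admin interface views: these require the "modify system configurations"
# permission; unauthorised users are redirected to the 404 page.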
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def admin_interface(request, slug):
UserModel = get_user_model()
users = UserModel.objects.all().order_by('date_joined')
return render(
request,
'dashboard/admin.html',
{
'users': users
}
)
@has_permission_decorator(PERM_MODIFY_SYSTEM_CONFIGURATIONS, redirect_url=FOUR_OH_FOUR_URL)
def admin_interface_update(request, slug):
mode = request.GET.get('mode')
user_id = request.GET.get('user')
if user_id:
UserModel = get_user_model()
user = UserModel.objects.get(id=user_id)
if request.method == 'GET':
if mode == 'change_status':
user.is_active = not user.is_active
user.save()
elif request.method == 'POST':
if mode == 'delete':
try:
user.delete()
messages.add_message(
request,
messages.INFO,
f'User {user.username} successfully deleted.'
)
messageData = {'status': True}
except Exception as e:
logger.error(e)
messageData = {'status': False}
elif mode == 'update':
try:
response = json.loads(request.body)
role = response.get('role')
change_password = response.get('change_password')
clear_roles(user)
assign_role(user, role)
if change_password:
user.set_password(change_password)
user.save()
messageData = {'status': True}
except Exception as e:
logger.error(e)
messageData = {'status': False, 'error': str(e)}
elif mode == 'create':
try:
response = json.loads(request.body)
UserModel = get_user_model()
user = UserModel.objects.create_user(
username=response.get('username'),
password=response.get('password')
)
assign_role(user, response.get('role'))
messageData = {'status': True}
except Exception as e:
logger.error(e)
messageData = {'status': False, 'error': str(e)}
return JsonResponse(messageData)
return HttpResponseRedirect(reverse('admin_interface', kwargs={'slug': slug}))
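# Signal receivers: show an informational flash message when a user logs in or out.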
@receiver(user_logged_out)
def on_user_logged_out(sender, request, **kwargs):
messages.add_message(
request,
messages.INFO,
'You have been successfully logged out. Thank you ' +
'for using reNgine.')
@receiver(user_logged_in)
def on_user_logged_in(sender, request, **kwargs):
messages.add_message(
request,
messages.INFO,
'Hi @' +
request.user.username +
' welcome back!')
def search(request, slug):
return render(request, 'dashboard/search.html')
def four_oh_four(request):
return render(request, '404.html')
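# Project management views: list projects, delete a project, and first-run onboarding.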
def projects(request, slug):
context = {}
context['projects'] = Project.objects.all()
return render(request, 'dashboard/projects.html', context)
def delete_project(request, id):
obj = get_object_or_404(Project, id=id)
if request.method == "POST":
obj.delete()
responseData = {
'status': 'true'
}
messages.add_message(
request,
messages.INFO,
'Project successfully deleted!')
else:
responseData = {'status': 'false'}
messages.add_message(
request,
messages.ERROR,
'Oops! Project could not be deleted!')
return JsonResponse(responseData)
def onboarding(request):
context = {}
error = ''
if request.method == "POST":
project_name = request.POST.get('project_name')
slug = slugify(project_name)
create_username = request.POST.get('create_username')
create_password = request.POST.get('create_password')
create_user_role = request.POST.get('create_user_role')
key_openai = request.POST.get('key_openai')
key_netlas = request.POST.get('key_netlas')
insert_date = timezone.now()
try:
Project.objects.create(
name=project_name,
slug=slug,
insert_date=insert_date
)
except Exception as e:
error = ' Could not create project, Error: ' + str(e)
try:
if create_username and create_password and create_user_role:
UserModel = get_user_model()
user = UserModel.objects.create_user(
username=create_username,
password=create_password
)
assign_role(user, create_user_role)
except Exception as e:
error = ' Could not create User, Error: ' + str(e)
if key_openai:
openai_api_key = OpenAiAPIKey.objects.first()
if openai_api_key:
openai_api_key.key = key_openai
openai_api_key.save()
else:
OpenAiAPIKey.objects.create(key=key_openai)
if key_netlas:
netlas_api_key = NetlasAPIKey.objects.first()
if netlas_api_key:
netlas_api_key.key = key_netlas
netlas_api_key.save()
else:
NetlasAPIKey.objects.create(key=key_netlas)
context['error'] = error
# check is any projects exists, then redirect to project list else onboarding
projects = Project.objects.all()
context['openai_key'] = OpenAiAPIKey.objects.first()
context['netlas_key'] = NetlasAPIKey.objects.first()
if len(projects):
slug = projects[0].slug
return HttpResponseRedirect(reverse('dashboardIndex', kwargs={'slug': slug}))
return render(request, 'dashboard/onboarding.html', context)
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | ## Information exposure through an exception
[Stack trace information](1) flows to this location and may be exposed to an external user.
[Stack trace information](2) flows to this location and may be exposed to an external user.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/148) | github-advanced-security[bot] | 24 |
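One common way to address this class of alert in a Django view, sketched below purely for illustration (the view name and payload handling are assumptions, not the project's actual fix), is to log the exception server-side and return only a generic message to the caller:

```python
import json
import logging

from django.http import JsonResponse

logger = logging.getLogger(__name__)


def update_user(request):
    # Illustrative sketch only: never echo str(e) or stack traces to external users.
    try:
        payload = json.loads(request.body)
        # ... apply the update using `payload` ...
    except Exception:
        # Full details, including the stack trace, stay in the server-side logs.
        logger.exception("User update failed")
        # The client only sees a generic, non-sensitive error message.
        return JsonResponse({'status': False, 'error': 'Could not update user.'}, status=500)
    return JsonResponse({'status': True})
```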
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses, such as those having the same content_length or page_title. Custom duplicate fields can also be set from the scan engine configuration. (A small de-duplication sketch follows this list.)
- URL Path filtering while initiating a scan: for instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom celery workflow to be able to run several tasks in parallel that are not dependent on each other, such as the OSINT task and subdomain discovery running in parallel, while directory and file fuzzing, vulnerability scanning, screenshot gathering, etc. run in parallel after the port scan or URL fetching is completed. This increases the efficiency of scans: instead of one long flow of tasks, they can run independently on their own. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: this means the UI is updated with results while the command runs and does not have to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, as well as vulnerability grouping based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 allows running Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results; now detects admin panels, login pages, and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add, remove, or deactivate additional users
- Added an option to preview the scan report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
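To illustrate the endpoint de-duplication idea mentioned above, here is a minimal sketch (not the actual reNgine implementation; the `Endpoint` record and field names are assumptions made for the example) that keeps one endpoint per combination of duplicate fields such as `content_length` and `page_title`:

```python
from collections import namedtuple

# Hypothetical endpoint record, used only for this illustration.
Endpoint = namedtuple("Endpoint", ["url", "content_length", "page_title"])


def dedupe_endpoints(endpoints, fields=("content_length", "page_title")):
    """Keep the first endpoint seen for each combination of duplicate fields."""
    seen = set()
    unique = []
    for endpoint in endpoints:
        key = tuple(getattr(endpoint, field) for field in fields)
        if key not in seen:
            seen.add(key)
            unique.append(endpoint)
    return unique


endpoints = [
    Endpoint("https://example.com/a", 1024, "Home"),
    Endpoint("https://example.com/b", 1024, "Home"),   # same response shape as /a
    Endpoint("https://example.com/c", 2048, "Login"),
]
print([endpoint.url for endpoint in dedupe_endpoints(endpoints)])
# ['https://example.com/a', 'https://example.com/c']
```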
### Fixes
- GF patterns no longer run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- The Related/Associated Domains list in the Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | web/startScan/templates/startScan/history.html | {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
                                {% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="{% url 'detail_scan' scan_history.id %}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="{% url 'start_scan' scan_history.domain.id %}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
{% if scan.scan_status != -1%}
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{{scan_history.subdomain_discovery}}', '{{scan_history.vulnerability_scan}}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Download Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end">Download Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
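// Scan history DataTable: column 0 renders per-row checkboxes, the hidden id
// column drives the default ordering, and the filter dropdowns are populated
// from the rendered table data and the organizations API.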
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
            text: `Filtering by target ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
        // at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(`/scan/create_report/${id}?download&report_type=${report_type}`, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% load permission_tags %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table dt-responsive w-100">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
                                {% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="/scan/{{current_project.slug}}/detail/{{scan_history.id}}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if user|can:'initiate_scans_subscans' %}
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="/scan/{{current_project.slug}}/start/{{scan_history.domain.id}}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% endif %}
{% if user|can:'modify_scan_results' %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 or scan_history.scan_status == 0 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
<div class="dropdown-divider"></div>
{% endif %}
{% if scan.scan_status != -1%}
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{% if 'subdomain_discovery' in scan_history.scan_type.tasks %}True{% endif %}', '{% if 'vulnerability_scan' in scan_history.scan_type.tasks %}True{% endif %}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Scan Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<div class="form-group mb-4">
<div class="form-check" id="report_info_vuln_div">
<input type="checkbox" class="form-check-input" id="report_ignore_info_vuln" checked="">
<label class="form-check-label" for="report_ignore_info_vuln">Ignore Informational Vulnerabilities</label>
</div>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end m-2">Download Report</a>
<a id='previewReportButton' href="#" class="btn btn-secondary float-end m-2">Preview Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}
],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
            text: `Filtering by target ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
// at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
// select option listener for report_type_select
var report_type = document.getElementById("report_type_select");
report_type.addEventListener("change", function() {
if(report_type.value == "recon")
{
$("#report_info_vuln_div").hide();
}
else{
$("#report_info_vuln_div").show();
}
});
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
$('#previewReportButton').attr('onClick', `preview_report(${id}, '${domain_name}')`);
}
function preview_report(id, domain_name){
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
window.open(url, '_blank').focus();
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}&download`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
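// POST to the create_report endpoint; the response body is the generated PDF,
// which is wrapped in a Blob and auto-downloaded through a temporary object URL.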
return fetch(url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | ## DOM text reinterpreted as HTML
[DOM text](1) is reinterpreted as HTML without escaping meta-characters.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/153) | github-advanced-security[bot] | 25 |
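
The alert points at the template above, where values derived from the table cells and API responses are written straight into `innerHTML` (the filter `<option>` elements and the `#filteringText` chip). A minimal sketch of the usual remediation (illustrative only, not necessarily the fix the maintainers shipped):

```js
// Illustrative only: populate a <select> without letting the value be parsed as HTML.
// `name` is assumed to be an organization name returned by /api/listOrganizations.
var select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.textContent = name; // textContent is never interpreted as markup, unlike innerHTML
select.appendChild(option);
```

The template's existing `htmlEncode()` helper serves the same purpose for the organization filter; the alert covers the remaining `innerHTML` assignments that bypass it.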
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints that have similar HTTP responses, the same content_length, or the same page_title (see the sketch after this list). Custom duplicate fields can also be set from the scan engine configuration.
- URL Path filtering while initiating scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass the /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom Celery workflow to run several tasks in parallel when they do not depend on each other: for example, the OSINT task and subdomain discovery run in parallel, and directory and file fuzzing, vulnerability scanning, screenshot gathering, etc. run in parallel once port scanning or URL fetching completes. This increases scan efficiency: instead of one long chain of tasks, they run independently of each other. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: the UI is updated with results while the command runs and does not have to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 can group subdomains based on similar page titles and HTTP status, and can also group vulnerabilities based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 can run Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results; now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add, remove, or deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
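
As referenced in the URL-gathering item above, a conceptual sketch of de-duplicating endpoints on shared response fields. This is illustrative only; reNgine's actual logic lives in its Python backend, and the field names below are assumed from the description:

```js
// Conceptual sketch: keep one endpoint per signature built from the chosen duplicate fields.
// `endpoints` is assumed to be an array of objects such as {url, content_length, page_title}.
function dedupeEndpoints(endpoints, fields) {
  fields = fields || ['content_length', 'page_title']; // the "custom duplicate fields"
  var seen = new Set();
  return endpoints.filter(function (ep) {
    var key = fields.map(function (f) { return String(ep[f]); }).join('|');
    if (seen.has(key)) return false; // same response signature already kept, drop this one
    seen.add(key);
    return true;
  });
}
```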
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | web/startScan/templates/startScan/history.html | {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
{% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="{% url 'detail_scan' scan_history.id %}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="{% url 'start_scan' scan_history.domain.id %}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
{% if scan.scan_status != -1%}
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{{scan_history.subdomain_discovery}}', '{{scan_history.vulnerability_scan}}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Download Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end">Download Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by target/domain ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
// at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(`/scan/create_report/${id}?download&report_type=${report_type}`, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% load permission_tags %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table dt-responsive w-100">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
{% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="/scan/{{current_project.slug}}/detail/{{scan_history.id}}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if user|can:'initiate_scans_subscans' %}
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="/scan/{{current_project.slug}}/start/{{scan_history.domain.id}}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% endif %}
{% if user|can:'modify_scan_results' %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 or scan_history.scan_status == 0 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
<div class="dropdown-divider"></div>
{% endif %}
{% if scan.scan_status != -1%}
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{% if 'subdomain_discovery' in scan_history.scan_type.tasks %}True{% endif %}', '{% if 'vulnerability_scan' in scan_history.scan_type.tasks %}True{% endif %}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Scan Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<div class="form-group mb-4">
<div class="form-check" id="report_info_vuln_div">
<input type="checkbox" class="form-check-input" id="report_ignore_info_vuln" checked="">
<label class="form-check-label" for="report_ignore_info_vuln">Ignore Informational Vulnerabilities</label>
</div>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end m-2">Download Report</a>
<a id='previewReportButton' href="#" class="btn btn-secondary float-end m-2">Preview Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}
],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by target/domain ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
// at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
// select option listener for report_type_select
var report_type = document.getElementById("report_type_select");
report_type.addEventListener("change", function() {
if(report_type.value == "recon")
{
$("#report_info_vuln_div").hide();
}
else{
$("#report_info_vuln_div").show();
}
});
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
$('#previewReportButton').attr('onClick', `preview_report(${id}, '${domain_name}')`);
}
function preview_report(id, domain_name){
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
window.open(url, '_blank').focus();
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}&download`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | ## DOM text reinterpreted as HTML
[DOM text](1) is reinterpreted as HTML without escaping meta-characters.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/154) | github-advanced-security[bot] | 26 |
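
Same class of issue as the previous alert: user-influenced values are interpolated into `innerHTML` template strings, for example when the `#filteringText` chip is rebuilt on every filter change. A hedged sketch of one way to build that chip from plain text nodes instead (illustrative only, not the maintainers' actual patch):

```js
// Illustrative only: render the active-filter chip without string-building HTML.
// `label` and `value` are assumed to be e.g. 'Organization' and the selected option text.
function showFilterChip(label, value, badgeClass) {
  var holder = document.getElementById('filteringText');
  holder.textContent = ''; // drop any previous chip
  var badge = document.createElement('span');
  badge.className = 'badge ' + badgeClass;
  badge.textContent = label + ': ' + value; // plain text, never parsed as HTML
  holder.appendChild(badge);
}
```

The clear-filter "X" would then be appended as a child element with its own click listener rather than as an inline `onclick` attribute inside the string.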
yogeshojha/rengine | 963 | 2.0-jasper release | ### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints that have similar HTTP responses, the same content_length, or the same page_title. Custom duplicate fields can also be set from the scan engine configuration.
- URL Path filtering while initiating scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass the /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom Celery workflow to run several tasks in parallel when they do not depend on each other: for example, the OSINT task and subdomain discovery run in parallel, and directory and file fuzzing, vulnerability scanning, screenshot gathering, etc. run in parallel once port scanning or URL fetching completes. This increases scan efficiency: instead of one long chain of tasks, they run independently of each other. @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: the UI is updated with results while the command runs and does not have to wait until the task completes. Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 can group subdomains based on similar page titles and HTTP status, and can also group vulnerabilities based on the same vulnerability title and severity.
- Added Support for Nmap: reNgine 2.0 can run Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results; now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add, remove, or deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb | null | 2023-10-02 07:51:35+00:00 | 2023-10-07 10:37:23+00:00 | web/startScan/templates/startScan/history.html | {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
                {% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="{% url 'detail_scan' scan_history.id %}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="{% url 'start_scan' scan_history.domain.id %}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
{% if scan.scan_status != -1%}
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{{scan_history.subdomain_discovery}}', '{{scan_history.vulnerability_scan}}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Download Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end">Download Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
            text: `Filtering by target ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
        // at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(`/scan/create_report/${id}?download&report_type=${report_type}`, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
after_content:

{% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% load permission_tags %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table dt-responsive w-100">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
                {% if scan_history.error_message %}<br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="/scan/{{current_project.slug}}/detail/{{scan_history.id}}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if user|can:'initiate_scans_subscans' %}
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="/scan/{{current_project.slug}}/start/{{scan_history.domain.id}}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% endif %}
{% if user|can:'modify_scan_results' %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 or scan_history.scan_status == 0 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
<div class="dropdown-divider"></div>
{% endif %}
{% if scan.scan_status != -1%}
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{% if 'subdomain_discovery' in scan_history.scan_type.tasks %}True{% endif %}', '{% if 'vulnerability_scan' in scan_history.scan_type.tasks %}True{% endif %}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Scan Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<div class="form-group mb-4">
<div class="form-check" id="report_info_vuln_div">
<input type="checkbox" class="form-check-input" id="report_ignore_info_vuln" checked="">
<label class="form-check-label" for="report_ignore_info_vuln">Ignore Informational Vulnerabilities</label>
</div>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end m-2">Download Report</a>
<a id='previewReportButton' href="#" class="btn btn-secondary float-end m-2">Preview Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}
],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
            text: `Filtering by target ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
        // at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
// select option listener for report_type_select
var report_type = document.getElementById("report_type_select");
report_type.addEventListener("change", function() {
if(report_type.value == "recon")
{
$("#report_info_vuln_div").hide();
}
else{
$("#report_info_vuln_div").show();
}
});
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
$('#previewReportButton').attr('onClick', `preview_report(${id}, '${domain_name}')`);
}
function preview_report(id, domain_name){
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
window.open(url, '_blank').focus();
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}&download`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
pr_author: yogeshojha | previous_commit: 3c60bc1ee495044794d91edee0c96fff73ab46c7 | pr_commit: 5413708d243799a5271440c47c6f98d0c51154ca

comment:

## DOM text reinterpreted as HTML
[DOM text](1) is reinterpreted as HTML without escaping meta-characters.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/155)

comment_author: github-advanced-security[bot] | __index_level_0__: 27
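The alert above points at spots in the template's JavaScript where scan-derived strings (target names, engine names, organization names) are written into the page via `innerHTML`, so any HTML meta-characters in that data get parsed as markup. The template already pushes some values through its client-side `htmlEncode` helper; the escaping transformation itself is the same everywhere, and the minimal sketch below shows it with Python's standard library purely as an illustration (the function name and the chip markup are made up for the example, not part of reNgine):

```python
import html

def render_filter_chip(label: str, value: str) -> str:
    """Build filter-chip markup with the untrusted value escaped."""
    # & < > " ' are turned into entities, so the value can no longer
    # introduce new elements or attributes when placed inside markup.
    safe = html.escape(value, quote=True)
    return f'<span class="badge badge-soft-primary">{label}: {safe}</span>'

if __name__ == "__main__":
    # A hostile "domain name" is rendered as text, not as a <script> element.
    print(render_filter_chip("Target/Domain", "<script>alert(1)</script>"))
```

On the client side the equivalent fix is to route the value through an escaping helper such as `htmlEncode`, or to assign it with `textContent` instead of `innerHTML`.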
repo_name: yogeshojha/rengine | pr_number: 963 | pr_title: 2.0-jasper release

pr_description:

### Added
- Projects: Projects allow you to efficiently organize your web application reconnaissance efforts. With this feature, you can create distinct project spaces, each tailored to a specific purpose, such as personal bug bounty hunting, client engagements, or any other specialized recon task.
- Roles and Permissions: assign distinct roles to your team members: Sys Admin, Penetration Tester, and Auditor—each with precisely defined permissions to tailor their access and actions within the reNgine ecosystem.
- GPT-powered Report Generation: With the power of OpenAI's GPT, reNgine now provides you with detailed vulnerability descriptions, remediation strategies, and impact assessments.
- API Vault: This feature allows you to organize your API keys such as OpenAI or Netlas API keys.
- GPT-powered Attack Surface Generation
- URL gathering is now much more efficient, removing duplicate endpoints based on similar HTTP responses, i.e. those sharing the same content_length or page_title. Custom duplicate fields can also be set from the scan engine configuration; a sketch of the de-duplication step is shown after this list.
- URL Path filtering while initiating scan: For instance, if we want to scan only endpoints starting with https://example.com/start/, we can pass the /start as a path filter while starting the scan. @ocervell
- Expanding Target Concept: reNgine 2.0 now accepts IPs, URLs, etc. as targets. (#678, #658) Excellent work by @ocervell
- A ton of refactoring on reNgine's core to improve scan efficiency. Massive kudos to @ocervell
- Created a custom Celery workflow to run tasks in parallel when they do not depend on each other: for example, the OSINT task and subdomain discovery run in parallel, and directory and file fuzzing, vulnerability scanning, and screenshot gathering run in parallel after the port scan or URL fetching is completed. This increases scan efficiency because, instead of one long flow of tasks, independent tasks run on their own (see the workflow sketch after this list). @ocervell
- Refactored all tasks to run asynchronously @ocervell
- Added a stream_command that allows reading the output of a command live: the UI is updated with results while the command is still running instead of waiting for the task to complete (see the streaming sketch after this list). Excellent work by @ocervell
- Pwndb is now replaced by h8mail. @ocervell
- Group Scan Results: reNgine 2.0 allows grouping of subdomains based on similar page titles and HTTP status, and also vulnerability grouping based on the same vulnerability title and severity (see the grouping sketch after this list).
- Added Support for Nmap: reNgine 2.0 can run Nmap scripts and vulnerability scans on ports found by Naabu. @ocervell
- Added support for Shared Scan Variables in Scan Engine Configuration:
- `enable_http_crawl`: (true/false) You can disable it to be more stealthy or focus on something different than HTTP
- `timeout`: set timeout for all tasks
- `rate_limit`: set rate limit for all tasks
- `retries`: set retries for all tasks
- `custom_header`: set the custom header for all tasks
- Added Dalfox for XSS Vulnerability Scan
- Added CRLFuzz for CRLF Vulnerability Scan
- Added S3Scanner for scanning misconfigured S3 buckets
- Improved OSINT Dork results; now detects admin panels, login pages and dashboards
- Added Custom Dorks
- Improved UI for vulnerability results, clicking on each vulnerability will open up a sidebar with vulnerability details.
- Added HTTP Request and Response in vulnerability Results
- Under Admin Settings, added an option to add, remove, or deactivate additional users
- Added Option to Preview Scan Report instead of forcing a download
- Added Katana for crawling and spidering URLs
- Added Netlas for Whois and subdomain gathering
- Added TLSX for subdomain gathering
- Added CTFR for subdomain gathering
- Added historical IP in whois section
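A minimal sketch of the endpoint de-duplication idea from the URL-gathering item above: each crawled endpoint is keyed on a tuple of configurable response attributes, and only the first endpoint per key is kept. The field names and the plain-dict shape below mirror the changelog wording and are assumptions for illustration, not reNgine's actual models:

```python
from typing import Iterable

# The duplicate fields are configurable; these mirror the defaults named above.
DUPLICATE_FIELDS = ("content_length", "page_title")

def dedupe_endpoints(endpoints: Iterable[dict], fields=DUPLICATE_FIELDS) -> list:
    """Keep the first endpoint seen for every combination of duplicate fields."""
    seen = set()
    unique = []
    for endpoint in endpoints:
        key = tuple(endpoint.get(field) for field in fields)
        if key in seen:
            continue  # an endpoint with the same response shape is already kept
        seen.add(key)
        unique.append(endpoint)
    return unique

if __name__ == "__main__":
    crawled = [
        {"url": "https://example.com/a", "content_length": 512, "page_title": "Login"},
        {"url": "https://example.com/b", "content_length": 512, "page_title": "Login"},
        {"url": "https://example.com/c", "content_length": 1024, "page_title": "Dashboard"},
    ]
    for endpoint in dedupe_endpoints(crawled):
        print(endpoint["url"])  # /a and /c survive, /b is dropped as a duplicate
```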
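The parallel workflow described in the Celery item above maps directly onto Celery's canvas primitives: independent tasks go into a `group`, stages that depend on each other into a `chain`. The sketch below only illustrates that shape; the task bodies, task names, and broker URL are placeholders rather than reNgine's real task signatures:

```python
from celery import Celery, chain, group

app = Celery("recon", broker="redis://localhost:6379/0")  # broker URL is an assumption

# Stub tasks standing in for the real scan steps.
@app.task
def osint(host): return f"osint:{host}"

@app.task
def subdomain_discovery(host): return f"subdomains:{host}"

@app.task
def port_scan(host): return f"ports:{host}"

@app.task
def fetch_urls(host): return f"urls:{host}"

@app.task
def vulnerability_scan(host): return f"vulns:{host}"

@app.task
def screenshot(host): return f"screenshots:{host}"

def build_workflow(host: str):
    # OSINT runs side by side with the subdomains -> ports -> urls chain;
    # once URLs are fetched, vulnerability scanning and screenshots fan out in parallel.
    return group(
        osint.si(host),
        chain(
            subdomain_discovery.si(host),
            port_scan.si(host),
            fetch_urls.si(host),
            group(vulnerability_scan.si(host), screenshot.si(host)),
        ),
    )

if __name__ == "__main__":
    build_workflow("example.com").apply_async()  # requires a running broker and worker
```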
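The stream_command behaviour mentioned above boils down to reading a subprocess pipe line by line while the process is still running, so every line can be pushed to the database or the UI immediately instead of after the tool exits. A rough sketch of that pattern as a generator (an illustration of the idea, not the project's actual implementation):

```python
import shlex
import subprocess
from typing import Iterator

def stream_command(cmd: str) -> Iterator[str]:
    """Yield the command's output line by line while it is still running."""
    process = subprocess.Popen(
        shlex.split(cmd),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave errors with normal output
        text=True,
        bufsize=1,                 # line-buffered so lines arrive promptly
    )
    assert process.stdout is not None
    for line in process.stdout:
        yield line.rstrip("\n")    # the caller can update the UI per line
    process.wait()

if __name__ == "__main__":
    for line in stream_command("ping -c 3 127.0.0.1"):
        print("live:", line)
```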
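Grouping results as described in the Group Scan Results item is essentially a sort-then-group over a couple of shared attributes, for example vulnerability title plus severity. A small illustration with `itertools.groupby`; the dictionaries stand in for whatever records the real models provide:

```python
from itertools import groupby
from operator import itemgetter

vulnerabilities = [
    {"name": "Open Redirect", "severity": "medium", "url": "https://a.example.com/r"},
    {"name": "Open Redirect", "severity": "medium", "url": "https://b.example.com/r"},
    {"name": "XSS", "severity": "high", "url": "https://a.example.com/q"},
]

# groupby only merges adjacent records, so sort on the grouping key first.
key = itemgetter("name", "severity")
for (name, severity), findings in groupby(sorted(vulnerabilities, key=key), key=key):
    urls = [finding["url"] for finding in findings]
    print(f"{name} ({severity}): {len(urls)} finding(s) -> {', '.join(urls)}")
```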
### Fixes
- GF patterns do not run on 404 endpoints (#574 closed)
- Fixes for retrieving whois data (#693 closed)
- Related/Associated Domains in Whois section is now fixed
### Removed
- Removed pwndb and tor related to it.
- Removed tor for pwndb

author: null | date_created: 2023-10-02 07:51:35+00:00 | date_merged: 2023-10-07 10:37:23+00:00 | filepath: web/startScan/templates/startScan/history.html
{% load static %}
{% load humanize %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
{% if scan_history.error_message %}</br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="{% url 'detail_scan' scan_history.id %}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="{% url 'start_scan' scan_history.domain.id %}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
{% if scan.scan_status != -1%}
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{{scan_history.subdomain_discovery}}', '{{scan_history.vulnerability_scan}}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Download Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end">Download Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
// at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(`/scan/create_report/${id}?download&report_type=${report_type}`, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% load permission_tags %}
{% block title %}
Scan history
{% endblock title %}
{% block custom_js_css_link %}
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active" aria-current="page">Scan History</li>
{% endblock breadcrumb_title %}
{% block page_title %}
Quick Scan History
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div id="filteringText" class="mt-2">
</div>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<div class="">
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="">
<label for="filterByTarget" class="form-label">Filter by Targets</label>
<select class="form-control" id="filterByTarget">
</select>
</div>
<div class="">
<label for="filterByScanType" class="form-label">Filter by Scan Type</label>
<select class="form-control" id="filterByScanType">
</select>
</div>
<div class="">
<label for="filterByScanStatus" class="form-label">Filter by Scan Status</label>
<select class="form-control" id="filterByScanStatus">
</select>
</div>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1" href="#" onclick="deleteMultipleScan()" id="delete_multiple_button">Delete Multiple Scans</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="scan_history_form">
{% csrf_token %}
<table id="scan_history_table" class="table dt-responsive w-100">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th class="text-center">Serial Number</th>
<th class="">Domain Name</th>
<th>Summary</th>
<th class="">Scan Engine Used</th>
<th>Last Scan</th>
<th class="text-center">Status</th>
<th class="text-center">Progress</th>
<th class="text-center no-sorting">Action</th>
</tr>
</thead>
<tbody>
{% for scan_history in scan_history.all %}
<tr>
<td class="checkbox-column"> {{ scan_history.id }} </td>
<td class=""> {{ scan_history.id }} </td>
<td class="">
{{ scan_history.domain.name }}
<br>
{% for organization in scan_history.domain.get_organization %}
<span class="badge badge-soft-dark mt-1 me-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
</td>
<td class="text-left">
<span class="badge badge-pills bg-info mt-1" data-toggle="tooltip" data-placement="top" title="Subdomains">{{scan_history.get_subdomain_count}}</span>
<span class="badge badge-pills bg-warning mt-1" data-toggle="tooltip" data-placement="top" title="Endpoints">{{scan_history.get_endpoint_count}}</span>
<span class="badge badge-pills bg-danger mt-1" data-toggle="tooltip" data-placement="top" title="{{scan_history.get_critical_vulnerability_count}} Critical, {{scan_history.get_high_vulnerability_count}} High, {{scan_history.get_medium_vulnerability_count}} Medium Vulnerabilities">{{scan_history.get_vulnerability_count}}</span>
</td>
<td class="">
<span class="badge badge-soft-primary">{{ scan_history.scan_type }}</span>
</td>
<td>
<span data-toggle="tooltip" data-placement="top" title="{{scan_history.start_scan_date}}">{{scan_history.start_scan_date|naturaltime}}</span>
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<span class="badge badge-soft-warning" data-placement="top" data-toggle="tooltip" data-placement="top" title="Waiting for other scans to complete"><span class="spinner-border spinner-border-sm"></span> Pending</span>
{% elif scan_history.scan_status == 0 %}
<span class="badge badge-soft-danger">Failed</span>
{% if scan_history.error_message %}</br><p class="text-danger">Scan Failed due to: {{scan_history.error_message}}</p>{% endif %}
{% elif scan_history.scan_status == 1 %}
<span class="badge badge-soft-info"><span class="spinner-border spinner-border-sm"></span> In Progress</span>
{% elif scan_history.scan_status == 2 %}
<span class="badge badge-soft-success">Successful</span>
{% elif scan_history.scan_status == 3 %}
<span class="badge badge-soft-danger">Aborted</span>
{% else %}
<span class="badge badge-soft-danger">Unknown</span>
{% endif %}
</td>
<td class="text-center">
{% if scan_history.scan_status == -1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-warning" role="progressbar" style="width: 75%" aria-valuenow="75" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 0 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 1 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-primary progress-bar-striped progress-bar-animated" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%"
aria-valuemin="0" aria-valuemax="4"></div>
</div>
{% elif scan_history.scan_status == 2 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100"></div>
</div>
{% elif scan_history.scan_status == 3 %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger progress-bar-striped" role="progressbar" style="width: {% widthratio scan_history.scanactivity_set.all|length scan_history.scan_type.get_number_of_steps|add:4 100 %}%" aria-valuemin="0"
aria-valuemax="4"></div>
</div>
{% else %}
<div class="progress progress-md mt-1">
<div class="progress-bar bg-danger" role="progressbar" style="width: 100%" aria-valuemin="0" aria-valuemax="100">
</div>
</div>
{% endif %}
</td>
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a href="/scan/{{current_project.slug}}/detail/{{scan_history.id}}" class="btn btn-soft-primary">View Results</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
{% if user|can:'initiate_scans_subscans' %}
{% if scan_history.scan_status == 0 or scan_history.scan_status == 2 or scan_history.scan_status == 3 %}
<a class="dropdown-item text-primary" href="/scan/{{current_project.slug}}/start/{{scan_history.domain.id}}">
<i class="fe-refresh-ccw"></i> Rescan </a>
{% endif %}
{% if scan_history.scan_status == 1 or scan_history.scan_status == -1%}
<a href="#" class="dropdown-item text-danger" onclick="stop_scan(scan_id={{ scan_history.id }}, subscan_id=null, reload_scan_bar=false, reload_location=true)">
<i class="fe-alert-triangle"></i> Stop Scan</a>
{% endif %}
{% endif %}
{% if user|can:'modify_scan_results' %}
{% if scan_history.scan_status == 2 or scan_history.scan_status == 3 or scan_history.scan_status == 0 %}
<a href="#" class="dropdown-item text-danger" onclick="delete_scan('{{ scan_history.id }}')">
<i class="fe-trash-2"></i> Delete Scan Results</a>
{% endif %}
<div class="dropdown-divider"></div>
{% endif %}
{% if scan.scan_status != -1%}
<a href="#" class="dropdown-item text-dark" onclick="initiate_report({{scan_history.id}}, '{% if 'subdomain_discovery' in scan_history.scan_type.tasks %}True{% endif %}', '{% if 'vulnerability_scan' in scan_history.scan_type.tasks %}True{% endif %}', '{{ scan_history.domain.name }}')">
<i class="fe-download"></i> Scan Report</a>
{% endif %}
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
<div class="modal fade" id="generateReportModal" tabindex="-1" style="display: none;" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title" id="myCenterModalLabel">Download Report</h4>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="alert alert-light-primary border-0 mb-4" role="alert">
<div id='report_alert_message'></div>
</div>
<div class="form-group mb-4">
<label for="reportTypeForm">Report Type</label>
<select class="form-control" id="report_type_select" name="report_type">
</select>
</div>
<div class="form-group mb-4">
<div class="form-check" id="report_info_vuln_div">
<input type="checkbox" class="form-check-input" id="report_ignore_info_vuln" checked="">
<label class="form-check-label" for="report_ignore_info_vuln">Ignore Informational Vulnerabilities</label>
</div>
</div>
<a id='generateReportButton' href="#" class="btn btn-primary float-end m-2">Download Report</a>
<a id='previewReportButton' href="#" class="btn btn-secondary float-end m-2">Preview Report</a>
</div>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script>
$(document).ready(function() {
var table = $('#scan_history_table').DataTable({
headerCallback: function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected(this)>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}
],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50],
"pageLength": 20,
"initComplete": function(settings, json) {
$('[data-toggle="tooltip"]').tooltip();
table = settings.oInstance.api();
var rows = table.rows({
selected: true
}).indexes();
// populate filter menu from datatables
// populate targets
var selectedData = table.cells(rows, 2).data();
var target_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = selectedData[i];
domain_name = col1_data.match(/([^\n]+)/g)[0];
target_array.push(domain_name);
}
target_array = Array.from(new Set(target_array));
for (target in target_array) {
select = document.getElementById('filterByTarget');
var option = document.createElement('option');
option.value = target_array[target];
option.innerHTML = target_array[target];
select.appendChild(option);
}
// populate Scan Type
var selectedData = table.cells(rows, 4).data();
var scan_type_array = [];
for (var i = 0; i < selectedData.length; i++) {
col1_data = extractContent(selectedData[i]);
scan_type_array.push(col1_data);
}
scan_type_array = Array.from(new Set(scan_type_array));
for (engine in scan_type_array) {
select = document.getElementById('filterByScanType');
var option = document.createElement('option');
option.value = scan_type_array[engine];
option.innerHTML = scan_type_array[engine];
select.appendChild(option);
}
}
});
multiCheck(table);
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function() {});
// filtering for scan status
var status_types = ['Pending', 'Scanning', 'Aborted', 'Successful', 'Failed'];
for (status in status_types) {
select = document.getElementById('filterByScanStatus');
var option = document.createElement('option');
option.value = status_types[status];
option.innerHTML = status_types[status];
select.appendChild(option);
}
var org_filter = document.getElementById('filterByOrganization');
org_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
var status_filter = document.getElementById('filterByScanStatus');
status_filter.addEventListener('click', function() {
table.search(this.value).draw();
switch (this.value) {
case 'Pending':
badge_color = 'warning';
break;
case 'Scanning':
badge_color = 'info';
break;
case 'Aborted':
badge_color = 'danger';
break;
case 'Failed':
badge_color = 'danger';
break;
case 'Successful':
badge_color = 'success';
break;
default:
badge_color = 'primary'
}
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-${badge_color}">Scan Status: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by scan status ${this.value}`,
pos: 'top-center'
});
}, false);
var engine_filter = document.getElementById('filterByScanType');
engine_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Scan Engine: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
var target_filter = document.getElementById('filterByTarget');
target_filter.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-soft-primary">Target/Domain: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by Engine ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
function checkedCount() {
// this function will count the number of boxes checked
item = document.getElementsByClassName("targets_checkbox");
count = 0;
for (var i = 0; i < item.length; i++) {
if (item[i].checked) {
count++;
}
}
return count;
}
function toggleMultipleTargetButton() {
if (checkedCount() > 0) {
$("#delete_multiple_button").removeClass("disabled");
} else {
$("#delete_multiple_button").addClass("disabled");
}
}
function mainCheckBoxSelected(checkbox) {
if (checkbox.checked) {
$("#delete_multiple_button").removeClass("disabled");
$(".targets_checkbox").prop('checked', true);
} else {
$("#delete_multiple_button").addClass("disabled");
$(".targets_checkbox").prop('checked', false);
}
}
function deleteMultipleScan() {
if (!checkedCount()) {
swal({
title: '',
text: "Oops! No targets has been selected!",
type: 'error',
padding: '2em'
})
} else {
// at least one target is selected
swal.queue([{
title: 'Are you sure you want to delete ' + checkedCount() + ' Scans?',
text: "This action is irreversible.\nThis will delete all the scan data and vulnerabilities related to the scan.",
type: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
deleteForm = document.getElementById("scan_history_form");
deleteForm.action = "../delete/multiple";
deleteForm.submit();
}
}])
}
}
// select option listener for report_type_select
var report_type = document.getElementById("report_type_select");
report_type.addEventListener("change", function() {
if(report_type.value == "recon")
{
$("#report_info_vuln_div").hide();
}
else{
$("#report_info_vuln_div").show();
}
});
function initiate_report(id, is_subdomain_scan, is_vulnerability_scan, domain_name) {
$('#generateReportModal').modal('show');
$('#report_alert_message').empty();
$('#report_type_select').empty();
if (is_subdomain_scan == 'True' && is_vulnerability_scan == 'True') {
$('#report_alert_message').append(`
<b>Full Scan</b> will include both Reconnaissance and Vulnerability Report.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'full',
text: 'Full Scan Report'
}));
}
if (is_subdomain_scan == 'True') {
// eligible for reconnaissance report
$('#report_alert_message').append(`
<b>Reconnaissance Report</b> will only include Assets Discovered Section.<br>
`);
$('#report_type_select').append($('<option>', {
value: 'recon',
text: 'Reconnaissance Report'
}));
}
if (is_vulnerability_scan == 'True'){
// eligible for vulnerability report
$('#report_alert_message').append(`
<b>Vulnerability Report</b> will only include details of Vulnerabilities Identified.
`);
$('#report_type_select').append($('<option>', {
value: 'vulnerability',
text: 'Vulnerability Report'
}));
}
$('#generateReportButton').attr('onClick', `generate_report(${id}, '${domain_name}')`);
$('#previewReportButton').attr('onClick', `preview_report(${id}, '${domain_name}')`);
}
function preview_report(id, domain_name){
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
window.open(url, '_blank').focus();
}
function generate_report(id, domain_name) {
var report_type = $("#report_type_select option:selected").val();
var url = `/scan/create_report/${id}?report_type=${report_type}&download`;
if ($('#report_ignore_info_vuln').is(":checked")) {
url += `&ignore_info_vuln`
}
$('#generateReportModal').modal('hide');
swal.queue([{
title: 'Generating Report!',
text: `Please wait until we generate a report for you!`,
padding: '2em',
onOpen: function() {
swal.showLoading()
return fetch(url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
.then(function(response) {
return response.blob();
}).then(function(blob) {
const file = new Blob([blob], {type: 'application/pdf'});
// process to auto download it
const fileURL = URL.createObjectURL(file);
const link = document.createElement('a');
link.href = fileURL;
link.download = domain_name + ".pdf";
link.click();
swal.close();
})
.catch(function() {
swal.insertQueueStep({
type: 'error',
title: 'Oops! Unable to generate report!'
})
})
}
}]);
}
</script>
{% endblock page_level_script %}
| yogeshojha | 3c60bc1ee495044794d91edee0c96fff73ab46c7 | 5413708d243799a5271440c47c6f98d0c51154ca | ## DOM text reinterpreted as HTML
[DOM text](1) is reinterpreted as HTML without escaping meta-characters.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/156) | github-advanced-security[bot] | 28 |
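A minimal sketch of the kind of fix this alert is asking for, not the project's actual change: escape untrusted values before concatenating them into strings assigned to `innerHTML`. The `escapeHtml` helper below mirrors the project's own `htmlEncode` utility, `filteringText` is the element id used by the template above, and the payload string is purely illustrative.

```js
// Escape anything that is not a word character, dot or space as a numeric
// HTML entity, so the value can no longer break out of the surrounding markup.
function escapeHtml(str) {
    return String(str).replace(/[^\w. ]/gi, function(c) {
        return '&#' + c.charCodeAt(0) + ';';
    });
}

// Untrusted value, e.g. an organization name returned by the API.
var orgName = '<img src=x onerror=alert(1)>';

// Escaped before the string is reinterpreted as HTML, so the payload
// renders as plain text inside the badge instead of executing.
document.getElementById('filteringText').innerHTML =
    '<span class="badge badge-soft-primary">Organization: ' + escapeHtml(orgName) + '</span>';
```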
yogeshojha/rengine | 814 | Fixes required to get install script working | Update celery version to 5.2.7 in requirements.txt
Update go version to 1.20 in Dockerfile
These changes are required for install.sh to complete on Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-66-generic x86_64) (digital ocean droplet).
Note: Saw some warning/maybe error related to whatisport (or similar) pypi package. Will investigate at some point. | null | 2023-02-12 05:24:03+00:00 | 2023-03-02 17:25:01+00:00 | web/Dockerfile | # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb \
python3-cffi \
python3-brotli \
libpango-1.0-0 \
libpangoft2-1.0-0 \
geoip-bin \
geoip-database
# Download and install go 1.18
RUN wget https://golang.org/dl/go1.18.2.linux-amd64.tar.gz
RUN tar -xvf go1.18.2.linux-amd64.tar.gz
RUN rm go1.18.2.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go install -v github.com/hakluke/hakrawler@latest
RUN GO111MODULE=on go install -v -v github.com/bp0lr/gauplus@latest
RUN GO111MODULE=on go install -v github.com/jaeles-project/gospider@latest
RUN go install -v github.com/OWASP/Amass/v3/...@latest
RUN go install -v github.com/ffuf/ffuf@latest
RUN go install -v github.com/tomnomnom/assetfinder@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/gf@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/unfurl@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/waybackurls@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
# Update Nuclei and Nuclei-Templates
RUN nuclei -update
RUN nuclei -update-templates
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
# httpx seems to have issue, use alias instead!!!
RUN echo 'alias httpx="/go/bin/httpx"' >> ~/.bashrc
| # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb \
python3-cffi \
python3-brotli \
libpango-1.0-0 \
libpangoft2-1.0-0 \
geoip-bin \
geoip-database
# Download and install go 1.20
RUN wget https://golang.org/dl/go1.20.linux-amd64.tar.gz
RUN tar -xvf go1.20.linux-amd64.tar.gz
RUN rm go1.20.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go install -v github.com/hakluke/hakrawler@latest
RUN GO111MODULE=on go install -v -v github.com/bp0lr/gauplus@latest
RUN GO111MODULE=on go install -v github.com/jaeles-project/gospider@latest
RUN go install -v github.com/OWASP/Amass/v3/...@latest
RUN go install -v github.com/ffuf/ffuf@latest
RUN go install -v github.com/tomnomnom/assetfinder@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/gf@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/unfurl@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/waybackurls@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
# Update Nuclei and Nuclei-Templates
RUN nuclei -update
RUN nuclei -update-templates
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
# httpx seems to have issue, use alias instead!!!
RUN echo 'alias httpx="/go/bin/httpx"' >> ~/.bashrc
| m00tiny | 5e3c04c336d3798fbff20f362a8091dface53203 | a0ce6a270a30e8e3c224b6141555a530b9b6c50e | - RUN wget https://go.dev/dl/go1.20.1.linux-amd64.tar.gz | cybersaki | 29 |
yogeshojha/rengine | 680 | Release/1.3.1 | # Fixes
- Fix for #643: Downloading issue for Subdomains and Endpoints
- Fix for #627: Too many Targets causing issues while loading the DataTable
- Fix version Numbering issue | null | 2022-08-12 12:46:29+00:00 | 2022-08-12 12:57:24+00:00 | web/targetApp/templates/target/list.html | {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% block title %}
List all targets
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active"><a href="{% url 'list_target' %}">Targets</a></li>
<li class="breadcrumb-item"><a href="#">All Targets</a></li>
{% endblock breadcrumb_title %}
{% block page_title %}
Targets
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle mt-1" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1 mt-1" href="#" onclick="deleteMultipleTargets()" id="delete_multiple_button">Delete Multiple Targets</a>
<a class="btn btn-soft-info float-end disabled ms-1 mt-1" href="#" onclick="scanMultipleTargets()" id="scan_multiple_button">Scan Multiple Targets</a>
<a class="btn btn-soft-primary float-end ms-1 mt-1" href="{% url 'add_target' %}" data-toggle="tooltip" data-placement="top" title="Add New Targets">Add Targets</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="multiple_targets_form" action="../../scan/start/multiple/">
{% csrf_token %}
<table id="list_target_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th>Serial Number</th>
<th class="text-center">Domain Name</th>
<th>Description</th>
<th class="text-center">Last Scanned</th>
<th class="text-center">Action</th>
</tr>
</thead>
<tbody>
{% for domain in domains.all %}
<tr>
<td class="checkbox-column"> {{ domain.id }} </td>
<td class=""> {{ domain.id }} </td>
<td>
<b>{{ domain.name }}</b> <a href="#" onclick="get_target_whois('{{domain.name}}')">(view whois)</a>
<br>
<small class="text-muted">Added {{ domain.insert_date|naturaltime }}</small>
{% if domain.get_organization %}
<br>
{% for organization in domain.get_organization %}
<span class="badge badge-soft-primary me-1 mb-1" data-toggle="tooltip" data-placement="top" title="Domain {{domain.name}} belongs to organization {{organization.name}}">{{ organization.name }}</span>
{% endfor %}
{% endif %}
{% if domain.get_recent_scan_id %}
<br>
<a href="{% url 'detail_scan' domain.get_recent_scan_id %}" class="text-info">Recent Scan <svg xmlns="http://www.w3.org/2000/svg" width="15" height="15" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-external-link"><path d="M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"></path><polyline points="15 3 21 3 21 9"></polyline><line x1="10" y1="14" x2="21" y2="3"></line></svg></a>
{% endif %}
</td>
<td>{% if domain.description %}{{domain.description}}{% endif %}</td>
{% if domain.start_scan_date %}
<td class="text-center"><span data-toggle="tooltip" data-placement="top" title="{{domain.start_scan_date}}">{{domain.start_scan_date|naturaltime}}</span></td>
{% else %}
<td class="text-center"><span class="badge badge-soft-warning">Never Scanned Before</span></td>
{% endif %}
<td class="text-center">
<div class="btn-group mb-2 dropstart">
<div class="btn-group">
<a class="btn btn-soft-primary" href="{% url 'target_summary' domain.id %}"><i class="fe-info"></i> Target Summary</a>
<a href="{% url 'start_scan' domain.id %}" class="btn btn-soft-primary"><i class="fe-zap"></i> Initiate Scan</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
<a class="dropdown-item" href="{% url 'schedule_scan' domain.id %}"><i class="fe-clock"></i> Schedule Scan</a>
<div class="dropdown-divider"></div>
<a class="dropdown-item" href="{% url 'update_target' domain.id %}"><i class="fe-edit-2"></i> Edit Target</a>
<a class="dropdown-item text-danger" href="#" onclick="delete_target({{ domain.id }}, '{{ domain.name }}')"><i class="fe-trash-2"></i> Delete target</a>
</div>
</div>
</div>
</div>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</form>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'custom/custom.js' %}"></script>
<script src="{% static 'targetApp/js/custom_domain.js' %}"></script>
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
$(document).ready(function(){
var table = $('#list_target_table').DataTable({
headerCallback:function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"columnDefs":[
{ 'visible': false, 'targets': [1] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
},
}],
"order": [[1, 'desc']],
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50, 100, 500, 1000],
"pageLength": 20,
});
multiCheck(table);
// Handle form submission event
$('#frm-example').on('submit', function(e){
var form = this;
table.$('input[type="checkbox"]').each(function(){
if(!$.contains(document, this)){
if(this.checked){
$(form).append(
$('<input>')
.attr('type', 'hidden')
.attr('name', this.name)
.val(this.value)
);
}
}
});
e.preventDefault();
});
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function(){
});
var a = document.getElementById('filterByOrganization');
a.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-primary m-2">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
$('[data-toggle=tooltip]').tooltip();
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
</script>
{% endblock page_level_script %}
| {% extends 'base/base.html' %}
{% load static %}
{% load humanize %}
{% block title %}
List all targets
{% endblock title %}
{% block custom_js_css_link %}
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/datatables.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/global.css' %}">
<link rel="stylesheet" type="text/css" href="{% static 'plugins/datatable/custom.css' %}">
{% endblock custom_js_css_link %}
{% block breadcrumb_title %}
<li class="breadcrumb-item active"><a href="{% url 'list_target' %}">Targets</a></li>
<li class="breadcrumb-item"><a href="#">All Targets</a></li>
{% endblock breadcrumb_title %}
{% block page_title %}
Targets
{% endblock page_title %}
{% block main_content %}
<div class="row">
<div class="col-12">
<div class="card">
<div class="p-2">
<div class="row">
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<button type="button" class="btn btn-primary dropdown-toggle mt-1" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false" id="filterMenu">
Filter <i class="fe-filter"></i>
</button>
<div class="dropdown-menu" style="width: 30%">
<div class="px-4 py-3">
<h4 class="headline-title">Filters</h4>
<label for="filterByOrganization" class="form-label">Filter by Organization</label>
<select class="form-control" id="filterByOrganization">
</select>
</div>
<div class="dropdown-divider"></div>
<a href="#" class="dropdown-ite text-primary float-end" id="resetFilters">Reset Filters</a>
</div>
</div>
<div class="col-xl-6 col-lg-6 col-md-6 col-sm-12 col-12">
<a class="btn btn-soft-danger float-end disabled ms-1 mt-1" href="#" onclick="deleteMultipleTargets()" id="delete_multiple_button">Delete Multiple Targets</a>
<a class="btn btn-soft-info float-end disabled ms-1 mt-1" href="#" onclick="scanMultipleTargets()" id="scan_multiple_button">Scan Multiple Targets</a>
<a class="btn btn-soft-primary float-end ms-1 mt-1" href="{% url 'add_target' %}" data-toggle="tooltip" data-placement="top" title="Add New Targets">Add Targets</a>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<form method="POST" id="multiple_targets_form" action="../../scan/start/multiple/">
{% csrf_token %}
<table id="list_target_table" class="table style-3 table-hover">
<thead>
<tr>
<th class="checkbox-column text-center">Serial Number</th>
<th>Serial Number</th>
<th>Target</th>
<th>Description</th>
<th class="text-center">Added On</th>
<th class="text-center">Last Scanned</th>
<th class="text-center">Action</th>
</tr>
</thead>
</table>
</form>
</div>
</div>
</div>
{% endblock main_content %}
{% block page_level_script %}
<script src="{% static 'custom/custom.js' %}"></script>
<script src="{% static 'targetApp/js/custom_domain.js' %}"></script>
<script src="{% static 'plugins/datatable/datatables.js' %}"></script>
<script>
$(document).ready(function(){
var table = $('#list_target_table').DataTable({
"headerCallback": function(e, a, t, n, s) {
e.getElementsByTagName("th")[0].innerHTML='<div class="form-check mb-2 form-check-primary"><input type="checkbox" class="float-start form-check-input chk-parent" id="head_checkbox" onclick=mainCheckBoxSelected()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>\n'
},
"destroy": true,
"processing": true,
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center mt-sm-0 mt-3'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" +
"<'table-responsive'tr>" +
"<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"oLanguage": {
"oPaginate": { "sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>', "sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>' },
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"fnCreatedRow": function(nRow, aData, iDataIndex) {
$(nRow).attr('id', 'target_row_' + aData['id']);
},
"stripeClasses": [],
"lengthMenu": [5, 10, 20, 30, 40, 50, 100, 500, 1000],
"pageLength": 20,
'serverSide': true,
"ajax": '/api/listTargets/?format=datatables',
"order": [
[1, "desc"]
],
"columns": [
{
'data': 'id'
},
{
'data': 'id'
},
{
'data': 'name'
},
{
'data': 'description'
},
{
'data': 'id'
},
{
'data': 'start_scan_date'
},
{
'data': 'id'
},
{
'data': 'organization'
},
{
'data': 'most_recent_scan'
},
{
'data': 'insert_date'
},
{
'data': 'insert_date_humanized'
},
{
'data': 'start_scan_date_humanized'
},
],
"columnDefs":[
{ 'orderable': false, 'targets': [0, 3, 6]},
{ 'visible': false, 'targets': [1, 7, 8, 9, 10, 11] },
{
"targets":0, "width":"20px", "className":"", "orderable":!1, render:function(e, a, t, n) {
return'<div class="form-check mb-2 form-check-primary"><input type="checkbox" name="targets_checkbox['+ e + ']" class="float-start form-check-input targets_checkbox" value="' + e + '" onchange=toggleMultipleTargetButton()>\n<span class="new-control-indicator"></span><span style="visibility:hidden">c</span></div>'
}
},
{
"render": function(data, type, row) {
var content = '';
content += `<b>${data}</b> <a href="#" onclick="get_target_whois('${data}')">(view whois)</a>`;
if (row.organization) {
content += '<br>';
for (var org in row.organization) {
content += `<span class="badge badge-soft-primary me-1 mb-1" data-toggle="tooltip" data-placement="top" title="Domain ${data} belongs to organization ${row.organization[org]}">${row.organization[org]}</span>`;
}
}
if (row.most_recent_scan) {
content += `<br><a href="/scan/detail/${row.most_recent_scan}" class="text-primary">Recent Scan <svg xmlns="http://www.w3.org/2000/svg" width="15" height="15" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-external-link"><path d="M18 13v6a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2V8a2 2 0 0 1 2-2h6"></path><polyline points="15 3 21 3 21 9"></polyline><line x1="10" y1="14" x2="21" y2="3"></line></svg></a>`;
}
return content;
},
"targets": 2,
},
{
"render": function(data, type, row) {
var content = '<div class="text-center">';
content += `
<span class="badge badge-soft-primary">${row.insert_date}, ${row.insert_date_humanized}</span>
`;
content += '</div>';
return content;
},
"targets": 4,
},
{
"render": function(data, type, row) {
var content = '<div class="text-center">';
if (data) {
content += `<span class="badge badge-soft-primary">${row.start_scan_date_humanized}</span>`;
}
else{
content += '<span class="badge badge-soft-warning">Never Scanned Before</span>';
}
content += '</div>';
return content;
},
"targets": 5,
},
{
"render": function(data, type, row) {
var content = '';
content += `
<div class="btn-group float-end">
<a class="btn btn-soft-primary" href="/target/summary/${row.id}"><i class="fe-info"></i> Target Summary</a>
<a href="/scan/start/${row.id}" class="btn btn-soft-primary"><i class="fe-zap"></i> Initiate Scan</a>
<div class="btn-group dropstart" role="group">
<button type="button" class="btn btn-soft-primary dropdown-toggle dropdown-toggle-split" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<i class="mdi mdi-chevron-right"></i>
</button>
<div class="dropdown-menu" style="">
<a class="dropdown-item" href="/scan/schedule/target/${row.id}"><i class="fe-clock"></i> Schedule Scan</a>
<div class="dropdown-divider"></div>
<a class="dropdown-item" href="/target/update/target/${row.id}"><i class="fe-edit-2"></i> Edit Target</a>
<a class="dropdown-item text-danger" href="#" onclick="delete_target(${row.id}, '${row.name}')"><i class="fe-trash-2"></i> Delete target</a>
</div>
</div>
</div>
`;
return content;
},
"targets": 6,
},
],
drawCallback: function() {
$('.badge').tooltip({
template: '<div class="tooltip status" role="tooltip"><div class="arrow"></div><div class="tooltip-inner"></div></div>'
})
$('.bs-tooltip').tooltip();
},
});
multiCheck(table);
// Handle form submission event
$('#frm-example').on('submit', function(e){
var form = this;
table.$('input[type="checkbox"]').each(function(){
if(!$.contains(document, this)){
if(this.checked){
$(form).append(
$('<input>')
.attr('type', 'hidden')
.attr('name', this.name)
.val(this.value)
);
}
}
});
e.preventDefault();
});
// filter organization populate
$.getJSON(`/api/listOrganizations?&format=json`, function(data) {
data = data['organizations']
for (organization in data) {
name = htmlEncode(data[organization]['name']);
select = document.getElementById('filterByOrganization');
var option = document.createElement('option');
option.value = name;
option.innerHTML = name;
select.appendChild(option);
}
}).fail(function(){
});
var a = document.getElementById('filterByOrganization');
a.addEventListener('click', function() {
table.search(this.value).draw();
document.getElementById('filteringText').innerHTML = `<span class="badge badge-primary m-2">Organization: ${this.value}
<span id="clearFilterChip" class="badge-link" onclick="document.getElementById('resetFilters').click()">X</span>
</span>`;
Snackbar.show({
text: `Filtering by organization ${this.value}`,
pos: 'top-center'
});
}, false);
// reset filtering
var reset_filter = document.getElementById('resetFilters');
reset_filter.addEventListener('click', function() {
resetFilters(table);
}, false);
$('[data-toggle=tooltip]').tooltip();
});
function resetFilters(table_obj) {
table_obj.search("").draw();
Snackbar.show({
text: `Filters Reset`,
pos: 'top-center'
});
document.getElementById('filteringText').innerHTML = '';
}
</script>
{% endblock page_level_script %}
| yogeshojha | 758debc4e79b5dc3f1ee29fcabcacb8e15656a94 | 0caa3a6f04a26f9f3554e1617c7f369a9b10330e | ## DOM text reinterpreted as HTML
[DOM text](1) is reinterpreted as HTML without escaping meta-characters.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/135) | github-advanced-security[bot] | 30 |
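Another hedged sketch of a possible mitigation for the same class of warning, not the change actually made in this PR: when a DataTables `render` callback only needs to display row values (here `row.name` and `row.organization`), building the cell with DOM APIs and `textContent` avoids HTML string concatenation entirely, so no escaping step can be forgotten. The class names are copied from the template above; the function name is hypothetical.

```js
// Build the "Target" cell from untrusted row data without ever parsing it as HTML.
function renderTargetCell(name, organizations) {
    var cell = document.createElement('div');

    var title = document.createElement('b');
    title.textContent = name;              // textContent never interprets markup
    cell.appendChild(title);

    (organizations || []).forEach(function(org) {
        var badge = document.createElement('span');
        badge.className = 'badge badge-soft-primary me-1 mb-1';
        badge.textContent = org;           // rendered as plain text
        cell.appendChild(badge);
    });

    // The browser serializes textContent with entities escaped, so the
    // returned markup is safe to hand back to the DataTables render callback.
    return cell.outerHTML;
}
```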
yogeshojha/rengine | 664 | Release/1.3.0 | ## 1.3.0
**Release Date: July 11, 2022**
## Added
- Geographic Distribution of Assets Map
## Fixes
- Changed WHOIS Provider
- Fixed Dark UI Issues
- Fix HTTPX Issue | null | 2022-07-10 17:49:40+00:00 | 2022-07-18 19:30:04+00:00 | web/static/custom/custom.js | function checkall(clickchk, relChkbox) {
var checker = $('#' + clickchk);
var multichk = $('.' + relChkbox);
checker.click(function() {
multichk.prop('checked', $(this).prop('checked'));
});
}
function multiCheck(tb_var) {
tb_var.on("change", ".chk-parent", function() {
var e = $(this).closest("table").find("td:first-child .child-chk"),
a = $(this).is(":checked");
$(e).each(function() {
a ? ($(this).prop("checked", !0), $(this).closest("tr").addClass("active")) : ($(this).prop("checked", !1), $(this).closest("tr").removeClass("active"))
})
}),
tb_var.on("change", "tbody tr .new-control", function() {
$(this).parents("tr").toggleClass("active")
})
}
function GetIEVersion() {
var sAgent = window.navigator.userAgent;
var Idx = sAgent.indexOf("MSIE");
// If IE, return version number.
if (Idx > 0) return parseInt(sAgent.substring(Idx + 5, sAgent.indexOf(".", Idx)));
// If IE 11 then look for Updated user agent string.
else if (!!navigator.userAgent.match(/Trident\/7\./)) return 11;
else return 0; //It is not IE
}
function truncate(str, n) {
return (str.length > n) ? str.substr(0, n - 1) + '…' : str;
};
function return_str_if_not_null(val) {
return val ? val : '';
}
// separate hostname and url
// Referenced from https://stackoverflow.com/questions/736513/how-do-i-parse-a-url-into-hostname-and-path-in-javascript
function getParsedURL(url) {
var parser = new URL(url);
return parser.pathname + parser.search;
};
function getCookie(name) {
var cookieValue = null;
if (document.cookie && document.cookie !== '') {
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = jQuery.trim(cookies[i]);
// Does this cookie string begin with the name we want?
if (cookie.substring(0, name.length + 1) === (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
// Source: https://portswigger.net/web-security/cross-site-scripting/preventing#encode-data-on-output
function htmlEncode(str) {
return String(str).replace(/[^\w. ]/gi, function(c) {
return '&#' + c.charCodeAt(0) + ';';
});
}
// Source: https://portswigger.net/web-security/cross-site-scripting/preventing#encode-data-on-output
function jsEscape(str) {
return String(str).replace(/[^\w. ]/gi, function(c) {
return '\\u' + ('0000' + c.charCodeAt(0).toString(16)).slice(-4);
});
}
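// deleteScheduledScan asks for confirmation, then POSTs to ../delete/scheduled_task/<id> and reloads the page on success.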
function deleteScheduledScan(id) {
const delAPI = "../delete/scheduled_task/" + id;
swal.queue([{
title: 'Are you sure you want to delete this?',
text: "This action can not be undone.",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the scheduled task!'
})
})
}
}])
}
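// change_scheduled_task_status shows a snackbar with the new state and POSTs to ../toggle/scheduled_task/<id> to start/stop the schedule.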
function change_scheduled_task_status(id, checkbox) {
if (checkbox.checked) {
text_msg = 'Schedule Scan Started';
} else {
text_msg = 'Schedule Scan Stopped';
}
Snackbar.show({
text: text_msg,
pos: 'top-right',
duration: 2500
});
const taskStatusApi = "../toggle/scheduled_task/" + id;
return fetch(taskStatusApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
}
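// change_vuln_status POSTs to ../toggle/vuln_status/<id> to toggle a vulnerability's status.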
function change_vuln_status(id) {
const vulnStatusApi = "../toggle/vuln_status/" + id;
return fetch(vulnStatusApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
}
// splits really long strings into multiple lines
// Source: https://stackoverflow.com/a/52395960
function split_into_lines(str, maxWidth) {
const newLineStr = "</br>";
done = false;
res = '';
do {
found = false;
// Inserts new line at first whitespace of the line
for (i = maxWidth - 1; i >= 0; i--) {
if (test_white_space(str.charAt(i))) {
res = res + [str.slice(0, i), newLineStr].join('');
str = str.slice(i + 1);
found = true;
break;
}
}
// Inserts new line at maxWidth position, the word is too long to wrap
if (!found) {
res += [str.slice(0, maxWidth), newLineStr].join('');
str = str.slice(maxWidth);
}
if (str.length < maxWidth) done = true;
} while (!done);
return res + str;
}
function test_white_space(x) {
const white = new RegExp(/^\s$/);
return white.test(x.charAt(0));
};
// parse_comma_values_into_span separates the values by comma and puts a badge around each
function parse_comma_values_into_span(data, color, outline = null) {
if (data) {
var badge = `<span class='badge badge-soft-` + color + ` m-1'>`;
var data_with_span = "";
data.split(/\s*,\s*/).forEach(function(split_vals) {
data_with_span += badge + split_vals + "</span>";
});
return data_with_span;
}
return '';
}
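// get_severity_badge maps a severity string (Info/Low/Medium/High/Critical/Unknown) to the matching badge markup.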
function get_severity_badge(severity) {
switch (severity) {
case 'Info':
return "<span class='badge badge-soft-primary'> INFO </span>";
break;
case 'Low':
return "<span class='badge badge-low'> LOW </span>";
break;
case 'Medium':
return "<span class='badge badge-soft-warning'> MEDIUM </span>";
break;
case 'High':
return "<span class='badge badge-soft-danger'> HIGH </span>";
break;
case 'Critical':
return "<span class='badge badge-critical'> CRITICAL </span>";
break;
case 'Unknown':
return "<span class='badge badge-soft-info'> UNKNOWN </span>";
default:
return "";
}
}
// Source: https://stackoverflow.com/a/54733055
function typingEffect(words, id, i) {
let word = words[i].split("");
var loopTyping = function() {
if (word.length > 0) {
let elem = document.getElementById(id);
elem.setAttribute('placeholder', elem.getAttribute('placeholder') + word.shift());
} else {
deletingEffect(words, id, i);
return false;
};
timer = setTimeout(loopTyping, 150);
};
loopTyping();
};
function deletingEffect(words, id, i) {
let word = words[i].split("");
var loopDeleting = function() {
if (word.length > 0) {
word.pop();
document.getElementById(id).setAttribute('placeholder', word.join(""));
} else {
if (words.length > (i + 1)) {
i++;
} else {
i = 0;
};
typingEffect(words, id, i);
return false;
};
timer = setTimeout(loopDeleting, 90);
};
loopDeleting();
};
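// fullScreenDiv toggles fullscreen mode for the element matching id (exits if already fullscreen) and keeps it scrollable.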
function fullScreenDiv(id, btn) {
let fullscreen = document.querySelector(id);
let button = document.querySelector(btn);
document.fullscreenElement && document.exitFullscreen() || document.querySelector(id).requestFullscreen()
fullscreen.setAttribute("style", "overflow:auto");
}
function get_randid() {
return '_' + Math.random().toString(36).substr(2, 9);
}
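// delete_all_scan_results asks for confirmation, then POSTs to ../scan/delete/scan_results/ and reloads the page.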
function delete_all_scan_results() {
const delAPI = "../scan/delete/scan_results/";
swal.queue([{
title: 'Are you sure you want to delete all scan results?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete scan results!'
})
})
}
}])
}
function delete_all_screenshots() {
const delAPI = "../scan/delete/screenshots/";
swal.queue([{
title: 'Are you sure you want to delete all Screenshots?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete screenshots!'
})
})
}
}])
}
function load_image_from_url(src, append_to_id) {
img = document.createElement('img');
img.src = src;
img.style.width = '100%';
document.getElementById(append_to_id).appendChild(img);
}
function setTooltip(btn, message) {
hide_all_tooltips();
const instance = tippy(document.querySelector(btn));
instance.setContent(message);
instance.show();
setTimeout(function() {
instance.hide();
}, 500);
}
function hide_all_tooltips() {
$(".tooltip").tooltip("hide");
}
function get_response_time_text(response_time) {
if (response_time) {
var text_color = 'danger';
if (response_time < 0.5) {
text_color = 'success'
} else if (response_time >= 0.5 && response_time < 1) {
text_color = 'warning'
}
return `<span class="text-${text_color}">${response_time.toFixed(4)}s</span>`;
}
return '';
}
function parse_technology(data, color, scan_id = null, domain_id=null) {
var badge = `<span data-toggle="tooltip" title="Technology" class='badge-link badge badge-soft-` + color + ` mt-1 me-1'`;
var data_with_span = "";
for (var key in data) {
if (scan_id) {
data_with_span += badge + ` onclick="get_tech_details('${data[key]['name']}', ${scan_id}, domain_id=null)">` + data[key]['name'] + "</span>";
} else if (domain_id) {
data_with_span += badge + ` onclick="get_tech_details('${data[key]['name']}', scan_id=null, domain_id=${domain_id})">` + data[key]['name'] + "</span>";
}
}
return data_with_span;
}
// parse_ip separates comma-separated IP addresses and puts a badge around each (CDN IPs get a warning badge)
function parse_ip(data, cdn) {
if (cdn) {
var badge = `<span class='badge badge-soft-warning m-1 bs-tooltip' title="CDN IP Address">`;
} else {
var badge = `<span class='badge badge-soft-primary m-1'>`;
}
var data_with_span = "";
data.split(/\s*,\s*/).forEach(function(split_vals) {
data_with_span += badge + split_vals + "</span>";
});
return data_with_span;
}
//to remove the image element if there is no screenshot captured
function removeImageElement(element) {
element.parentElement.remove();
}
// https://stackoverflow.com/a/18197341/9338140
function download(filename, text) {
var element = document.createElement('a');
element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
element.setAttribute('download', filename);
element.style.display = 'none';
document.body.appendChild(element);
element.click();
document.body.removeChild(element);
}
function vuln_status_change(checkbox, id) {
if (checkbox.checked) {
checkbox.parentNode.parentNode.parentNode.className = "table-success text-strike";
} else {
checkbox.parentNode.parentNode.parentNode.classList.remove("table-success");
checkbox.parentNode.parentNode.parentNode.classList.remove("text-strike");
}
change_vuln_status(id);
}
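// report_hackerone builds a severity-aware confirmation message, calls the vulnerability report API and maps the returned status code to a success/error dialog.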
function report_hackerone(vulnerability_id, severity) {
message = ""
if (severity == 'Info' || severity == 'Low' || severity == 'Medium') {
message = "We do not recommended sending this vulnerability report to hackerone due to the severity, do you still want to report this?"
} else {
message = "This vulnerability report will be sent to Hackerone.";
}
const vulnerability_report_api = "../../api/vulnerability/report/?vulnerability_id=" + vulnerability_id;
swal.queue([{
title: 'Reporting vulnerability to hackerone',
text: message,
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Report',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(vulnerability_report_api, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data.status)
if (data.status == 111) {
swal.insertQueueStep({
icon: 'error',
title: 'Target does not have a team_handle to send the report to.'
})
} else if (data.status == 201) {
swal.insertQueueStep({
icon: 'success',
title: 'Vulnerability report successfully submitted to hackerone.'
})
} else if (data.status == 400) {
swal.insertQueueStep({
icon: 'error',
title: 'Invalid Report.'
})
} else if (data.status == 401) {
swal.insertQueueStep({
icon: 'error',
title: 'Hackerone authentication failed.'
})
} else if (data.status == 403) {
swal.insertQueueStep({
icon: 'error',
title: 'API Key forbidden by Hackerone.'
})
} else if (data.status == 423) {
swal.insertQueueStep({
icon: 'error',
title: 'Too many requests.'
})
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to send vulnerability report to Hackerone, check your target team_handle or Hackerone configurations!'
})
})
}
}])
}
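// get_interesting_subdomains renders the "interesting subdomains" datatable, either for a whole target or for a single scan, and hides the section when nothing is found.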
function get_interesting_subdomains(target_id, scan_history_id) {
if (target_id) {
url = `/api/listInterestingSubdomains/?target_id=${target_id}&format=datatables`;
non_orderable_targets = [0, 1, 2, 3];
} else if (scan_history_id) {
url = `/api/listInterestingSubdomains/?scan_id=${scan_history_id}&format=datatables`;
non_orderable_targets = [];
}
var interesting_subdomain_table = $('#interesting_subdomains').DataTable({
"drawCallback": function(settings, start, end, max, total, pre) {
// if no interesting subdomains are found, hide the datatable and show no interesting subdomains found badge
if (this.fnSettings().fnRecordsTotal() == 0) {
$('#interesting_subdomain_div').empty();
// $('#interesting_subdomain_div').append(`<div class="card-header bg-primary py-3 text-white">
// <div class="card-widgets">
// <a href="#" data-toggle="remove"><i class="mdi mdi-close"></i></a>
// </div>
// <h5 class="card-title mb-0 text-white"><i class="mdi mdi-fire-alert me-2"></i>Interesting subdomains could not be identified</h5>
// </div>
// <div id="cardCollpase4" class="collapse show">
// <div class="card-body">
// reNgine could not identify any interesting subdomains. You can customize interesting subdomain keywords <a href="/scanEngine/interesting/lookup/">from here</a> and this section would be automatically updated.
// </div>
// </div>`);
} else {
// show nav bar
$('.interesting-tab-show').removeAttr('style');
$('#interesting_subdomain_alert_count').html(`${this.fnSettings().fnRecordsTotal()} Interesting Subdomains`)
$('#interesting_subdomain_count_badge').empty();
$('#interesting_subdomain_count_badge').html(`<span class="badge badge-soft-primary me-1">${this.fnSettings().fnRecordsTotal()}</span>`);
}
},
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"processing": true,
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"destroy": true,
"bInfo": false,
"stripeClasses": [],
'serverSide': true,
"ajax": url,
"order": [
[3, "desc"]
],
"lengthMenu": [5, 10, 20, 50, 100],
"pageLength": 10,
"columns": [{
'data': 'name'
}, {
'data': 'page_title'
}, {
'data': 'http_status'
}, {
'data': 'content_length'
}, {
'data': 'http_url'
}, {
'data': 'technologies'
}, ],
"columnDefs": [{
"orderable": false,
"targets": non_orderable_targets
}, {
"targets": [4],
"visible": false,
"searchable": false,
}, {
"targets": [5],
"visible": false,
"searchable": true,
}, {
"className": "text-center",
"targets": [2]
}, {
"render": function(data, type, row) {
tech_badge = '';
if (row['technologies']) {
// tech_badge = `</br>` + parse_technology(row['technologies'], "primary", outline=true, scan_id=null);
}
if (row['http_url']) {
return `<a href="` + row['http_url'] + `" class="text-primary" target="_blank">` + data + `</a>` + tech_badge;
}
return `<a href="https://` + data + `" class="text-primary" target="_blank">` + data + `</a>` + tech_badge;
},
"targets": 0
}, {
"render": function(data, type, row) {
// display badge based on http status
// green for http status 2XX, orange for 3XX and warning for everything else
if (data >= 200 && data < 300) {
return "<span class='badge badge-pills badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-pills badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return `<span class='badge badge-pills badge-soft-danger'>` + data + `</span>`;
},
"targets": 2,
}, ],
});
}
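// get_interesting_endpoint renders the "interesting endpoints" datatable, either for a whole target or for a single scan.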
function get_interesting_endpoint(target_id, scan_history_id) {
var non_orderable_targets = [];
if (target_id) {
url = `/api/listInterestingEndpoints/?target_id=${target_id}&format=datatables`;
// non_orderable_targets = [0, 1, 2, 3];
} else if (scan_history_id) {
url = `/api/listInterestingEndpoints/?scan_id=${scan_history_id}&format=datatables`;
// non_orderable_targets = [0, 1, 2, 3];
}
$('#interesting_endpoints').DataTable({
"drawCallback": function(settings, start, end, max, total, pre) {
if (this.fnSettings().fnRecordsTotal() == 0) {
$('#interesting_endpoint_div').remove();
} else {
$('.interesting-tab-show').removeAttr('style');
$('#interesting_endpoint_alert_count').html(`, ${this.fnSettings().fnRecordsTotal()} Interesting Endpoints`)
$('#interesting_endpoint_count_badge').empty();
$('#interesting_endpoint_count_badge').html(`<span class="badge badge-soft-primary me-1">${this.fnSettings().fnRecordsTotal()}</span>`);
}
},
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"processing": true,
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
'serverSide': true,
"destroy": true,
"bInfo": false,
"ajax": url,
"order": [
[3, "desc"]
],
"lengthMenu": [5, 10, 20, 50, 100],
"pageLength": 10,
"columns": [{
'data': 'http_url'
}, {
'data': 'page_title'
}, {
'data': 'http_status'
}, {
'data': 'content_length'
}, ],
"columnDefs": [{
"orderable": false,
"targets": non_orderable_targets
}, {
"className": "text-center",
"targets": [2]
}, {
"render": function(data, type, row) {
var url = split_into_lines(data, 70);
return "<a href='" + data + "' target='_blank' class='text-primary'>" + url + "</a>";
},
"targets": 0
}, {
"render": function(data, type, row) {
// display badge based on http status
// green for http status 2XX, orange for 3XX and warning for everything else
if (data >= 200 && data < 300) {
return "<span class='badge badge-pills badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-pills badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return `<span class='badge badge-pills badge-soft-danger'>` + data + `</span>`;
},
"targets": 2,
}, ],
});
}
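// get_important_subdomains fetches subdomains flagged as important and renders them, with copy-to-clipboard links and a count badge.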
function get_important_subdomains(target_id, scan_history_id) {
var url = `/api/querySubdomains/?only_important&no_lookup_interesting&format=json`;
if (target_id) {
url += `&target_id=${target_id}`;
} else if (scan_history_id) {
url += `&scan_id=${scan_history_id}`;
}
$.getJSON(url, function(data) {
$('#important-count').empty();
$('#important-subdomains-list').empty();
if (data['subdomains'].length > 0) {
$('#important-count').html(`<span class="badge badge-soft-primary ms-1 me-1">${data['subdomains'].length}</span>`);
for (var val in data['subdomains']) {
subdomain = data['subdomains'][val];
div_id = 'important_' + subdomain['id'];
$("#important-subdomains-list").append(`
<div id="${div_id}">
<p>
<span id="subdomain_${subdomain['id']}"> ${subdomain['name']}
<span class="">
<a href="javascript:;" data-clipboard-action="copy" class="m-1 float-end badge-link text-info copyable text-primary" data-toggle="tooltip" data-placement="top" title="Copy Subdomain!" data-clipboard-target="#subdomain_${subdomain['id']}">
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-copy"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg></span>
</a>
</span>
</p>
</div>
<hr />
`);
}
} else {
$('#important-count').html(`<span class="badge badge-soft-primary ms-1 me-1">0</span>`);
$('#important-subdomains-list').append(`<p>No subdomains marked as important!</p>`);
}
$('.bs-tooltip').tooltip();
});
}
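// mark_important_subdomain toggles the "important" flag for a subdomain via /api/toggle/subdomain/important/ and updates the row highlight and tooltip.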
function mark_important_subdomain(row, subdomain_id) {
if (row) {
parentNode = row.parentNode.parentNode.parentNode.parentNode;
if (parentNode.classList.contains('table-danger')) {
parentNode.classList.remove('table-danger');
} else {
parentNode.className = "table-danger";
}
}
var data = {'subdomain_id': subdomain_id}
const subdomainImpApi = "/api/toggle/subdomain/important/";
if ($("#important_subdomain_" + subdomain_id).length == 0) {
$("#subdomain-" + subdomain_id).prepend(`<span id="important_subdomain_${subdomain_id}"></span>`);
setTooltip("#subdomain-" + subdomain_id, 'Marked Important!');
} else {
$("#important_subdomain_" + subdomain_id).remove();
setTooltip("#subdomain-" + subdomain_id, 'Marked Un-Important!');
}
return fetch(subdomainImpApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
});
}
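// delete_scan asks for confirmation, then POSTs to ../delete/scan/<id> and reloads the page on success.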
function delete_scan(id) {
const delAPI = "../delete/scan/" + id;
swal.queue([{
title: 'Are you sure you want to delete this scan history?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the scan history!'
})
})
}
}]);
}
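// stop_scan aborts a running scan or subscan via /api/action/stop/scan/ and optionally refreshes the scan sidebar or reloads the page.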
function stop_scan(scan_id=null, subscan_id=null, reload_scan_bar=true, reload_location=false) {
const stopAPI = "/api/action/stop/scan/";
if (scan_id) {
var data = {'scan_id': scan_id}
}
else if (subscan_id) {
var data = {'subscan_id': subscan_id}
}
swal.queue([{
title: 'Are you sure you want to stop this scan?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Stop',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(stopAPI, {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
if (data.status) {
Snackbar.show({
text: 'Scan Successfully Aborted.',
pos: 'top-right',
duration: 1500
});
if (reload_scan_bar) {
getScanStatusSidebar();
}
if (reload_location) {
window.location.reload();
}
} else {
Snackbar.show({
text: 'Oops! Could not abort the scan. ' + data.message,
pos: 'top-right',
duration: 1500
});
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to stop the scan'
})
})
}
}])
}
function extractContent(s) {
var span = document.createElement('span');
span.innerHTML = s;
return span.textContent || span.innerText;
};
function delete_datatable_rows(table_id, rows_id, show_snackbar = true, snackbar_title) {
// this function will delete the datatables rows after actions such as delete
// table_id => datatable_id with #
// rows_id: list/array => list of all numerical ids to delete; to maintain consistency,
// rows id will always follow this pattern: datatable_id_row_n
// show_snackbar = bool => whether to show snackbar or not!
// snackbar_title: str => snackbar title if show_snackbar = True
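// Example call (values are illustrative): delete_datatable_rows('#subscan_history_table', [12, 14], true, '2 Subscans Deleted!');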
var table = $(table_id).DataTable();
for (var row in rows_id) {
table.row(table_id + '_row_' + rows_id[row]).remove().draw();
}
if (show_snackbar) {
Snackbar.show({
text: snackbar_title,
pos: 'top-right',
duration: 1500,
actionTextColor: '#fff',
backgroundColor: '#e7515a',
});
}
}
function delete_subscan(subscan_id) {
// This function will delete the subscans using the REST API
// Supported method: POST
const delAPI = "/api/action/rows/delete/";
var data = {
'type': 'subscan',
'rows': [subscan_id]
}
swal.queue([{
title: 'Are you sure you want to delete this subscan?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": "application/json"
},
body: JSON.stringify(data)
}).then(function(response) {
return response.json();
}).then(function(response) {
if (response['status']) {
delete_datatable_rows('#subscan_history_table', [subscan_id], show_snackbar = true, '1 Subscan Deleted!')
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the scan history!'
})
})
}
}])
}
function show_subscan_results(subscan_id) {
// This function will popup a modal and show the subscan results
// modal being used is from base
var api_url = '/api/fetch/results/subscan/?format=json';
var data = {
'subscan_id': subscan_id
};
Swal.fire({
title: 'Fetching Results...'
});
swal.showLoading();
fetch(api_url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
}).then(response => response.json()).then(function(response) {
console.log(response);
swal.close();
if (response['subscan']['status'] == -1) {
swal.fire("Error!", "Scan has not yet started! Please wait for other scans to complete...", "warning", {
button: "Okay",
});
return;
} else if (response['subscan']['status'] == 1) {
swal.fire("Error!", "Scan is in progress! Please come back later...", "warning", {
button: "Okay",
});
return;
}
$('#xl-modal-title').empty();
$('#xl-modal-content').empty();
$('#xl-modal-footer').empty();
var task_name = '';
if (response['subscan']['task'] == 'port_scan') {
task_name = 'Port Scan';
} else if (response['subscan']['task'] == 'vulnerability_scan') {
task_name = 'Vulnerability Scan';
} else if (response['subscan']['task'] == 'fetch_url') {
task_name = 'EndPoint Gathering';
} else if (response['subscan']['task'] == 'dir_file_fuzz') {
task_name = 'Directory and Files Fuzzing';
}
$('#xl-modal_title').html(`${task_name} Results on ${response['subscan']['subdomain_name']}`);
var scan_status = '';
var badge_color = 'danger';
if (response['subscan']['status'] == 0) {
scan_status = 'Failed';
} else if (response['subscan']['status'] == 2) {
scan_status = 'Successful';
badge_color = 'success';
} else if (response['subscan']['status'] == 3) {
scan_status = 'Aborted';
} else {
scan_status = 'Unknown';
}
$('#xl-modal-content').append(`<div>Scan Status: <span class="badge bg-${badge_color}">${scan_status}</span></div>`);
console.log(response);
$('#xl-modal-content').append(`<div class="mt-1">Engine Used: <span class="badge bg-primary">${htmlEncode(response['subscan']['engine'])}</span></div>`);
if (response['result'].length > 0) {
if (response['subscan']['task'] == 'port_scan') {
$('#xl-modal-content').append(`<div id="port_results_li"></div>`);
for (var ip in response['result']) {
var ip_addr = response['result'][ip]['address'];
var id_name = `ip_${ip_addr}`;
$('#port_results_li').append(`<h5>IP Address: ${ip_addr}</br></br>${response['result'][ip]['ports'].length} Ports Open</h5>`);
$('#port_results_li').append(`<ul id="${id_name}"></ul>`);
for (var port_obj in response['result'][ip]['ports']) {
var port = response['result'][ip]['ports'][port_obj];
var port_color = 'primary';
if (port["is_uncommon"]) {
port_color = 'danger';
}
$('#port_results_li ul').append(`<li><span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['number']}</span>/<span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['service_name']}</span>/<span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['description']}</span></li>`);
}
}
$('#xl-modal-footer').append(`<span class="text-danger">* Uncommon Ports</span>`);
} else if (response['subscan']['task'] == 'vulnerability_scan') {
render_vulnerability_in_xl_modal(vuln_count = response['result'].length, subdomain_name = response['subscan']['subdomain_name'], result = response['result']);
} else if (response['subscan']['task'] == 'fetch_url') {
render_endpoint_in_xlmodal(endpoint_count = response['result'].length, subdomain_name = response['subscan']['subdomain_name'], result = response['result']);
} else if (response['subscan']['task'] == 'dir_file_fuzz') {
if (response['result'][0]['directory_files'].length == 0) {
$('#xl-modal-content').append(`
<div class="alert alert-info mt-2" role="alert">
<i class="mdi mdi-alert-circle-outline me-2"></i> ${task_name} could not fetch any results.
</div>
`);
} else {
render_directories_in_xl_modal(response['result'][0]['directory_files'].length, response['subscan']['subdomain_name'], response['result'][0]['directory_files']);
}
}
} else {
$('#xl-modal-content').append(`
<div class="alert alert-info mt-2" role="alert">
<i class="mdi mdi-alert-circle-outline me-2"></i> ${task_name} could not fetch any results.
</div>
`);
}
$('#modal_xl_scroll_dialog').modal('show');
$("body").tooltip({
selector: '[data-toggle=tooltip]'
});
});
}
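// get_http_status_badge colours an HTTP status code: green for 2xx, orange for 3xx, red otherwise, and an empty string for 0.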
function get_http_status_badge(data) {
if (data >= 200 && data < 300) {
return "<span class='badge badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return "<span class='badge badge-soft-danger'>" + data + "</span>";
}
function render_endpoint_in_xlmodal(endpoint_count, subdomain_name, result) {
// This function renders endpoints datatable in xl modal
// Used in Subscan results and subdomain to endpoints modal
$('#xl-modal-content').append(`<h5> ${endpoint_count} Endpoints Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`
<div class="">
<table id="endpoint-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>HTTP URL</th>
<th>Status</th>
<th>Page Title</th>
<th>Tags</th>
<th>Content Type</th>
<th>Content Length</th>
<th>Response Time</th>
</tr>
</thead>
<tbody id="endpoint_tbody">
</tbody>
</table>
</div>
`);
$('#endpoint_tbody').empty();
for (var endpoint_obj in result) {
var endpoint = result[endpoint_obj];
var tech_badge = '';
var web_server = '';
if (endpoint['technologies']) {
tech_badge = '<div>' + parse_technology(endpoint['technologies'], "primary", outline = true);
}
if (endpoint['webserver']) {
web_server = `<span class='m-1 badge badge-soft-info' data-toggle="tooltip" data-placement="top" title="Web Server">${endpoint['webserver']}</span>`;
}
var url = split_into_lines(endpoint['http_url'], 70);
var rand_id = get_randid();
tech_badge += web_server + '</div>';
var http_url_td = "<a href='" + endpoint['http_url'] + `' target='_blank' class='text-primary'>` + url + "</a>" + tech_badge;
$('#endpoint_tbody').append(`
<tr>
<td>${http_url_td}</td>
<td>${get_http_status_badge(endpoint['http_status'])}</td>
<td>${return_str_if_not_null(endpoint['page_title'])}</td>
<td>${parse_comma_values_into_span(endpoint['matched_gf_patterns'], "danger", outline=true)}</td>
<td>${return_str_if_not_null(endpoint['content_type'])}</td>
<td>${return_str_if_not_null(endpoint['content_length'])}</td>
<td>${get_response_time_text(endpoint['response_time'])}</td>
</tr>
`);
}
$("#endpoint-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[5, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded")
}
});
}
function render_vulnerability_in_xl_modal(vuln_count, subdomain_name, result) {
// This function will render the vulnerability datatable in xl modal
$('#xl-modal-content').append(`<h5> ${vuln_count} Vulnerabilities Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`<ol id="vuln_results_ol" class="list-group list-group-numbered"></ol>`);
$('#xl-modal-content').append(`
<div class="">
<table id="vulnerability-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>Type</th>
<th>Title</th>
<th class="text-center">Severity</th>
<th>CVSS Score</th>
<th>CVE/CWE</th>
<th>Vulnerable URL</th>
<th>Description</th>
<th class="text-center dt-no-sorting">Action</th>
</tr>
</thead>
<tbody id="vuln_tbody">
</tbody>
</table>
</div>
`);
$('#vuln_tbody').empty();
for (var vuln in result) {
var vuln_obj = result[vuln];
var vuln_type = vuln_obj['type'] ? `<span class="badge badge-soft-primary"> ${vuln_obj['type'].toUpperCase()} </span>` : '';
var tags = '';
var cvss_metrics_badge = '';
switch (vuln_obj['severity']) {
case 'Info':
color = 'primary'
badge_color = 'soft-primary'
break;
case 'Low':
color = 'low'
badge_color = 'soft-warning'
break;
case 'Medium':
color = 'warning'
badge_color = 'soft-warning'
break;
case 'High':
color = 'danger'
badge_color = 'soft-danger'
break;
case 'Critical':
color = 'critical'
badge_color = 'critical'
break;
default:
}
if (vuln_obj['tags']) {
tags = '<div>';
vuln_obj['tags'].forEach(tag => {
tags += `<span class="badge badge-${badge_color} me-1 mb-1" data-toggle="tooltip" data-placement="top" title="Tags">${tag.name}</span>`;
});
tags += '</div>';
}
if (vuln_obj['cvss_metrics']) {
cvss_metrics_badge = `<div><span class="badge badge-outline-primary my-1" data-toggle="tooltip" data-placement="top" title="CVSS Metrics">${vuln_obj['cvss_metrics']}</span></div>`;
}
var vuln_title = `<b class="text-${color}">` + vuln_obj['name'] + `</b>` + cvss_metrics_badge + tags;
var badge = 'danger';
var cvss_score = '';
if (vuln_obj['cvss_score']) {
if (vuln_obj['cvss_score'] > 0.1 && vuln_obj['cvss_score'] <= 3.9) {
badge = 'info';
} else if (vuln_obj['cvss_score'] > 3.9 && vuln_obj['cvss_score'] <= 6.9) {
badge = 'warning';
} else if (vuln_obj['cvss_score'] > 6.9 && vuln_obj['cvss_score'] <= 8.9) {
badge = 'danger';
}
cvss_score = `<span class="badge badge-outline-${badge}" data-toggle="tooltip" data-placement="top" title="CVSS Score">${vuln_obj['cvss_score']}</span>`;
}
var cve_cwe_badge = '<div>';
if (vuln_obj['cve_ids']) {
vuln_obj['cve_ids'].forEach(cve => {
cve_cwe_badge += `<a href="https://google.com/search?q=${cve.name.toUpperCase()}" target="_blank" class="badge badge-outline-primary me-1 mt-1" data-toggle="tooltip" data-placement="top" title="CVE ID">${cve.name.toUpperCase()}</a>`;
});
}
if (vuln_obj['cwe_ids']) {
vuln_obj['cwe_ids'].forEach(cwe => {
cve_cwe_badge += `<a href="https://google.com/search?q=${cwe.name.toUpperCase()}" target="_blank" class="badge badge-outline-primary me-1 mt-1" data-toggle="tooltip" data-placement="top" title="CWE ID">${cwe.name.toUpperCase()}</a>`;
});
}
cve_cwe_badge += '</div>';
var http_url = vuln_obj['http_url'].includes('http') ? "<a href='" + htmlEncode(vuln_obj['http_url']) + "' target='_blank' class='text-danger'>" + htmlEncode(vuln_obj['http_url']) + "</a>" : vuln_obj['http_url'];
var description = vuln_obj['description'] ? `<div>${split_into_lines(vuln_obj['description'], 30)}</div>` : '';
// show extracted results, and show matcher names, matcher names can be in badges
if (vuln_obj['matcher_name']) {
description += `<span class="badge badge-soft-primary" data-toggle="tooltip" data-placement="top" title="Matcher Name">${vuln_obj['matcher_name']}</span>`;
}
if (vuln_obj['extracted_results'] && vuln_obj['extracted_results'].length > 0) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#results_${vuln_obj['id']}" aria-expanded="false" aria-controls="results_${vuln_obj['id']}">Extracted Results <i class="fe-chevron-down"></i></a>`;
description += `<div class="collapse" id="results_${vuln_obj['id']}"><ul>`;
vuln_obj['extracted_results'].forEach(results => {
description += `<li>${results}</li>`;
});
description += '</ul></div>';
}
if (vuln_obj['references'] && vuln_obj['references'].length > 0) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#references_${vuln_obj['id']}" aria-expanded="false" aria-controls="references_${vuln_obj['id']}">References <i class="fe-chevron-down"></i></a>`;
description += `<div class="collapse" id="references_${vuln_obj['id']}"><ul>`;
vuln_obj['references'].forEach(reference => {
description += `<li><a href="${reference.url}" target="_blank">${reference.url}</a></li>`;
});
description += '</ul></div>';
}
if (vuln_obj['curl_command']) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#curl_command_${vuln_obj['id']}" aria-expanded="false" aria-controls="curl_command_${vuln_obj['id']}">CURL command <i class="fe-terminal"></i></a>`;
description += `<div class="collapse" id="curl_command_${vuln_obj['id']}"><ul>`;
description += `<li><code>${split_into_lines(htmlEncode(vuln_obj['curl_command']), 30)}</code></li>`;
description += '</ul></div>';
}
var action_icon = vuln_obj['hackerone_report_id'] ? '' : `
<div class="btn-group mb-2 dropstart">
<a href="#" class="text-dark dropdown-toggle float-end" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-more-horizontal"><circle cx="12" cy="12" r="1"></circle><circle cx="19" cy="12" r="1"></circle><circle cx="5" cy="12" r="1"></circle></svg>
</a>
<div class="dropdown-menu" style="">
<a class="dropdown-item" href="javascript:report_hackerone(${vuln_obj['id']}, '${vuln_obj['severity']}');">Report to Hackerone</a>
</div>
</div>`;
$('#vuln_tbody').append(`
<tr>
<td>${vuln_type}</td>
<td>${vuln_title}</td>
<td class="text-center">${get_severity_badge(vuln_obj['severity'])}</td>
<td class="text-center">${cvss_score}</td>
<td>${cve_cwe_badge}</td>
<td>${http_url}</td>
<td>${description}</td>
<td>${action_icon}</td>
</tr>
`);
}
$("#vulnerability-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[5, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded")
}
});
}
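// render_directories_in_xl_modal builds the fuzzed directories datatable (URL, status, content length, lines, words) inside the XL modal.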
function render_directories_in_xl_modal(directory_count, subdomain_name, result) {
$('#xl-modal-content').append(`<h5> ${directory_count} Directories Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`
<div class="">
<table id="directory-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>Directory</th>
<th class="text-center">HTTP Status</th>
<th>Content Length</th>
<th>Lines</th>
<th>Words</th>
</tr>
</thead>
<tbody id="directory_tbody">
</tbody>
</table>
</div>
`);
$('#directory_tbody').empty();
for (var dir_obj in result) {
var dir = result[dir_obj];
$('#directory_tbody').append(`
<tr>
<td><a href="${dir.url}" target="_blank">${dir.name}</a></td>
<td class="text-center">${get_http_status_badge(dir.http_status)}</td>
<td>${dir.length}</td>
<td>${dir.lines}</td>
<td>${dir.words}</td>
</tr>
`);
}
var interesting_keywords_array = [];
var dir_modal_table = $("#directory-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[2, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded");
}
});
// TODO: Find interesting dirs
// fetch("/api/listInterestingKeywords")
// .then(response => {
// return response.json();
// })
// .then(data => {
// interesting_keywords_array = data;
// dir_modal_table.rows().every(function(){
// console.log(this.data());
// });
// });
}
function get_and_render_subscan_history(subdomain_id, subdomain_name) {
// This function displays the subscan history in a modal for any particular subdomain
var data = {
'subdomain_id': subdomain_id
};
fetch('/api/listSubScans/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data);
if (data['status']) {
$('#modal_title').html('Subscan History for subdomain ' + subdomain_name);
$('#modal-content').empty();
$('#modal-content').append(`<div id="subscan_history_table"></div>`);
$('#subscan_history_table').empty();
for (var result in data['results']) {
var result_obj = data['results'][result];
var error_message = '';
var task_name = get_task_name(result_obj);
if (result_obj.status == 0) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Failed</span>';
error_message = `</br><span class="text-danger">Error: ${result_obj.error_message}</span>`;
} else if (result_obj.status == 3) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Aborted</span>';
} else if (result_obj.status == 2) {
color = 'success';
bg_color = 'bg-soft-success';
status_badge = '<span class="float-end badge bg-success">Task Completed</span>';
}
$('#subscan_history_table').append(`
<div class="card border-${color} border mini-card">
<a href="#" class="text-reset item-hovered" onclick="show_subscan_results(${result_obj['id']})">
<div class="card-header ${bg_color} text-${color} mini-card-header">
${task_name} on <b>${result_obj.subdomain_name}</b> using engine <b>${htmlEncode(result_obj.engine)}</b>
</div>
<div class="card-body mini-card-body">
<p class="card-text">
${status_badge}
<span class="">
Task Completed ${result_obj.completed_ago} ago
</span>
Took ${result_obj.time_taken}
${error_message}
</p>
</div>
</a>
</div>
`);
}
$('#modal_dialog').modal('show');
}
});
}
function fetch_whois(domain_name, save_db) {
// this function will fetch WHOIS record for any subdomain and also display
// snackbar once whois is fetched
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}`;
if (save_db) {
url += '&save_db';
}
$('[data-toggle="tooltip"]').tooltip('hide');
Snackbar.show({
text: 'Fetching WHOIS...',
pos: 'top-right',
duration: 1500,
});
$("#whois_not_fetched_alert").hide();
$("#whois_fetching_alert").show();
fetch(url, {}).then(res => res.json())
.then(function(response) {
$("#whois_fetching_alert").hide();
document.getElementById('domain_age').innerHTML = response['domain']['domain_age'] + ' ' + response['domain']['date_created'];
document.getElementById('ip_address').innerHTML = response['domain']['ip_address'];
document.getElementById('ip_geolocation').innerHTML = response['domain']['geolocation'];
document.getElementById('registrant_name').innerHTML = response['registrant']['name'];
console.log(response['registrant']['organization'])
document.getElementById('registrant_organization').innerHTML = response['registrant']['organization'] ? response['registrant']['organization'] : ' ';
document.getElementById('registrant_address').innerHTML = response['registrant']['address'] + ' ' + response['registrant']['city'] + ' ' + response['registrant']['state'] + ' ' + response['registrant']['country'];
document.getElementById('registrant_phone_numbers').innerHTML = response['registrant']['tel'];
document.getElementById('registrant_fax').innerHTML = response['registrant']['fax'];
Snackbar.show({
text: 'Whois Fetched...',
pos: 'top-right',
duration: 3000
});
$("#whois_fetched_alert").show();
$("#whois_fetched_alert").fadeTo(2000, 500).slideUp(1500, function() {
$("#whois_fetched_alert").slideUp(500);
});
}).catch(function(error) {
console.log(error);
});
}
function get_target_whois(domain_name) {
// this function will fetch whois from db, if not fetched, will make a fresh
// query and will display whois on a modal
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}&fetch_from_db`
Swal.fire({
title: `Fetching WHOIS details for ${domain_name}...`
});
swal.showLoading();
fetch(url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
console.log(response);
if (response.status) {
swal.close();
display_whois_on_modal(response);
} else {
fetch(`/api/tools/whois/?format=json&ip_domain=${domain_name}&save_db`, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
console.log(response);
if (response.status) {
swal.close();
display_whois_on_modal(response);
} else {
Swal.fire({
title: 'Oops!',
text: `reNgine could not fetch WHOIS records for ${domain_name}!`,
icon: 'error'
});
}
});
}
});
}
function get_domain_whois(domain_name, show_add_target_btn=false) {
// this function will get whois for domains that are not targets, this will
// not store whois into db nor create target
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}`
Swal.fire({
title: `Fetching WHOIS details for ${domain_name}...`
});
$('.modal').modal('hide');
swal.showLoading();
fetch(url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
swal.close();
if (response.status) {
display_whois_on_modal(response, show_add_target_btn=show_add_target_btn);
} else {
Swal.fire({
title: 'Oops!',
text: `reNgine could not fetch WHOIS records for ${domain_name}! ${response['message']}`,
icon: 'error'
});
}
});
}
function display_whois_on_modal(response, show_add_target_btn=false) {
// this function will display whois data on modal, should be followed after get_domain_whois()
$('#modal_dialog').modal('show');
$('#modal-content').empty();
$("#modal-footer").empty();
content = `<div class="row mt-3">
<div class="col-sm-3">
<div class="nav flex-column nav-pills nav-pills-tab" id="v-pills-tab" role="tablist" aria-orientation="vertical">
<a class="nav-link active show mb-1" id="v-pills-domain-tab" data-bs-toggle="pill" href="#v-pills-domain" role="tab" aria-controls="v-pills-domain-tab" aria-selected="true">Domain info</a>
<a class="nav-link mb-1" id="v-pills-whois-tab" data-bs-toggle="pill" href="#v-pills-whois" role="tab" aria-controls="v-pills-whois" aria-selected="false">Whois</a>
<a class="nav-link mb-1" id="v-pills-nameserver-tab" data-bs-toggle="pill" href="#v-pills-nameserver" role="tab" aria-controls="v-pills-nameserver"aria-selected="false">Nameservers</a>
<a class="nav-link mb-1" id="v-pills-history-tab" data-bs-toggle="pill" href="#v-pills-history" role="tab" aria-controls="v-pills-history"aria-selected="false">NS History</a>
<a class="nav-link mb-1" id="v-pills-related-tab" data-bs-toggle="pill" href="#v-pills-related" role="tab" aria-controls="v-pills-related"aria-selected="false">Related Domains`;
if (response['related_domains'].length) {
content += `<span class="badge badge-soft-info float-end">${response['related_domains'].length}</span>`
}
content += `</a>`;
content += `<a class="nav-link mb-1" id="v-pills-related-tld-tab" data-bs-toggle="pill" href="#v-pills-related-tld" role="tab" aria-controls="v-pills-related-tld"aria-selected="false">Related TLDs`;
if (response['related_tlds'].length) {
content += `<span class="badge badge-soft-info float-end">${response['related_tlds'].length}</span>`
}
content += `</a>`;
content += `</div></div>
<div class="col-sm-9">
<div class="tab-content pt-0">
<div class="tab-pane fade active show" id="v-pills-domain" role="tabpanel" aria-labelledby="v-pills-domain-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
<h4 class="header-title">Domain Information</h4>
<table class="domain_details_table table table-hover table-borderless">
<tr style="display: none">
<th> </th>
<th> </th>
</tr>
<tr>
<td>Domain Name</td>
<td>${response['ip_domain'] ? response['ip_domain']: "-"}</td>
</tr>
<tr>
<td>Domain age</td>
<td>${response['domain']['domain_age'] ? response['domain']['domain_age']: "-"}</td>
</tr>
<tr>
<td>IP Address</td>
<td>${response['domain']['ip_address'] ? response['domain']['ip_address']: "-" }</td>
</tr>
<tr>
<td>IP Geolocation</td>
<td>
${response['domain']['geolocation_iso'] ? `<img src="https://domainbigdata.com/img/flags-iso/flat/24/${response['domain']['geolocation_iso']}.png" alt="${response['domain']['geolocation_iso']}">` : ""}
${response['domain']['geolocation'] ? response['domain']['geolocation'] : "-"}</td>
</tr>
</table>
<h4 class="header-title mt-3">Registrant Information</h4>
<table class="domain_details_table table table-hover table-borderless">
<tr style="display: none">
<th> </th>
<th> </th>
</tr>
<tr>
<td>Name</td>
<td>${response['registrant']['name'] ? response['registrant']['name']: "-"}</td>
</tr>
<tr>
<td>Email</td>
<td>${response['registrant']['email'] ? response['registrant']['email']: "-"}</td>
</tr>
<tr>
<td>Organization</td>
<td>${response['registrant']['organization'] ? response['registrant']['organization']: "-"}</td>
</tr>
<tr>
<td>Address</td>
<td>${response['registrant']['address'] ? response['registrant']['address']: "-"}</td>
</tr>
<tr>
<td>Phone Numbers</td>
<td>${response['registrant']['tel'] ? response['registrant']['tel']: "-"}</td>
</tr>
<tr>
<td>Fax</td>
<td>${response['registrant']['fax'] ? response['registrant']['fax']: "-"}</td>
</tr>
</table>
</div>
<div class="tab-pane fade" id="v-pills-whois" role="tabpanel" aria-labelledby="v-pills-whois-tab">
<pre data-simplebar style="max-height: 310px; min-height: 310px;">${response['whois'] ? response['whois'] : "No Whois Data found!"}</pre>
</div>
<div class="tab-pane fade" id="v-pills-history" role="tabpanel" aria-labelledby="v-pills-history-tab" data-simplebar style="max-height: 300px; min-height: 300px;">`;
if (response['nameserver']['history'].length) {
content += `<table class="table table-striped mb-0">
<thead class="table-dark">
<td>Date</td>
<td>Action</td>
<td>NameServer</td>
</thead>
<tbody>`;
for (var history in response['nameserver']['history']) {
var obj = response['nameserver']['history'][history];
content += `
<tr>
<td>${obj['date']? obj['date'] : '-'}</td>
<td>${obj['action']? obj['action'] : '-'}</td>
<td>${obj['server']? obj['server'] : '-'}</td>
</tr>
`;
}
content += `</tbody></table>`
} else {
content += 'No DNS history records found.';
}
content += `
</div>
<div class="tab-pane fade" id="v-pills-nameserver" role="tabpanel" aria-labelledby="v-pills-nameserver-tab" data-simplebar style="max-height: 300px; min-height: 300px;">`;
if (response['nameserver']['records'].length) {
content += `<table class="table table-striped mb-0">
<thead class="table-dark">
<td>Type</td>
<td>Hostname</td>
<td>Address</td>
<td>TTL</td>
<td>Class</td>
<td>Preference</td>
</thead>
<tbody>`;
for (var record in response['nameserver']['records']) {
var obj = response['nameserver']['records'][record];
content += `
<tr>
<td><span class="badge badge-soft-primary me-1 ms-1">${obj['type']? obj['type'] : '-'}</span</td>
<td>${obj['hostname']? obj['hostname'] : '-'}</td>
<td>${obj['address']? obj['address'] : '-'}</td>
<td>${obj['ttl']? obj['ttl'] : '-'}</td>
<td>${obj['ns_class']? obj['ns_class'] : '-'}</td>
<td>${obj['preference']? obj['preference'] : '-'}</td>
</tr>`;
}
content += `</tbody></table>`;
} else {
content += `No DNS records found.`;
}
content += `
</div>
<div class="tab-pane fade" id="v-pills-related" role="tabpanel" aria-labelledby="v-pills-related-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
`;
if (!response['related_domains'].length) {
content += `<div class="alert alert-warning" role="alert">
<i class="mdi mdi-alert-outline me-2"></i> Oops! Could not find any related domains.
</div>`;
}
for (var domain in response['related_domains']) {
var domain_obj = response['related_domains'][domain];
content += `<span class="badge badge-soft-primary badge-link waves-effect waves-light me-1" data-toggle="tooltip" title="Add ${domain_obj} as target." onclick="add_target('${domain_obj}')">${domain_obj}</span>`
}
content += `
</div>
<div class="tab-pane fade" id="v-pills-related-tld" role="tabpanel" aria-labelledby="v-pills-related-tld-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
`;
if (!response['related_tlds'].length) {
content += `<div class="alert alert-warning" role="alert">
<i class="mdi mdi-alert-outline me-2"></i> Oops! Could not find any related TLDs.
</div>`;
}
for (var domain in response['related_tlds']) {
var domain_obj = response['related_tlds'][domain];
content += `<span class="badge badge-soft-primary badge-link waves-effect waves-light me-1" data-toggle="tooltip" title="Add ${domain_obj} as target." onclick="add_target('${domain_obj}')">${domain_obj}</span>`
}
content += `
</div>
</div>
</div>
</div>`;
if (show_add_target_btn) {
content += `<div class="text-center">
<button class="btn btn-primary float-end" type="submit" id="search_whois_toolbox_btn" onclick="add_target('${response['ip_domain']}')">Add ${response['ip_domain']} as target</button>
</div>`
}
$('#modal-content').append(content);
$('[data-toggle="tooltip"]').tooltip();
}
function show_quick_add_target_modal() {
// this function will display the modal to add target
$('#modal_title').html('Add target');
$('#modal-content').empty();
$('#modal-content').append(`
If you would like to add IPs/CIDRs or multiple domains, please <a href="/target/add/target">click here.</a>
<div class="mb-3">
<label for="target_name_modal" class="form-label">Target Name</label>
<input class="form-control" type="text" id="target_name_modal" required="" placeholder="yourdomain.com">
</div>
<div class="mb-3">
<label for="target_description_modal" class="form-label">Description (Optional)</label>
<input class="form-control" type="text" id="target_description_modal" required="" placeholder="Target Description">
</div>
<div class="mb-3">
<label for="h1_handle_modal" class="form-label">Hackerone Target Team Handle (Optional)</label>
<input class="form-control" type="text" id="h1_handle_modal" placeholder="hackerone.com/team_handle, Only enter team_handle after /">
</div>
<div class="mb-3 text-center">
<button class="btn btn-primary float-end" type="submit" id="add_target_modal" onclick="add_quick_target()">Add Target</button>
</div>
`);
$('#modal_dialog').modal('show');
}
function add_quick_target() {
// this function will be a onclick for add target button on add_target modal
$('#modal_dialog').modal('hide');
var domain_name = $('#target_name_modal').val();
var description = $('#target_description_modal').val();
var h1_handle = $('#h1_handle_modal').val();
const data = {
'domain_name': domain_name,
'h1_team_handle': h1_handle,
'description': description
};
add_target(domain_name, h1_handle, description);
}
function add_target(domain_name, h1_handle = null, description = null) {
// this function will add domain_name as target
const add_api = '/api/add/target/?format=json';
const data = {
'domain_name': domain_name,
'h1_team_handle': h1_handle,
'description': description
};
swal.queue([{
title: 'Add Target',
text: `Would you like to add ${domain_name} as target?`,
icon: 'info',
showCancelButton: true,
confirmButtonText: 'Add Target',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(add_api, {
method: 'POST',
credentials: "same-origin",
headers: {
'X-CSRFToken': getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
}).then(function(response) {
return response.json();
}).then(function(data) {
if (data.status) {
swal.queue([{
title: 'Target Successfully added!',
text: `Do you wish to initiate the scan on new target?`,
icon: 'success',
showCancelButton: true,
confirmButtonText: 'Initiate Scan',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
window.location = `/scan/start/${data.domain_id}`;
}
}]);
} else {
swal.insertQueueStep({
icon: 'error',
title: data.message
});
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to add the target!'
});
})
}
}]);
}
function loadSubscanHistoryWidget(scan_history_id = null, domain_id = null) {
// This function will load the subscan history widget
if (scan_history_id) {
var data = {
'scan_history_id': scan_history_id
}
}
if (domain_id) {
var data = {
'domain_id': domain_id
}
}
fetch('/api/listSubScans/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data);
$('#subscan_history_widget').empty();
if (data['status']) {
$('#sub_scan_history_count').append(`
<span class="badge badge-soft-primary me-1">${data['results'].length}</span>
`)
for (var result in data['results']) {
var error_message = '';
var result_obj = data['results'][result];
var task_name = get_task_name(result_obj);
if (result_obj.status == 0) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Failed</span>';
error_message = `</br><span class="text-danger">Error: ${result_obj.error_message}</span>`;
} else if (result_obj.status == 3) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Aborted</span>';
} else if (result_obj.status == 2) {
color = 'success';
bg_color = 'bg-soft-success';
status_badge = '<span class="float-end badge bg-success">Task Completed</span>';
} else if (result_obj.status == 1) {
color = 'primary';
bg_color = 'bg-soft-primary';
status_badge = '<span class="float-end badge bg-primary">Running</span>';
}
$('#subscan_history_widget').append(`
<div class="card border-${color} border mini-card">
<a href="#" class="text-reset item-hovered" onclick="show_subscan_results(${result_obj['id']})">
<div class="card-header ${bg_color} text-${color} mini-card-header">
${task_name} on <b>${result_obj.subdomain_name}</b>
</div>
<div class="card-body mini-card-body">
<p class="card-text">
${status_badge}
<span class="">
Task Completed ${result_obj.completed_ago} ago
</span>
Took ${result_obj.time_taken}
${error_message}
</p>
</div>
</a>
</div>
`);
}
} else {
$('#sub_scan_history_count').append(`
<span class="badge badge-soft-primary me-1">0</span>
`)
$('#subscan_history_widget').append(`
<div class="alert alert-warning alert-dismissible fade show mt-2" role="alert">
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
No subscans have been initiated for any subdomains. You can select individual subdomains and initiate subscans like Directory Fuzzing, Vulnerability Scan, etc.
</div>
`);
}
});
}
function get_ips(scan_id=null, domain_id=null){
// this function will fetch and render ips in widget
var url = '/api/queryIps/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#ip-address-count').empty();
for (var val in data['ips']){
ip = data['ips'][val]
badge_color = ip['is_cdn'] ? 'warning' : 'primary';
if (scan_id) {
$("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${ip['ports'].length} Ports Open." onclick="get_ip_details('${ip['address']}', scan_id=${scan_id}, domain_id=null)">${ip['address']}</span>`);
}
else if (domain_id) {
$("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${ip['ports'].length} Ports Open." onclick="get_ip_details('${ip['address']}', scan_id=null, domain_id=${domain_id})">${ip['address']}</span>`);
}
// $("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1' data-toggle="modal" data-target="#tabsModal">${ip['address']}</span>`);
}
$('#ip-address-count').html(`<span class="badge badge-soft-primary me-1">${data['ips'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
function get_technologies(scan_id=null, domain_id=null){
// this function will fetch and render tech in widget
var url = '/api/queryTechnologies/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#technologies-count').empty();
for (var val in data['technologies']){
tech = data['technologies'][val]
if (scan_id) {
$("#technologies").append(`<span class='badge badge-soft-primary m-1 badge-link' data-toggle="tooltip" title="${tech['count']} Subdomains use this technology." onclick="get_tech_details('${tech['name']}', scan_id=${scan_id}, domain_id=null)">${tech['name']}</span>`);
}
else if (domain_id) {
$("#technologies").append(`<span class='badge badge-soft-primary m-1 badge-link' data-toggle="tooltip" title="${tech['count']} Subdomains use this technology." onclick="get_tech_details('${tech['name']}', scan_id=null, domain_id=${domain_id})">${tech['name']}</span>`);
}
}
$('#technologies-count').html(`<span class="badge badge-soft-primary me-1">${data['technologies'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
function get_ports(scan_id=null, domain_id=null){
// this function will fetch and render ports in widget
var url = '/api/queryPorts/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#ports-count').empty();
for (var val in data['ports']){
port = data['ports'][val]
badge_color = port['is_uncommon'] ? 'danger' : 'primary';
if (scan_id) {
$("#ports").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${port['description']}" onclick="get_port_details('${port['number']}', scan_id=${scan_id}, domain_id=null)">${port['number']}/${port['service_name']}</span>`);
}
else if (domain_id){
$("#ports").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${port['description']}" onclick="get_port_details('${port['number']}', scan_id=null, domain_id=${domain_id})">${port['number']}/${port['service_name']}</span>`);
}
}
$('#ports-count').html(`<span class="badge badge-soft-primary me-1">${data['ports'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
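// this function will open the details modal for a given IP address and
// fetch its open ports and associated subdomains using the query APIs
// example (illustrative values): get_ip_details('10.10.10.10', scan_id=12, domain_id=null)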
function get_ip_details(ip_address, scan_id=null, domain_id=null){
var port_url = `/api/queryPorts/?ip_address=${ip_address}`;
var subdomain_url = `/api/querySubdomains/?ip_address=${ip_address}`;
if (scan_id) {
port_url += `&scan_id=${scan_id}`;
subdomain_url += `&scan_id=${scan_id}`;
}
else if(domain_id){
port_url += `&target_id=${domain_id}`;
subdomain_url += `&target_id=${domain_id}`;
}
port_url += `&format=json`;
subdomain_url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
var port_loader = `<span class="inner-div spinner-border text-primary align-self-center loader-sm" id="port-modal-loader"></span>`;
var subdomain_loader = `<span class="inner-div spinner-border text-primary align-self-center loader-sm" id="subdomain-modal-loader"></span>`;
// add tab modal title
$('#modal_title').html('Details for IP: <b>' + ip_address + '</b>');
$('#modal-content').empty();
$('#modal-tabs').empty();
$('#modal-content').append(`<ul class='nav nav-tabs nav-bordered' id="modal_tab_nav"></ul><div id="modal_tab_content" class="tab-content"></div>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link active" data-bs-toggle="tab" href="#modal_content_port" aria-expanded="true"><span id="modal-open-ports-count"></span>Open Ports ${port_loader}</a></li>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link" data-bs-toggle="tab" href="#modal_content_subdomain" aria-expanded="false"><span id="modal-subdomain-count"></span>Subdomains ${subdomain_loader}</a></li>`)
// add content area
$('#modal_tab_content').empty();
$('#modal_tab_content').append(`<div class="tab-pane show active" id="modal_content_port"></div><div class="tab-pane" id="modal_content_subdomain"></div>`);
$('#modal-open-ports').append(`<div class="modal-text" id="modal-text-open-port"></div>`);
$('#modal-text-open-port').append(`<ul id="modal-open-port-text"></ul>`);
$('#modal_content_port').append(`<ul id="modal_port_ul"></ul>`);
$('#modal_content_subdomain').append(`<ul id="modal_subdomain_ul"></ul>`);
$.getJSON(port_url, function(data) {
$('#modal_content_port').empty();
$('#modal_content_port').append(`<p>IP Address ${ip_address} has ${data['ports'].length} Open Ports</p>`);
$('#modal-open-ports-count').html(`<b>${data['ports'].length}</b> `);
for (port in data['ports']){
port_obj = data['ports'][port];
badge_color = port_obj['is_uncommon'] ? 'danger' : 'info';
$("#modal_content_port").append(`<li class="mt-1">${port_obj['description']} <b class="text-${badge_color}">(${port_obj['number']}/${port_obj['service_name']})</b></li>`)
}
$("#port-modal-loader").remove();
});
$('#modal_dialog').modal('show');
// query subdomains
$.getJSON(subdomain_url, function(data) {
$('#modal_content_subdomain').empty();
$('#modal_content_subdomain').append(`<p>${data['subdomains'].length} Subdomains are associated with IP ${ip_address}`);
$('#modal-subdomain-count').html(`<b>${data['subdomains'].length}</b> `);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal_content_subdomain").append(`<li class="mt-1" id="${li_id}"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal_content_subdomain").append(`<li class="mt-1 text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal-text-subdomain").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
});
}
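// this function will open the details modal for a given port and
// fetch the IP addresses and subdomains that have this port open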
function get_port_details(port, scan_id=null, domain_id=null){
var ip_url = `/api/queryIps/?port=${port}`;
var subdomain_url = `/api/querySubdomains/?port=${port}`;
if (scan_id) {
ip_url += `&scan_id=${scan_id}`;
subdomain_url += `&scan_id=${scan_id}`;
}
else if(domain_id){
ip_url += `&target_id=${domain_id}`;
subdomain_url += `&target_id=${domain_id}`;
}
ip_url += `&format=json`;
subdomain_url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
var ip_spinner = `<span class="spinner-border spinner-border-sm me-1" id="ip-modal-loader"></span>`;
var subdomain_spinner = `<span class="spinner-border spinner-border-sm me-1" id="subdomain-modal-loader"></span>`;
$('#modal_title').html('Details for Port: <b>' + port + '</b>');
$('#modal-content').empty();
$('#modal-tabs').empty();
$('#modal-content').append(`<ul class='nav nav-tabs nav-bordered' id="modal_tab_nav"></ul><div id="modal_tab_content" class="tab-content"></div>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link active" data-bs-toggle="tab" href="#modal_content_ip" aria-expanded="true"><span id="modal-ip-count"></span>IP Address ${ip_spinner}</a></li>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link" data-bs-toggle="tab" href="#modal_content_subdomain" aria-expanded="false"><span id="modal-subdomain-count"></span>Subdomains ${subdomain_spinner}</a></li>`)
// add content area
$('#modal_tab_content').append(`<div class="tab-pane show active" id="modal_content_ip"></div><div class="tab-pane" id="modal_content_subdomain"></div>`);
$('#modal_content_ip').append(`<ul id="modal_ip_ul"></ul>`);
$('#modal_content_subdomain').append(`<ul id="modal_subdomain_ul"></ul>`);
$('#modal_dialog').modal('show');
$.getJSON(ip_url, function(data) {
$('#modal_ip_ul').empty();
$('#modal_ip_ul').append(`<p>${data['ips'].length} IP Addresses have Port ${port} Open`);
$('#modal-ip-count').html(`<b>${data['ips'].length}</b> `);
for (ip in data['ips']){
ip_obj = data['ips'][ip];
text_color = ip_obj['is_cdn'] ? 'warning' : '';
$("#modal_ip_ul").append(`<li class='mt-1 text-${text_color}'>${ip_obj['address']}</li>`)
}
$('#modal_ip_ul').append(`<span class="float-end text-warning">*IP Address highlighted are CDN IP Address</span>`);
$("#ip-modal-loader").remove();
});
// query subdomains
$.getJSON(subdomain_url, function(data) {
$('#modal_subdomain_ul').empty();
$('#modal_subdomain_ul').append(`<p>${data['subdomains'].length} Subdomains have Port ${port} Open`);
$('#modal-subdomain-count').html(`<b>${data['subdomains'].length}</b> `);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal_subdomain_ul").append(`<li id="${li_id}" class="mt-1"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal_subdomain_ul").append(`<li class="mt-1 text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal_subdomain_ul").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
});
}
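// this function will open the details modal for a technology and
// list all subdomains using that technology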
function get_tech_details(tech, scan_id=null, domain_id=null){
var url = `/api/querySubdomains/?tech=${tech}`;
if (scan_id) {
url += `&scan_id=${scan_id}`;
}
else if(domain_id){
url += `&target_id=${domain_id}`;
}
url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
// render tab modal
$('.modal-title').html('Details for Technology: <b>' + tech + '</b>');
$('#modal_dialog').modal('show');
$('.modal-text').empty();
$('#modal-footer').empty();
$('.modal-text').append(`<div class='outer-div' id="modal-loader"><span class="inner-div spinner-border text-primary align-self-center loader-sm"></span></div>`);
// query subdomains
$.getJSON(url, function(data) {
$('#modal-loader').empty();
$('#modal-content').empty();
$('#modal-content').append(`${data['subdomains'].length} Subdomains are using ${tech}`);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal-content").append(`<li id="${li_id}"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal-content").append(`<li class="text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal-content").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
}).fail(function(){
$('#modal-loader').empty();
});
}
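// this function returns a colored badge for a http status code
// (2XX -> success, 3XX -> warning, everything else -> danger)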
function get_http_badge(http_status){
switch (true) {
case (http_status >= 400):
badge_color = 'danger'
break;
case (http_status >= 300):
badge_color = 'warning'
break;
case (http_status >= 200):
badge_color = 'success'
break;
default:
badge_color = 'danger'
}
if (http_status) {
badge = `<span class="badge badge-soft-${badge_color} me-1 ms-1 bs-tooltip" data-placement="top" title="HTTP Status">${http_status}</span>`;
return badge
}
}
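// this function fetches CVE details from the cve_details API and renders
// summary, CVSS, affected products/versions and references in the XL modal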
function get_and_render_cve_details(cve_id){
var api_url = `/api/tools/cve_details/?cve_id=${cve_id}&format=json`;
Swal.fire({
title: 'Fetching CVE Details...'
});
swal.showLoading();
fetch(api_url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": "application/json"
},
}).then(response => response.json()).then(function(response) {
console.log(response);
swal.close();
if (response.status) {
$('#xl-modal-title').empty();
$('#xl-modal-content').empty();
$('#xl-modal-footer').empty();
$('#xl-modal_title').html(`CVE Details of ${cve_id}`);
var cvss_score_badge = 'danger';
if (response.result.cvss > 0.1 && response.result.cvss <= 3.9) {
cvss_score_badge = 'info';
}
else if (response.result.cvss > 3.9 && response.result.cvss <= 6.9) {
cvss_score_badge = 'warning';
}
content = `<div class="row mt-3">
<div class="col-sm-3">
<div class="nav flex-column nav-pills nav-pills-tab" id="v-pills-tab" role="tablist" aria-orientation="vertical">
<a class="nav-link active show mb-1" id="v-pills-cve-details-tab" data-bs-toggle="pill" href="#v-pills-cve-details" role="tab" aria-controls="v-pills-cve-details-tab" aria-selected="true">CVE Details</a>
<a class="nav-link mb-1" id="v-pills-affected-products-tab" data-bs-toggle="pill" href="#v-pills-affected-products" role="tab" aria-controls="v-pills-affected-products-tab" aria-selected="true">Affected Products</a>
<a class="nav-link mb-1" id="v-pills-affected-versions-tab" data-bs-toggle="pill" href="#v-pills-affected-versions" role="tab" aria-controls="v-pills-affected-versions-tab" aria-selected="true">Affected Versions</a>
<a class="nav-link mb-1" id="v-pills-cve-references-tab" data-bs-toggle="pill" href="#v-pills-cve-references" role="tab" aria-controls="v-pills-cve-references-tab" aria-selected="true">References</a>
</div>
</div>
<div class="col-sm-9">
<div class="tab-content pt-0">`;
content += `
<div class="tab-pane fade active show" id="v-pills-cve-details" role="tabpanel" aria-labelledby="v-pills-cve-details-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<h4 class="header-title">${cve_id}</h4>
<div class="alert alert-warning" role="alert">
${response.result.summary}
</div>
<span class="badge badge-soft-primary">Assigner: ${response.result.assigner}</span>
<span class="badge badge-outline-primary">CVSS Vector: ${response.result['cvss-vector']}</span>
<table class="domain_details_table table table-hover table-borderless">
<tr style="display: none">
<th> </th>
<th> </th>
</tr>
<tr>
<td>CVSS Score</td>
<td><span class="badge badge-soft-${cvss_score_badge}">${response.result.cvss ? response.result.cvss: "-"}</span></td>
</tr>
<tr>
<td>Confidentiality Impact</td>
<td>${response.result.impact.confidentiality ? response.result.impact.confidentiality: "N/A"}</td>
</tr>
<tr>
<td>Integrity Impact</td>
<td>${response.result.impact.integrity ? response.result.impact.integrity: "N/A"}</td>
</tr>
<tr>
<td>Availability Impact</td>
<td>${response.result.impact.availability ? response.result.impact.availability: "N/A"}</td>
</tr>
<tr>
<td>Access Complexity</td>
<td>${response.result.access.complexity ? response.result.access.complexity: "N/A"}</td>
</tr>
<tr>
<td>Authentication</td>
<td>${response.result.access.authentication ? response.result.access.authentication: "N/A"}</td>
</tr>
<tr>
<td>CWE ID</td>
<td><span class="badge badge-outline-danger">${response.result.cwe ? response.result.cwe: "N/A"}</span></td>
</tr>
</table>
</div>
`;
content += `<div class="tab-pane fade" id="v-pills-cve-references" role="tabpanel" aria-labelledby="v-pills-cve-references-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var reference in response.result.references) {
content += `<li><a href="${response.result.references[reference]}" target="_blank">${response.result.references[reference]}</a></li>`;
}
content += `</ul></div>`;
content += `<div class="tab-pane fade" id="v-pills-affected-products" role="tabpanel" aria-labelledby="v-pills-affected-products-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var prod in response.result.vulnerable_product) {
content += `<li>${response.result.vulnerable_product[prod]}</li>`;
}
content += `</ul></div>`;
content += `<div class="tab-pane fade" id="v-pills-affected-versions" role="tabpanel" aria-labelledby="v-pills-affected-versions-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var conf in response.result.vulnerable_configuration) {
content += `<li>${response.result.vulnerable_configuration[conf]['id']}</li>`;
}
content += `</ul></div>`;
content += `</div></div></div>`;
$('#xl-modal-content').append(content);
$('#modal_xl_scroll_dialog').modal('show');
$("body").tooltip({
selector: '[data-toggle=tooltip]'
});
}
else{
swal.fire("Error!", response.message, "error", {
button: "Okay",
});
}
});
}
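// this function will fetch and render the most vulnerable targets/subdomains widget
// scan_id/target_id are optional filters, ignore_info excludes info severity findings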
function get_most_vulnerable_target(scan_id=null, target_id=null, ignore_info=false, limit=50){
$('#most_vulnerable_target_div').empty();
$('#most_vulnerable_spinner').append(`<div class="spinner-border text-primary m-2" role="status"></div>`);
var data = {};
if (scan_id) {
data['scan_history_id'] = scan_id;
}
else if (target_id) {
data['target_id'] = target_id;
}
data['ignore_info'] = ignore_info;
data['limit'] = limit;
fetch('/api/fetch/most_vulnerable/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(response) {
$('#most_vulnerable_spinner').empty();
if (response.status) {
$('#most_vulnerable_target_div').append(`
<table class="table table-borderless table-nowrap table-hover table-centered m-0">
<thead>
<tr>
<th style="width: 60%">Target</th>
<th style="width: 30%">Vulnerabilities Count</th>
</tr>
</thead>
<tbody id="most_vulnerable_target_tbody">
</tbody>
</table>
`);
for (var res in response.result) {
var targ_obj = response.result[res];
var tr = `<tr onclick="window.location='/scan/detail/vuln?domain=${targ_obj.name}';" style="cursor: pointer;">`;
if (scan_id || target_id) {
tr = `<tr onclick="window.location='/scan/detail/vuln?subdomain=${targ_obj.name}';" style="cursor: pointer;">`;
}
$('#most_vulnerable_target_tbody').append(`
${tr}
<td>
<h5 class="m-0 fw-normal">${targ_obj.name}</h5>
</td>
<td>
<span class="badge badge-outline-danger">${targ_obj.vuln_count} Vulnerabilities</span>
</td>
</tr>
`);
}
}
else{
$('#most_vulnerable_target_div').append(`
<div class="mt-4 alert alert-warning">
Could not find most vulnerable targets.
</br>
Once the vulnerability scan is performed, reNgine will identify the most vulnerable targets.</div>
`);
}
});
}
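// this function will fetch and render the most common vulnerabilities widget
// using the same optional scan_id/target_id/ignore_info/limit filters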
function get_most_common_vulnerability(scan_id=null, target_id=null, ignore_info=false, limit=50){
$('#most_common_vuln_div').empty();
$('#most_common_vuln_spinner').append(`<div class="spinner-border text-primary m-2" role="status"></div>`);
var data = {};
if (scan_id) {
data['scan_history_id'] = scan_id;
}
else if (target_id) {
data['target_id'] = target_id;
}
data['ignore_info'] = ignore_info;
data['limit'] = limit;
fetch('/api/fetch/most_common_vulnerability/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(response) {
$('#most_common_vuln_spinner').empty();
if (response.status) {
$('#most_common_vuln_div').append(`
<table class="table table-borderless table-nowrap table-hover table-centered m-0">
<thead>
<tr>
<th style="width: 60%">Vulnerability Name</th>
<th style="width: 20%">Count</th>
<th style="width: 20%">Severity</th>
</tr>
</thead>
<tbody id="most_common_vuln_tbody">
</tbody>
</table>
`);
for (var res in response.result) {
var vuln_obj = response.result[res];
var vuln_badge = '';
switch (vuln_obj.severity) {
case -1:
vuln_badge = get_severity_badge('Unknown');
break;
case 0:
vuln_badge = get_severity_badge('Info');
break;
case 1:
vuln_badge = get_severity_badge('Low');
break;
case 2:
vuln_badge = get_severity_badge('Medium');
break;
case 3:
vuln_badge = get_severity_badge('High');
break;
case 4:
vuln_badge = get_severity_badge('Critical');
break;
default:
vuln_badge = get_severity_badge('Unknown');
}
$('#most_common_vuln_tbody').append(`
<tr onclick="window.location='/scan/detail/vuln?vulnerability_name=${vuln_obj.name}';" style="cursor: pointer;">
<td>
<h5 class="m-0 fw-normal">${vuln_obj.name}</h5>
</td>
<td>
<span class="badge badge-outline-danger">${vuln_obj.count}</span>
</td>
<td>
${vuln_badge}
</td>
</tr>
`);
}
}
else{
$('#most_common_vuln_div').append(`
<div class="mt-4 alert alert-warning">
Could not find Most Common Vulnerabilities.
</br>
Once the vulnerability scan is performed, reNgine will identify the Most Common Vulnerabilities.</div>
`);
}
});
}
function highlight_search(search_keyword, content){
// this function will send the highlighted text from search keyword
var reg = new RegExp('('+search_keyword+')', 'gi');
return content.replace(reg, '<mark>$1</mark>');
}
function validURL(str) {
// checks for valid http url
var pattern = new RegExp('^(https?:\\/\\/)?'+ // protocol
'((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|'+ // domain name
'((\\d{1,3}\\.){3}\\d{1,3}))'+ // OR ip (v4) address
'(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*'+ // port and path
'(\\?[;&a-z\\d%_.~+=-]*)?'+ // query string
'(\\#[-a-z\\d_]*)?$','i'); // fragment locator
return !!pattern.test(str);
}
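// this function wires a "check all" checkbox (clickchk) to toggle all
// checkboxes that have the relChkbox class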
| function checkall(clickchk, relChkbox) {
var checker = $('#' + clickchk);
var multichk = $('.' + relChkbox);
checker.click(function() {
multichk.prop('checked', $(this).prop('checked'));
});
}
function multiCheck(tb_var) {
tb_var.on("change", ".chk-parent", function() {
var e = $(this).closest("table").find("td:first-child .child-chk"),
a = $(this).is(":checked");
$(e).each(function() {
a ? ($(this).prop("checked", !0), $(this).closest("tr").addClass("active")) : ($(this).prop("checked", !1), $(this).closest("tr").removeClass("active"))
})
}),
tb_var.on("change", "tbody tr .new-control", function() {
$(this).parents("tr").toggleClass("active")
})
}
function GetIEVersion() {
var sAgent = window.navigator.userAgent;
var Idx = sAgent.indexOf("MSIE");
// If IE, return version number.
if (Idx > 0) return parseInt(sAgent.substring(Idx + 5, sAgent.indexOf(".", Idx)));
// If IE 11 then look for Updated user agent string.
else if (!!navigator.userAgent.match(/Trident\/7\./)) return 11;
else return 0; //It is not IE
}
function truncate(str, n) {
return (str.length > n) ? str.substr(0, n - 1) + '…' : str;
};
function return_str_if_not_null(val) {
return val ? val : '';
}
// separate hostname and url
// Referenced from https://stackoverflow.com/questions/736513/how-do-i-parse-a-url-into-hostname-and-path-in-javascript
function getParsedURL(url) {
var parser = new URL(url);
return parser.pathname + parser.search;
};
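// this function reads a cookie value by name, mainly used for the CSRF token
// example: getCookie("csrftoken")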
function getCookie(name) {
var cookieValue = null;
if (document.cookie && document.cookie !== '') {
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = jQuery.trim(cookies[i]);
// Does this cookie string begin with the name we want?
if (cookie.substring(0, name.length + 1) === (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
// Source: https://portswigger.net/web-security/cross-site-scripting/preventing#encode-data-on-output
function htmlEncode(str) {
return String(str).replace(/[^\w. ]/gi, function(c) {
return '&#' + c.charCodeAt(0) + ';';
});
}
// Source: https://portswigger.net/web-security/cross-site-scripting/preventing#encode-data-on-output
function jsEscape(str) {
return String(str).replace(/[^\w. ]/gi, function(c) {
return '\\u' + ('0000' + c.charCodeAt(0).toString(16)).slice(-4);
});
}
function deleteScheduledScan(id) {
const delAPI = "../delete/scheduled_task/" + id;
swal.queue([{
title: 'Are you sure you want to delete this?',
text: "This action can not be undone.",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the scheduled task!'
})
})
}
}])
}
function change_scheduled_task_status(id, checkbox) {
if (checkbox.checked) {
text_msg = 'Schedule Scan Started';
} else {
text_msg = 'Schedule Scan Stopped';
}
Snackbar.show({
text: text_msg,
pos: 'top-right',
duration: 2500
});
const taskStatusApi = "../toggle/scheduled_task/" + id;
return fetch(taskStatusApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
}
function change_vuln_status(id) {
const vulnStatusApi = "../toggle/vuln_status/" + id;
return fetch(vulnStatusApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
})
}
// splits really long strings into multiple lines
// Source: https://stackoverflow.com/a/52395960
function split_into_lines(str, maxWidth) {
const newLineStr = "</br>";
done = false;
res = '';
do {
found = false;
// Inserts new line at first whitespace of the line
for (i = maxWidth - 1; i >= 0; i--) {
if (test_white_space(str.charAt(i))) {
res = res + [str.slice(0, i), newLineStr].join('');
str = str.slice(i + 1);
found = true;
break;
}
}
// Inserts new line at maxWidth position, the word is too long to wrap
if (!found) {
res += [str.slice(0, maxWidth), newLineStr].join('');
str = str.slice(maxWidth);
}
if (str.length < maxWidth) done = true;
} while (!done);
return res + str;
}
function test_white_space(x) {
const white = new RegExp(/^\s$/);
return white.test(x.charAt(0));
};
// span values function will separate the values by comma and put a badge around each value
function parse_comma_values_into_span(data, color, outline = null) {
if (data) {
var badge = `<span class='badge badge-soft-` + color + ` m-1'>`;
var data_with_span = "";
data.split(/\s*,\s*/).forEach(function(split_vals) {
data_with_span += badge + split_vals + "</span>";
});
return data_with_span;
}
return '';
}
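// this function returns a severity badge span for Info/Low/Medium/High/Critical/Unknown,
// or an empty string for anything else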
function get_severity_badge(severity) {
switch (severity) {
case 'Info':
return "<span class='badge badge-soft-primary'> INFO </span>";
break;
case 'Low':
return "<span class='badge badge-low'> LOW </span>";
break;
case 'Medium':
return "<span class='badge badge-soft-warning'> MEDIUM </span>";
break;
case 'High':
return "<span class='badge badge-soft-danger'> HIGH </span>";
break;
case 'Critical':
return "<span class='badge badge-critical'> CRITICAL </span>";
break;
case 'Unknown':
return "<span class='badge badge-soft-info'> UNKNOWN </span>";
default:
return "";
}
}
// Source: https://stackoverflow.com/a/54733055
function typingEffect(words, id, i) {
let word = words[i].split("");
var loopTyping = function() {
if (word.length > 0) {
let elem = document.getElementById(id);
elem.setAttribute('placeholder', elem.getAttribute('placeholder') + word.shift());
} else {
deletingEffect(words, id, i);
return false;
};
timer = setTimeout(loopTyping, 150);
};
loopTyping();
};
function deletingEffect(words, id, i) {
let word = words[i].split("");
var loopDeleting = function() {
if (word.length > 0) {
word.pop();
document.getElementById(id).setAttribute('placeholder', word.join(""));
} else {
if (words.length > (i + 1)) {
i++;
} else {
i = 0;
};
typingEffect(words, id, i);
return false;
};
timer = setTimeout(loopDeleting, 90);
};
loopDeleting();
};
function fullScreenDiv(id, btn) {
let fullscreen = document.querySelector(id);
let button = document.querySelector(btn);
document.fullscreenElement && document.exitFullscreen() || document.querySelector(id).requestFullscreen()
fullscreen.setAttribute("style", "overflow:auto");
}
function get_randid() {
return '_' + Math.random().toString(36).substr(2, 9);
}
function delete_all_scan_results() {
const delAPI = "../scan/delete/scan_results/";
swal.queue([{
title: 'Are you sure you want to delete all scan results?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete scan results!'
})
})
}
}])
}
function delete_all_screenshots() {
const delAPI = "../scan/delete/screenshots/";
swal.queue([{
title: 'Are you sure you want to delete all Screenshots?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete screenshots!'
})
})
}
}])
}
function load_image_from_url(src, append_to_id) {
img = document.createElement('img');
img.src = src;
img.style.width = '100%';
document.getElementById(append_to_id).appendChild(img);
}
function setTooltip(btn, message) {
hide_all_tooltips();
const instance = tippy(document.querySelector(btn));
instance.setContent(message);
instance.show();
setTimeout(function() {
instance.hide();
}, 500);
}
function hide_all_tooltips() {
$(".tooltip").tooltip("hide");
}
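// this function returns the response time as colored text
// (< 0.5s success, 0.5-1s warning, >= 1s danger), or an empty string if not set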
function get_response_time_text(response_time) {
if (response_time) {
var text_color = 'danger';
if (response_time < 0.5) {
text_color = 'success'
} else if (response_time >= 0.5 && response_time < 1) {
text_color = 'warning'
}
return `<span class="text-${text_color}">${response_time.toFixed(4)}s</span>`;
}
return '';
}
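// this function renders technology names as clickable badges that open
// the technology details modal, scoped to either a scan or a domain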
function parse_technology(data, color, scan_id = null, domain_id=null) {
var badge = `<span data-toggle="tooltip" title="Technology" class='badge-link badge badge-soft-` + color + ` mt-1 me-1'`;
var data_with_span = "";
for (var key in data) {
if (scan_id) {
data_with_span += badge + ` onclick="get_tech_details('${data[key]['name']}', ${scan_id}, domain_id=null)">` + data[key]['name'] + "</span>";
} else if (domain_id) {
data_with_span += badge + ` onclick="get_tech_details('${data[key]['name']}', scan_id=null, domain_id=domain_id)">` + data[key]['name'] + "</span>";
}
}
return data_with_span;
}
// span values function will separate the values by comma and put a badge around each value
function parse_ip(data, cdn) {
if (cdn) {
var badge = `<span class='badge badge-soft-warning m-1 bs-tooltip' title="CDN IP Address">`;
} else {
var badge = `<span class='badge badge-soft-primary m-1'>`;
}
var data_with_span = "";
data.split(/\s*,\s*/).forEach(function(split_vals) {
data_with_span += badge + split_vals + "</span>";
});
return data_with_span;
}
//to remove the image element if there is no screenshot captured
function removeImageElement(element) {
element.parentElement.remove();
}
// https://stackoverflow.com/a/18197341/9338140
function download(filename, text) {
var element = document.createElement('a');
element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
element.setAttribute('download', filename);
element.style.display = 'none';
document.body.appendChild(element);
element.click();
document.body.removeChild(element);
}
function vuln_status_change(checkbox, id) {
if (checkbox.checked) {
checkbox.parentNode.parentNode.parentNode.className = "table-success text-strike";
} else {
checkbox.parentNode.parentNode.parentNode.classList.remove("table-success");
checkbox.parentNode.parentNode.parentNode.classList.remove("text-strike");
}
change_vuln_status(id);
}
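// this function reports a vulnerability to Hackerone via the vulnerability report API
// after confirmation, and shows the submission result; low severity findings get an
// extra warning message before reporting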
function report_hackerone(vulnerability_id, severity) {
message = ""
if (severity == 'Info' || severity == 'Low' || severity == 'Medium') {
message = "We do not recommended sending this vulnerability report to hackerone due to the severity, do you still want to report this?"
} else {
message = "This vulnerability report will be sent to Hackerone.";
}
const vulnerability_report_api = "../../api/vulnerability/report/?vulnerability_id=" + vulnerability_id;
swal.queue([{
title: 'Reporting vulnerability to hackerone',
text: message,
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Report',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(vulnerability_report_api, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data.status)
if (data.status == 111) {
swal.insertQueueStep({
icon: 'error',
title: 'Target does not have a team_handle to send the report to.'
})
} else if (data.status == 201) {
swal.insertQueueStep({
icon: 'success',
title: 'Vulnerability report successfully submitted to hackerone.'
})
} else if (data.status == 400) {
swal.insertQueueStep({
icon: 'error',
title: 'Invalid Report.'
})
} else if (data.status == 401) {
swal.insertQueueStep({
icon: 'error',
title: 'Hackerone authentication failed.'
})
} else if (data.status == 403) {
swal.insertQueueStep({
icon: 'error',
title: 'API Key forbidden by Hackerone.'
})
} else if (data.status == 423) {
swal.insertQueueStep({
icon: 'error',
title: 'Too many requests.'
})
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to send the vulnerability report to Hackerone, check your target team_handle or Hackerone configurations!'
})
})
}
}])
}
function get_interesting_subdomains(target_id, scan_history_id) {
if (target_id) {
url = `/api/listInterestingSubdomains/?target_id=${target_id}&format=datatables`;
non_orderable_targets = [0, 1, 2, 3];
} else if (scan_history_id) {
url = `/api/listInterestingSubdomains/?scan_id=${scan_history_id}&format=datatables`;
non_orderable_targets = [];
}
var interesting_subdomain_table = $('#interesting_subdomains').DataTable({
"drawCallback": function(settings, start, end, max, total, pre) {
// if no interesting subdomains are found, hide the datatable and show no interesting subdomains found badge
if (this.fnSettings().fnRecordsTotal() == 0) {
$('#interesting_subdomain_div').empty();
// $('#interesting_subdomain_div').append(`<div class="card-header bg-primary py-3 text-white">
// <div class="card-widgets">
// <a href="#" data-toggle="remove"><i class="mdi mdi-close"></i></a>
// </div>
// <h5 class="card-title mb-0 text-white"><i class="mdi mdi-fire-alert me-2"></i>Interesting subdomains could not be identified</h5>
// </div>
// <div id="cardCollpase4" class="collapse show">
// <div class="card-body">
// reNgine could not identify any interesting subdomains. You can customize interesting subdomain keywords <a href="/scanEngine/interesting/lookup/">from here</a> and this section would be automatically updated.
// </div>
// </div>`);
} else {
// show nav bar
$('.interesting-tab-show').removeAttr('style');
$('#interesting_subdomain_alert_count').html(`${this.fnSettings().fnRecordsTotal()} Interesting Subdomains`)
$('#interesting_subdomain_count_badge').empty();
$('#interesting_subdomain_count_badge').html(`<span class="badge badge-soft-primary me-1">${this.fnSettings().fnRecordsTotal()}</span>`);
}
},
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"processing": true,
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"destroy": true,
"bInfo": false,
"stripeClasses": [],
'serverSide': true,
"ajax": url,
"order": [
[3, "desc"]
],
"lengthMenu": [5, 10, 20, 50, 100],
"pageLength": 10,
"columns": [{
'data': 'name'
}, {
'data': 'page_title'
}, {
'data': 'http_status'
}, {
'data': 'content_length'
}, {
'data': 'http_url'
}, {
'data': 'technologies'
}, ],
"columnDefs": [{
"orderable": false,
"targets": non_orderable_targets
}, {
"targets": [4],
"visible": false,
"searchable": false,
}, {
"targets": [5],
"visible": false,
"searchable": true,
}, {
"className": "text-center",
"targets": [2]
}, {
"render": function(data, type, row) {
tech_badge = '';
if (row['technologies']) {
// tech_badge = `</br>` + parse_technology(row['technologies'], "primary", outline=true, scan_id=null);
}
if (row['http_url']) {
return `<a href="` + row['http_url'] + `" class="text-primary" target="_blank">` + data + `</a>` + tech_badge;
}
return `<a href="https://` + data + `" class="text-primary" target="_blank">` + data + `</a>` + tech_badge;
},
"targets": 0
}, {
"render": function(data, type, row) {
// display badge based on http status
// green for http status 2XX, orange for 3XX and red for everything else
if (data >= 200 && data < 300) {
return "<span class='badge badge-pills badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-pills badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return `<span class='badge badge-pills badge-soft-danger'>` + data + `</span>`;
},
"targets": 2,
}, ],
});
}
function get_interesting_endpoint(target_id, scan_history_id) {
var non_orderable_targets = [];
if (target_id) {
url = `/api/listInterestingEndpoints/?target_id=${target_id}&format=datatables`;
// non_orderable_targets = [0, 1, 2, 3];
} else if (scan_history_id) {
url = `/api/listInterestingEndpoints/?scan_id=${scan_history_id}&format=datatables`;
// non_orderable_targets = [0, 1, 2, 3];
}
$('#interesting_endpoints').DataTable({
"drawCallback": function(settings, start, end, max, total, pre) {
if (this.fnSettings().fnRecordsTotal() == 0) {
$('#interesting_endpoint_div').remove();
} else {
$('.interesting-tab-show').removeAttr('style');
$('#interesting_endpoint_alert_count').html(`, ${this.fnSettings().fnRecordsTotal()} Interesting Endpoints`)
$('#interesting_endpoint_count_badge').empty();
$('#interesting_endpoint_count_badge').html(`<span class="badge badge-soft-primary me-1">${this.fnSettings().fnRecordsTotal()}</span>`);
}
},
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"processing": true,
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
'serverSide': true,
"destroy": true,
"bInfo": false,
"ajax": url,
"order": [
[3, "desc"]
],
"lengthMenu": [5, 10, 20, 50, 100],
"pageLength": 10,
"columns": [{
'data': 'http_url'
}, {
'data': 'page_title'
}, {
'data': 'http_status'
}, {
'data': 'content_length'
}, ],
"columnDefs": [{
"orderable": false,
"targets": non_orderable_targets
}, {
"className": "text-center",
"targets": [2]
}, {
"render": function(data, type, row) {
var url = split_into_lines(data, 70);
return "<a href='" + data + "' target='_blank' class='text-primary'>" + url + "</a>";
},
"targets": 0
}, {
"render": function(data, type, row) {
// display badge based on http status
// green for http status 2XX, orange for 3XX and red for everything else
if (data >= 200 && data < 300) {
return "<span class='badge badge-pills badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-pills badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return `<span class='badge badge-pills badge-soft-danger'>` + data + `</span>`;
},
"targets": 2,
}, ],
});
}
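// this function will fetch subdomains marked as important and render them
// in the widget along with a count badge and copy-to-clipboard links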
function get_important_subdomains(target_id, scan_history_id) {
var url = `/api/querySubdomains/?only_important&no_lookup_interesting&format=json`;
if (target_id) {
url += `&target_id=${target_id}`;
} else if (scan_history_id) {
url += `&scan_id=${scan_history_id}`;
}
$.getJSON(url, function(data) {
$('#important-count').empty();
$('#important-subdomains-list').empty();
if (data['subdomains'].length > 0) {
$('#important-count').html(`<span class="badge badge-soft-primary ms-1 me-1">${data['subdomains'].length}</span>`);
for (var val in data['subdomains']) {
subdomain = data['subdomains'][val];
div_id = 'important_' + subdomain['id'];
$("#important-subdomains-list").append(`
<div id="${div_id}">
<p>
<span id="subdomain_${subdomain['id']}"> ${subdomain['name']}
<span class="">
<a href="javascript:;" data-clipboard-action="copy" class="m-1 float-end badge-link text-info copyable text-primary" data-toggle="tooltip" data-placement="top" title="Copy Subdomain!" data-clipboard-target="#subdomain_${subdomain['id']}">
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-copy"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg></span>
</a>
</span>
</p>
</div>
<hr />
`);
}
} else {
$('#important-count').html(`<span class="badge badge-soft-primary ms-1 me-1">0</span>`);
$('#important-subdomains-list').append(`<p>No subdomains marked as important!</p>`);
}
$('.bs-tooltip').tooltip();
});
}
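// this function toggles the important flag on a subdomain using the REST API
// and updates the row highlight and tooltip accordingly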
function mark_important_subdomain(row, subdomain_id) {
if (row) {
parentNode = row.parentNode.parentNode.parentNode.parentNode;
if (parentNode.classList.contains('table-danger')) {
parentNode.classList.remove('table-danger');
} else {
parentNode.className = "table-danger";
}
}
var data = {'subdomain_id': subdomain_id}
const subdomainImpApi = "/api/toggle/subdomain/important/";
if ($("#important_subdomain_" + subdomain_id).length == 0) {
$("#subdomain-" + subdomain_id).prepend(`<span id="important_subdomain_${subdomain_id}"></span>`);
setTooltip("#subdomain-" + subdomain_id, 'Marked Important!');
} else {
$("#important_subdomain_" + subdomain_id).remove();
setTooltip("#subdomain-" + subdomain_id, 'Marked Un-Important!');
}
return fetch(subdomainImpApi, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
});
}
function delete_scan(id) {
const delAPI = "../delete/scan/" + id;
swal.queue([{
title: 'Are you sure you want to delete this scan history?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken")
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
return location.reload();
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the scan history!'
})
})
}
}]);
}
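// this function aborts a running scan or subscan after confirmation
// pass either scan_id or subscan_id; reload_scan_bar/reload_location control the UI refresh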
function stop_scan(scan_id=null, subscan_id=null, reload_scan_bar=true, reload_location=false) {
const stopAPI = "/api/action/stop/scan/";
if (scan_id) {
var data = {'scan_id': scan_id}
}
else if (subscan_id) {
var data = {'subscan_id': subscan_id}
}
swal.queue([{
title: 'Are you sure you want to stop this scan?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Stop',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(stopAPI, {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
// TODO Look for better way
if (data.status) {
Snackbar.show({
text: 'Scan Successfully Aborted.',
pos: 'top-right',
duration: 1500
});
if (reload_scan_bar) {
getScanStatusSidebar();
}
if (reload_location) {
window.location.reload();
}
} else {
Snackbar.show({
text: 'Oops! Could not abort the scan. ' + data.message,
pos: 'top-right',
duration: 1500
});
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to stop the scan'
})
})
}
}])
}
function extractContent(s) {
var span = document.createElement('span');
span.innerHTML = s;
return span.textContent || span.innerText;
};
function delete_datatable_rows(table_id, rows_id, show_snackbar = true, snackbar_title) {
// this function will delete the datatables rows after actions such as delete
// table_id => datatable_id with #
// rows_id: list/array => list of all numerical ids to delete, to maintain consistency
// rows id will always follow this pattern: datatable_id_row_n
// show_snackbar = bool => whether to show snackbar or not!
// snackbar_title: str => snackbar title if show_snackbar = True
var table = $(table_id).DataTable();
for (var row in rows_id) {
table.row(table_id + '_row_' + rows_id[row]).remove().draw();
}
	if (show_snackbar) {
		Snackbar.show({
			text: snackbar_title,
			pos: 'top-right',
			duration: 1500,
			actionTextColor: '#fff',
			backgroundColor: '#e7515a',
		});
	}
}
function delete_subscan(subscan_id) {
// This function will delete the subscans using the REST API
// Supported method: POST
const delAPI = "/api/action/rows/delete/";
var data = {
'type': 'subscan',
'rows': [subscan_id]
}
swal.queue([{
title: 'Are you sure you want to delete this subscan?',
text: "You won't be able to revert this!",
icon: 'warning',
showCancelButton: true,
confirmButtonText: 'Delete',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(delAPI, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": "application/json"
},
body: JSON.stringify(data)
}).then(function(response) {
return response.json();
}).then(function(response) {
if (response['status']) {
delete_datatable_rows('#subscan_history_table', [subscan_id], show_snackbar = true, '1 Subscan Deleted!')
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to delete the subscan!'
})
})
}
}])
}
function show_subscan_results(subscan_id) {
// This function will popup a modal and show the subscan results
// modal being used is from base
var api_url = '/api/fetch/results/subscan/?format=json';
var data = {
'subscan_id': subscan_id
};
Swal.fire({
title: 'Fetching Results...'
});
swal.showLoading();
fetch(api_url, {
method: 'POST',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
}).then(response => response.json()).then(function(response) {
console.log(response);
swal.close();
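// Subscan status codes as interpreted below: -1 = not yet started, 0 = failed, 1 = in progress,
// 2 = successful, 3 = aborted; any other value is rendered as "Unknown".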
if (response['subscan']['status'] == -1) {
swal.fire("Error!", "Scan has not yet started! Please wait for other scans to complete...", "warning", {
button: "Okay",
});
return;
} else if (response['subscan']['status'] == 1) {
swal.fire("Error!", "Scan is in progress! Please come back later...", "warning", {
button: "Okay",
});
return;
}
$('#xl-modal-title').empty();
$('#xl-modal-content').empty();
$('#xl-modal-footer').empty();
var task_name = '';
if (response['subscan']['task'] == 'port_scan') {
task_name = 'Port Scan';
} else if (response['subscan']['task'] == 'vulnerability_scan') {
task_name = 'Vulnerability Scan';
} else if (response['subscan']['task'] == 'fetch_url') {
task_name = 'EndPoint Gathering';
} else if (response['subscan']['task'] == 'dir_file_fuzz') {
task_name = 'Directory and Files Fuzzing';
}
$('#xl-modal_title').html(`${task_name} Results on ${response['subscan']['subdomain_name']}`);
var scan_status = '';
var badge_color = 'danger';
if (response['subscan']['status'] == 0) {
scan_status = 'Failed';
} else if (response['subscan']['status'] == 2) {
scan_status = 'Successful';
badge_color = 'success';
} else if (response['subscan']['status'] == 3) {
scan_status = 'Aborted';
} else {
scan_status = 'Unknown';
}
$('#xl-modal-content').append(`<div>Scan Status: <span class="badge bg-${badge_color}">${scan_status}</span></div>`);
console.log(response);
$('#xl-modal-content').append(`<div class="mt-1">Engine Used: <span class="badge bg-primary">${htmlEncode(response['subscan']['engine'])}</span></div>`);
if (response['result'].length > 0) {
if (response['subscan']['task'] == 'port_scan') {
$('#xl-modal-content').append(`<div id="port_results_li"></div>`);
for (var ip in response['result']) {
var ip_addr = response['result'][ip]['address'];
var id_name = `ip_${ip_addr}`;
$('#port_results_li').append(`<h5>IP Address: ${ip_addr}<br><br>${response['result'][ip]['ports'].length} Ports Open</h5>`);
$('#port_results_li').append(`<ul id="${id_name}"></ul>`);
for (var port_obj in response['result'][ip]['ports']) {
var port = response['result'][ip]['ports'][port_obj];
var port_color = 'primary';
if (port["is_uncommon"]) {
port_color = 'danger';
}
// getElementById is used because the IP-based id contains dots, which break a jQuery '#' selector;
// this also keeps each port scoped to the correct IP's list instead of every list rendered so far
$(document.getElementById(id_name)).append(`<li><span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['number']}</span>/<span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['service_name']}</span>/<span class="ms-1 mt-1 me-1 badge badge-soft-${port_color}">${port['description']}</span></li>`);
}
}
$('#xl-modal-footer').append(`<span class="text-danger">* Uncommon Ports</span>`);
} else if (response['subscan']['task'] == 'vulnerability_scan') {
render_vulnerability_in_xl_modal(vuln_count = response['result'].length, subdomain_name = response['subscan']['subdomain_name'], result = response['result']);
} else if (response['subscan']['task'] == 'fetch_url') {
render_endpoint_in_xlmodal(endpoint_count = response['result'].length, subdomain_name = response['subscan']['subdomain_name'], result = response['result']);
} else if (response['subscan']['task'] == 'dir_file_fuzz') {
if (response['result'][0]['directory_files'].length == 0) {
$('#xl-modal-content').append(`
<div class="alert alert-info mt-2" role="alert">
<i class="mdi mdi-alert-circle-outline me-2"></i> ${task_name} could not fetch any results.
</div>
`);
} else {
render_directories_in_xl_modal(response['result'][0]['directory_files'].length, response['subscan']['subdomain_name'], response['result'][0]['directory_files']);
}
}
} else {
$('#xl-modal-content').append(`
<div class="alert alert-info mt-2" role="alert">
<i class="mdi mdi-alert-circle-outline me-2"></i> ${task_name} could not fetch any results.
</div>
`);
}
$('#modal_xl_scroll_dialog').modal('show');
$("body").tooltip({
selector: '[data-toggle=tooltip]'
});
});
}
function get_http_status_badge(data) {
if (data >= 200 && data < 300) {
return "<span class='badge badge-soft-success'>" + data + "</span>";
} else if (data >= 300 && data < 400) {
return "<span class='badge badge-soft-warning'>" + data + "</span>";
} else if (data == 0) {
// datatable throws error when no data is returned
return "";
}
return "<span class='badge badge-soft-danger'>" + data + "</span>";
}
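// Example: get_http_status_badge(200) returns a green (success) badge, 302 a yellow (warning)
// badge, 404 a red (danger) badge, and 0 an empty string so DataTables does not choke on rows
// without a status.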
function render_endpoint_in_xlmodal(endpoint_count, subdomain_name, result) {
// This function renders endpoints datatable in xl modal
// Used in Subscan results and subdomain to endpoints modal
$('#xl-modal-content').append(`<h5> ${endpoint_count} Endpoints Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`
<div class="">
<table id="endpoint-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>HTTP URL</th>
<th>Status</th>
<th>Page Title</th>
<th>Tags</th>
<th>Content Type</th>
<th>Content Length</th>
<th>Response Time</th>
</tr>
</thead>
<tbody id="endpoint_tbody">
</tbody>
</table>
</div>
`);
$('#endpoint_tbody').empty();
for (var endpoint_obj in result) {
var endpoint = result[endpoint_obj];
// the wrapping <div> is always opened and closed so the markup stays valid even when no technologies were detected
var tech_badge = '<div>';
var web_server = '';
if (endpoint['technologies']) {
tech_badge += parse_technology(endpoint['technologies'], "primary", outline = true);
}
if (endpoint['webserver']) {
web_server = `<span class='m-1 badge badge-soft-info' data-toggle="tooltip" data-placement="top" title="Web Server">${endpoint['webserver']}</span>`;
}
var url = split_into_lines(endpoint['http_url'], 70);
tech_badge += web_server + '</div>';
var http_url_td = "<a href='" + endpoint['http_url'] + `' target='_blank' class='text-primary'>` + url + "</a>" + tech_badge;
$('#endpoint_tbody').append(`
<tr>
<td>${http_url_td}</td>
<td>${get_http_status_badge(endpoint['http_status'])}</td>
<td>${return_str_if_not_null(endpoint['page_title'])}</td>
<td>${parse_comma_values_into_span(endpoint['matched_gf_patterns'], "danger", outline=true)}</td>
<td>${return_str_if_not_null(endpoint['content_type'])}</td>
<td>${return_str_if_not_null(endpoint['content_length'])}</td>
<td>${get_response_time_text(endpoint['response_time'])}</td>
</tr>
`);
}
$("#endpoint-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[5, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded")
}
});
}
function render_vulnerability_in_xl_modal(vuln_count, subdomain_name, result) {
// This function will render the vulnerability datatable in xl modal
$('#xl-modal-content').append(`<h5> ${vuln_count} Vulnerabilities Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`<ol id="vuln_results_ol" class="list-group list-group-numbered"></ol>`);
$('#xl-modal-content').append(`
<div class="">
<table id="vulnerability-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>Type</th>
<th>Title</th>
<th class="text-center">Severity</th>
<th>CVSS Score</th>
<th>CVE/CWE</th>
<th>Vulnerable URL</th>
<th>Description</th>
<th class="text-center dt-no-sorting">Action</th>
</tr>
</thead>
<tbody id="vuln_tbody">
</tbody>
</table>
</div>
`);
$('#vuln_tbody').empty();
for (var vuln in result) {
var vuln_obj = result[vuln];
var vuln_type = vuln_obj['type'] ? `<span class="badge badge-soft-primary"> ${vuln_obj['type'].toUpperCase()} </span>` : '';
var tags = '';
var cvss_metrics_badge = '';
switch (vuln_obj['severity']) {
case 'Info':
color = 'primary'
badge_color = 'soft-primary'
break;
case 'Low':
color = 'low'
badge_color = 'soft-warning'
break;
case 'Medium':
color = 'warning'
badge_color = 'soft-warning'
break;
case 'High':
color = 'danger'
badge_color = 'soft-danger'
break;
case 'Critical':
color = 'critical'
badge_color = 'critical'
break;
default:
// fall back to the Info styling when the severity value is unrecognised
color = 'primary'
badge_color = 'soft-primary'
}
if (vuln_obj['tags']) {
tags = '<div>';
vuln_obj['tags'].forEach(tag => {
tags += `<span class="badge badge-${badge_color} me-1 mb-1" data-toggle="tooltip" data-placement="top" title="Tags">${tag.name}</span>`;
});
tags += '</div>';
}
if (vuln_obj['cvss_metrics']) {
cvss_metrics_badge = `<div><span class="badge badge-outline-primary my-1" data-toggle="tooltip" data-placement="top" title="CVSS Metrics">${vuln_obj['cvss_metrics']}</span></div>`;
}
var vuln_title = `<b class="text-${color}">` + vuln_obj['name'] + `</b>` + cvss_metrics_badge + tags;
var badge = 'danger';
var cvss_score = '';
if (vuln_obj['cvss_score']) {
if (vuln_obj['cvss_score'] > 0.1 && vuln_obj['cvss_score'] <= 3.9) {
badge = 'info';
} else if (vuln_obj['cvss_score'] > 3.9 && vuln_obj['cvss_score'] <= 6.9) {
badge = 'warning';
} else if (vuln_obj['cvss_score'] > 6.9 && vuln_obj['cvss_score'] <= 8.9) {
badge = 'danger';
}
cvss_score = `<span class="badge badge-outline-${badge}" data-toggle="tooltip" data-placement="top" title="CVSS Score">${vuln_obj['cvss_score']}</span>`;
}
var cve_cwe_badge = '<div>';
if (vuln_obj['cve_ids']) {
vuln_obj['cve_ids'].forEach(cve => {
cve_cwe_badge += `<a href="https://google.com/search?q=${cve.name.toUpperCase()}" target="_blank" class="badge badge-outline-primary me-1 mt-1" data-toggle="tooltip" data-placement="top" title="CVE ID">${cve.name.toUpperCase()}</a>`;
});
}
if (vuln_obj['cwe_ids']) {
vuln_obj['cwe_ids'].forEach(cwe => {
cve_cwe_badge += `<a href="https://google.com/search?q=${cwe.name.toUpperCase()}" target="_blank" class="badge badge-outline-primary me-1 mt-1" data-toggle="tooltip" data-placement="top" title="CWE ID">${cwe.name.toUpperCase()}</a>`;
});
}
cve_cwe_badge += '</div>';
var http_url = vuln_obj['http_url'].includes('http') ? "<a href='" + htmlEncode(vuln_obj['http_url']) + "' target='_blank' class='text-danger'>" + htmlEncode(vuln_obj['http_url']) + "</a>" : vuln_obj['http_url'];
var description = vuln_obj['description'] ? `<div>${split_into_lines(vuln_obj['description'], 30)}</div>` : '';
// show extracted results, and show matcher names, matcher names can be in badges
if (vuln_obj['matcher_name']) {
description += `<span class="badge badge-soft-primary" data-toggle="tooltip" data-placement="top" title="Matcher Name">${vuln_obj['matcher_name']}</span>`;
}
if (vuln_obj['extracted_results'] && vuln_obj['extracted_results'].length > 0) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#results_${vuln_obj['id']}" aria-expanded="false" aria-controls="results_${vuln_obj['id']}">Extracted Results <i class="fe-chevron-down"></i></a>`;
description += `<div class="collapse" id="results_${vuln_obj['id']}"><ul>`;
vuln_obj['extracted_results'].forEach(results => {
description += `<li>${results}</li>`;
});
description += '</ul></div>';
}
if (vuln_obj['references'] && vuln_obj['references'].length > 0) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#references_${vuln_obj['id']}" aria-expanded="false" aria-controls="references_${vuln_obj['id']}">References <i class="fe-chevron-down"></i></a>`;
description += `<div class="collapse" id="references_${vuln_obj['id']}"><ul>`;
vuln_obj['references'].forEach(reference => {
description += `<li><a href="${reference.url}" target="_blank">${reference.url}</a></li>`;
});
description += '</ul></div>';
}
if (vuln_obj['curl_command']) {
description += `<br><a class="mt-2" data-bs-toggle="collapse" href="#curl_command_${vuln_obj['id']}" aria-expanded="false" aria-controls="curl_command_${vuln_obj['id']}">CURL command <i class="fe-terminal"></i></a>`;
description += `<div class="collapse" id="curl_command_${vuln_obj['id']}"><ul>`;
description += `<li><code>${split_into_lines(htmlEncode(vuln_obj['curl_command']), 30)}</code></li>`;
description += '</ul></div>';
}
var action_icon = vuln_obj['hackerone_report_id'] ? '' : `
<div class="btn-group mb-2 dropstart">
<a href="#" class="text-dark dropdown-toggle float-end" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-more-horizontal"><circle cx="12" cy="12" r="1"></circle><circle cx="19" cy="12" r="1"></circle><circle cx="5" cy="12" r="1"></circle></svg>
</a>
<div class="dropdown-menu" style="">
<a class="dropdown-item" href="javascript:report_hackerone(${vuln_obj['id']}, '${vuln_obj['severity']}');">Report to Hackerone</a>
</div>
</div>`;
$('#vuln_tbody').append(`
<tr>
<td>${vuln_type}</td>
<td>${vuln_title}</td>
<td class="text-center">${get_severity_badge(vuln_obj['severity'])}</td>
<td class="text-center">${cvss_score}</td>
<td>${cve_cwe_badge}</td>
<td>${http_url}</td>
<td>${description}</td>
<td>${action_icon}</td>
</tr>
`);
}
$("#vulnerability-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[5, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded")
}
});
}
function render_directories_in_xl_modal(directory_count, subdomain_name, result) {
$('#xl-modal-content').append(`<h5> ${directory_count} Directories Discovered on subdomain ${subdomain_name}</h5>`);
$('#xl-modal-content').append(`
<div class="">
<table id="directory-modal-datatable" class="table dt-responsive nowrap w-100">
<thead>
<tr>
<th>Directory</th>
<th class="text-center">HTTP Status</th>
<th>Content Length</th>
<th>Lines</th>
<th>Words</th>
</tr>
</thead>
<tbody id="directory_tbody">
</tbody>
</table>
</div>
`);
$('#directory_tbody').empty();
for (var dir_obj in result) {
var dir = result[dir_obj];
$('#directory_tbody').append(`
<tr>
<td><a href="${dir.url}" target="_blank">${dir.name}</a></td>
<td class="text-center">${get_http_status_badge(dir.http_status)}</td>
<td>${dir.length}</td>
<td>${dir.lines}</td>
<td>${dir.words}</td>
</tr>
`);
}
var interesting_keywords_array = [];
var dir_modal_table = $("#directory-modal-datatable").DataTable({
"oLanguage": {
"oPaginate": {
"sPrevious": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-left"><line x1="19" y1="12" x2="5" y2="12"></line><polyline points="12 19 5 12 12 5"></polyline></svg>',
"sNext": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-arrow-right"><line x1="5" y1="12" x2="19" y2="12"></line><polyline points="12 5 19 12 12 19"></polyline></svg>'
},
"sInfo": "Showing page _PAGE_ of _PAGES_",
"sSearch": '<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>',
"sSearchPlaceholder": "Search...",
"sLengthMenu": "Results : _MENU_",
},
"dom": "<'dt--top-section'<'row'<'col-12 col-sm-6 d-flex justify-content-sm-start justify-content-center'f><'col-12 col-sm-6 d-flex justify-content-sm-end justify-content-center'l>>>" + "<'table-responsive'tr>" + "<'dt--bottom-section d-sm-flex justify-content-sm-between text-center'<'dt--pages-count mb-sm-0 mb-3'i><'dt--pagination'p>>",
"order": [
[2, "desc"]
],
drawCallback: function() {
$(".dataTables_paginate > .pagination").addClass("pagination-rounded");
}
});
// TODO: Find interesting dirs
// fetch("/api/listInterestingKeywords")
// .then(response => {
// return response.json();
// })
// .then(data => {
// interesting_keywords_array = data;
// dir_modal_table.rows().every(function(){
// console.log(this.data());
// });
// });
}
function get_and_render_subscan_history(subdomain_id, subdomain_name) {
// This function displays the subscan history in a modal for any particular subdomain
var data = {
'subdomain_id': subdomain_id
};
fetch('/api/listSubScans/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data);
if (data['status']) {
$('#modal_title').html('Subscan History for subdomain ' + subdomain_name);
$('#modal-content').empty();
$('#modal-content').append(`<div id="subscan_history_table"></div>`);
$('#subscan_history_table').empty();
for (var result in data['results']) {
var result_obj = data['results'][result];
var error_message = '';
var task_name = get_task_name(result_obj);
if (result_obj.status == 0) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Failed</span>';
error_message = `<br><span class="text-danger">Error: ${result_obj.error_message}</span>`;
} else if (result_obj.status == 3) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Aborted</span>';
} else if (result_obj.status == 2) {
color = 'success';
bg_color = 'bg-soft-success';
status_badge = '<span class="float-end badge bg-success">Task Completed</span>';
} else if (result_obj.status == 1) {
color = 'primary';
bg_color = 'bg-soft-primary';
status_badge = '<span class="float-end badge bg-primary">Running</span>';
}
$('#subscan_history_table').append(`
<div class="card border-${color} border mini-card">
<a href="#" class="text-reset item-hovered" onclick="show_subscan_results(${result_obj['id']})">
<div class="card-header ${bg_color} text-${color} mini-card-header">
${task_name} on <b>${result_obj.subdomain_name}</b> using engine <b>${htmlEncode(result_obj.engine)}</b>
</div>
<div class="card-body mini-card-body">
<p class="card-text">
${status_badge}
<span class="">
Task Completed ${result_obj.completed_ago} ago
</span>
Took ${result_obj.time_taken}
${error_message}
</p>
</div>
</a>
</div>
`);
}
$('#modal_dialog').modal('show');
}
});
}
function fetch_whois(domain_name, save_db) {
// this function will fetch the WHOIS record for a domain and display
// a snackbar once the WHOIS data has been fetched
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}`;
if (save_db) {
url += '&save_db';
}
$('[data-toggle="tooltip"]').tooltip('hide');
Snackbar.show({
text: 'Fetching WHOIS...',
pos: 'top-right',
duration: 1500,
});
$("#whois_not_fetched_alert").hide();
$("#whois_fetching_alert").show();
fetch(url, {}).then(res => res.json())
.then(function(response) {
$("#whois_fetching_alert").hide();
document.getElementById('domain_age').innerHTML = response['domain']['domain_age'] + ' ' + response['domain']['date_created'];
document.getElementById('ip_address').innerHTML = response['domain']['ip_address'];
document.getElementById('ip_geolocation').innerHTML = response['domain']['geolocation'];
document.getElementById('registrant_name').innerHTML = response['registrant']['name'];
console.log(response['registrant']['organization'])
document.getElementById('registrant_organization').innerHTML = response['registrant']['organization'] ? response['registrant']['organization'] : ' ';
document.getElementById('registrant_address').innerHTML = response['registrant']['address'] + ' ' + response['registrant']['city'] + ' ' + response['registrant']['state'] + ' ' + response['registrant']['country'];
document.getElementById('registrant_phone_numbers').innerHTML = response['registrant']['tel'];
document.getElementById('registrant_fax').innerHTML = response['registrant']['fax'];
Snackbar.show({
text: 'Whois Fetched...',
pos: 'top-right',
duration: 3000
});
$("#whois_fetched_alert").show();
$("#whois_fetched_alert").fadeTo(2000, 500).slideUp(1500, function() {
$("#whois_fetched_alert").slideUp(500);
});
}).catch(function(error) {
console.log(error);
});
}
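// Example usage (domain is illustrative): fetch_whois('example.com', true) fetches the WHOIS
// record and asks the API to persist it by appending the save_db flag to the request URL.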
function get_target_whois(domain_name) {
// this function will fetch WHOIS from the db; if it has not been fetched yet,
// it will make a fresh query and display the WHOIS data in a modal
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}&fetch_from_db`
Swal.fire({
title: `Fetching WHOIS details for ${domain_name}...`
});
swal.showLoading();
fetch(url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
console.log(response);
if (response.status) {
swal.close();
display_whois_on_modal(response);
} else {
fetch(`/api/tools/whois/?format=json&ip_domain=${domain_name}&save_db`, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
console.log(response);
if (response.status) {
swal.close();
display_whois_on_modal(response);
} else {
Swal.fire({
title: 'Oops!',
text: `reNgine could not fetch WHOIS records for ${domain_name}!`,
icon: 'error'
});
}
});
}
});
}
function get_domain_whois(domain_name, show_add_target_btn=false) {
// this function will get WHOIS for domains that are not targets; it will
// neither store the WHOIS data in the db nor create a target
var url = `/api/tools/whois/?format=json&ip_domain=${domain_name}`
Swal.fire({
title: `Fetching WHOIS details for ${domain_name}...`
});
$('.modal').modal('hide');
swal.showLoading();
fetch(url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
'Content-Type': 'application/json'
},
}).then(response => response.json()).then(function(response) {
swal.close();
if (response.status) {
display_whois_on_modal(response, show_add_target_btn=show_add_target_btn);
} else {
Swal.fire({
title: 'Oops!',
text: `reNgine could not fetch WHOIS records for ${domain_name}! ${response['message']}`,
icon: 'error'
});
}
});
}
function display_whois_on_modal(response, show_add_target_btn=false) {
console.log(response);
// this function will display whois data on modal, should be followed after get_domain_whois()
$('#modal_dialog').modal('show');
$('#modal-content').empty();
$("#modal-footer").empty();
content = `
<div class="row mt-3">
<div class="col-sm-3">
<div class="nav flex-column nav-pills nav-pills-tab" id="v-pills-tab" role="tablist" aria-orientation="vertical">
<a class="nav-link active show mb-1" id="v-pills-domain-tab" data-bs-toggle="pill" href="#v-pills-domain" role="tab" aria-controls="v-pills-domain-tab" aria-selected="true">Domain info</a>
<a class="nav-link mb-1" id="v-pills-whois-tab" data-bs-toggle="pill" href="#v-pills-whois" role="tab" aria-controls="v-pills-whois" aria-selected="false">Whois</a>
<a class="nav-link mb-1" id="v-pills-nameserver-tab" data-bs-toggle="pill" href="#v-pills-nameserver" role="tab" aria-controls="v-pills-nameserver" aria-selected="false">Nameservers</a>
<a class="nav-link mb-1" id="v-pills-history-tab" data-bs-toggle="pill" href="#v-pills-history" role="tab" aria-controls="v-pills-history" aria-selected="false">NS History</a>
</div>
</div> <!-- end col-->
<div class="col-sm-9">
<div class="tab-content pt-0">
<div class="tab-pane fade active show" id="v-pills-domain" role="tabpanel" aria-labelledby="v-pills-domain-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
<h4 class="header-title text-primary"><span class="fe-info"></span> Contact Information</h4>
<ul class="nav nav-tabs nav-bordered nav-justified">
<li class="nav-item">
<a href="#registrant-tab" data-bs-toggle="tab" aria-expanded="false" class="nav-link active">
Registrant
</a>
</li>
<li class="nav-item">
<a href="#admin-tab" data-bs-toggle="tab" aria-expanded="true" class="nav-link">
Admin
</a>
</li>
<li class="nav-item">
<a href="#technical-tab" data-bs-toggle="tab" aria-expanded="false" class="nav-link">
Technical
</a>
</li>
</ul>
<div class="tab-content">
<div class="tab-pane active" id="registrant-tab">
<div class="table-responsive">
<table class="table mb-0">
<tbody>
<tr class="">
<td><b>Name</b></td>
<td><span class="fe-user"></span> ${response.registrant.name}</td>
</tr>
<tr class="table-primary">
<td><b>Organization</b></td>
<td><span class="fe-briefcase"></span> ${response.registrant.organization}</td>
</tr>
<tr class="">
<td><b>Email</b></td>
<td><span class="fe-mail"></span> ${response.registrant.email}</td>
</tr>
<tr class="table-info">
<td><b>Phone/Fax</b></td>
<td>
<span class="fe-phone"></span> ${response.registrant.phone}
<span class="fe-printer"></span> ${response.registrant.fax}
</td>
</tr>
<tr class="">
<td><b>Address</b></td>
<td><span class="fe-home"></span> ${response.registrant.address}</td>
</tr>
<tr class="table-danger">
<td><b>Address</b></td>
<td><b>City: </b>${response.registrant.city} <b>State: </b>${response.registrant.state} <b>Zip Code: </b>${response.registrant.zipcode} <b>Country:
</b>${response.registrant.country} </td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="tab-pane" id="admin-tab">
<div class="table-responsive">
<table class="table mb-0">
<tbody>
<tr class="table-primary">
<td><b>Name</b></td>
<td><span class="fe-user"></span> ${response.admin.name}</td>
</tr>
<tr class="">
<td><b>Organization</b></td>
<td><span class="fe-briefcase"></span> ${response.admin.organization}</td>
</tr>
<tr class="table-info">
<td><b>Admin ID</b></td>
<td><span class="fe-user"></span> ${response.admin.id}</td>
</tr>
<tr class="">
<td><b>Email</b></td>
<td><span class="fe-mail"></span> ${response.admin.email}</td>
</tr>
<tr class="table-success">
<td><b>Phone/Fax</b></td>
<td>
<span class="fe-phone"></span> ${response.admin.phone}
<span class="fe-printer"></span> ${response.admin.fax}
</td>
</tr>
<tr class="">
<td><b>Address</b></td>
<td><span class="fe-home"></span> ${response.admin.address}</td>
</tr>
<tr class="table-danger">
<td><b>Address</b></td>
<td><b>City: </b>${response.admin.city} <b>State: </b>${response.admin.state} <b>Zip Code: </b>${response.admin.zipcode} <b>Country:
</b>${response.admin.country} </td>
</tr>
</tbody>
</table>
</div>
</div>
<div class="tab-pane" id="technical-tab">
<div class="table-responsive">
<table class="table mb-0">
<tbody>
<tr class="table-info">
<td><b>Name</b></td>
<td><span class="fe-user"></span> ${response.technical_contact.name}</td>
</tr>
<tr class="">
<td><b>Organization</b></td>
<td><span class="fe-briefcase"></span> ${response.technical_contact.organization}</td>
</tr>
<tr class="table-primary">
<td><b>Tech ID</b></td>
<td><span class="fe-user"></span> ${response.technical_contact.id}</td>
</tr>
<tr class="">
<td><b>Email</b></td>
<td><span class="fe-mail"></span> ${response.technical_contact.email}</td>
</tr>
<tr class="table-success">
<td><b>Phone/Fax</b></td>
<td>
<span class="fe-phone"></span> ${response.technical_contact.phone}
<span class="fe-printer"></span> ${response.technical_contact.fax}
</td>
</tr>
<tr>
<td><b>Address</b></td>
<td><span class="fe-home"></span> ${response.technical_contact.address}</td>
</tr>
<tr class="table-danger">
<td><b>Address</b></td>
<td><b>City: </b>${response.technical_contact.city} <b>State: </b>${response.technical_contact.state} <b>Zip Code: </b>${response.technical_contact.zipcode} <b>Country:
</b>${response.technical_contact.country} </td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
<div class="tab-pane fade" id="v-pills-whois" role="tabpanel" aria-labelledby="v-pills-whois-tab">
<pre data-simplebar style="max-height: 310px; min-height: 310px;">${response.raw_text}</pre>
</div>
<div class="tab-pane fade" id="v-pills-history" role="tabpanel" aria-labelledby="v-pills-history-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
</div>
<div class="tab-pane fade" id="v-pills-nameserver" role="tabpanel" aria-labelledby="v-pills-nameserver-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
`;
for (var ns in response.nameservers) {
var ns_object = response.nameservers[ns];
content += `<span class="badge badge-soft-primary me-1 ms-1">${ns_object}</span>`;
}
content += `
</div>
<div class="tab-pane fade" id="v-pills-related" role="tabpanel" aria-labelledby="v-pills-related-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
<!--<span class="badge badge-soft-primary badge-link waves-effect waves-light me-1" data-toggle="tooltip" title="Add {{domain}} as target." onclick="add_target('{{domain}}')">{{domain}}</span>-->
</div>
<div class="tab-pane fade" id="v-pills-related-tld" role="tabpanel" aria-labelledby="v-pills-related-tld-tab" data-simplebar style="max-height: 300px; min-height: 300px;">
<!--<span class="badge badge-soft-primary badge-link waves-effect waves-light me-1" data-toggle="tooltip" title="Add {{domain}} as target." onclick="add_target('{{domain}}')">{{domain}}</span>-->
</div>
</div>
</div>
</div>
`;
if (show_add_target_btn) {
content += `<div class="text-center">
<button class="btn btn-primary float-end mt-4" type="submit" id="search_whois_toolbox_btn" onclick="add_target('${response['ip_domain']}')">Add ${response['ip_domain']} as target</button>
</div>`
}
$('#modal-content').append(content);
$('[data-toggle="tooltip"]').tooltip();
}
function show_quick_add_target_modal() {
// this function will display the modal to add target
$('#modal_title').html('Add target');
$('#modal-content').empty();
$('#modal-content').append(`
If you would like to add IPs/CIDRs or multiple domains, please <a href="/target/add/target">click here.</a>
<div class="mb-3">
<label for="target_name_modal" class="form-label">Target Name</label>
<input class="form-control" type="text" id="target_name_modal" required="" placeholder="yourdomain.com">
</div>
<div class="mb-3">
<label for="target_description_modal" class="form-label">Description (Optional)</label>
<input class="form-control" type="text" id="target_description_modal" required="" placeholder="Target Description">
</div>
<div class="mb-3">
<label for="h1_handle_modal" class="form-label">Hackerone Target Team Handle (Optional)</label>
<input class="form-control" type="text" id="h1_handle_modal" placeholder="hackerone.com/team_handle, Only enter team_handle after /">
</div>
<div class="mb-3 text-center">
<button class="btn btn-primary float-end" type="submit" id="add_target_modal" onclick="add_quick_target()">Add Target</button>
</div>
`);
$('#modal_dialog').modal('show');
}
function add_quick_target() {
// this function will be a onclick for add target button on add_target modal
$('#modal_dialog').modal('hide');
var domain_name = $('#target_name_modal').val();
var description = $('#target_description_modal').val();
var h1_handle = $('#h1_handle_modal').val();
add_target(domain_name, h1_handle = h1_handle, description = description);
}
function add_target(domain_name, h1_handle = null, description = null) {
// this function will add domain_name as target
const add_api = '/api/add/target/?format=json';
const data = {
'domain_name': domain_name,
'h1_team_handle': h1_handle,
'description': description
};
swal.queue([{
title: 'Add Target',
text: `Would you like to add ${domain_name} as target?`,
icon: 'info',
showCancelButton: true,
confirmButtonText: 'Add Target',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
return fetch(add_api, {
method: 'POST',
credentials: "same-origin",
headers: {
'X-CSRFToken': getCookie("csrftoken"),
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
}).then(function(response) {
return response.json();
}).then(function(data) {
if (data.status) {
swal.queue([{
title: 'Target Successfully added!',
text: `Do you wish to initiate a scan on the new target?`,
icon: 'success',
showCancelButton: true,
confirmButtonText: 'Initiate Scan',
padding: '2em',
showLoaderOnConfirm: true,
preConfirm: function() {
window.location = `/scan/start/${data.domain_id}`;
}
}]);
} else {
swal.insertQueueStep({
icon: 'error',
title: data.message
});
}
}).catch(function() {
swal.insertQueueStep({
icon: 'error',
title: 'Oops! Unable to add the target!'
});
})
}
}]);
}
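// Example usage (domain is illustrative): add_target('example.com', null, 'Added from WHOIS modal');
// after a successful add, the confirmation dialog offers to start a scan on the new target.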
function loadSubscanHistoryWidget(scan_history_id = null, domain_id = null) {
// This function will load the subscan history widget
if (scan_history_id) {
var data = {
'scan_history_id': scan_history_id
}
}
if (domain_id) {
var data = {
'domain_id': domain_id
}
}
fetch('/api/listSubScans/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(data) {
console.log(data);
$('#subscan_history_widget').empty();
if (data['status']) {
$('#sub_scan_history_count').append(`
<span class="badge badge-soft-primary me-1">${data['results'].length}</span>
`)
for (var result in data['results']) {
var error_message = '';
var result_obj = data['results'][result];
var task_name = get_task_name(result_obj);
if (result_obj.status == 0) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Failed</span>';
error_message = `<br><span class="text-danger">Error: ${result_obj.error_message}</span>`;
} else if (result_obj.status == 3) {
color = 'danger';
bg_color = 'bg-soft-danger';
status_badge = '<span class="float-end badge bg-danger">Aborted</span>';
} else if (result_obj.status == 2) {
color = 'success';
bg_color = 'bg-soft-success';
status_badge = '<span class="float-end badge bg-success">Task Completed</span>';
} else if (result_obj.status == 1) {
color = 'primary';
bg_color = 'bg-soft-primary';
status_badge = '<span class="float-end badge bg-primary">Running</span>';
}
$('#subscan_history_widget').append(`
<div class="card border-${color} border mini-card">
<a href="#" class="text-reset item-hovered" onclick="show_subscan_results(${result_obj['id']})">
<div class="card-header ${bg_color} text-${color} mini-card-header">
${task_name} on <b>${result_obj.subdomain_name}</b>
</div>
<div class="card-body mini-card-body">
<p class="card-text">
${status_badge}
<span class="">
Task Completed ${result_obj.completed_ago} ago
</span>
Took ${result_obj.time_taken}
${error_message}
</p>
</div>
</a>
</div>
`);
}
} else {
$('#sub_scan_history_count').append(`
<span class="badge badge-soft-primary me-1">0</span>
`)
$('#subscan_history_widget').append(`
<div class="alert alert-warning alert-dismissible fade show mt-2" role="alert">
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
No subscans have been initiated for any subdomains. You can select individual subdomains and initiate subscans such as Directory Fuzzing, Vulnerability Scan etc.
</div>
`);
}
});
}
function get_ips(scan_id=null, domain_id=null){
// this function will fetch and render ips in widget
var url = '/api/queryIps/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#ip-address-count').empty();
for (var val in data['ips']){
ip = data['ips'][val]
badge_color = ip['is_cdn'] ? 'warning' : 'primary';
if (scan_id) {
$("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${ip['ports'].length} Ports Open." onclick="get_ip_details('${ip['address']}', scan_id=${scan_id}, domain_id=null)">${ip['address']}</span>`);
}
else if (domain_id) {
$("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${ip['ports'].length} Ports Open." onclick="get_ip_details('${ip['address']}', scan_id=null, domain_id=${domain_id})">${ip['address']}</span>`);
}
// $("#ip-address").append(`<span class='badge badge-soft-${badge_color} m-1' data-toggle="modal" data-target="#tabsModal">${ip['address']}</span>`);
}
$('#ip-address-count').html(`<span class="badge badge-soft-primary me-1">${data['ips'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
function get_technologies(scan_id=null, domain_id=null){
// this function will fetch and render tech in widget
var url = '/api/queryTechnologies/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#technologies-count').empty();
for (var val in data['technologies']){
tech = data['technologies'][val]
if (scan_id) {
$("#technologies").append(`<span class='badge badge-soft-primary m-1 badge-link' data-toggle="tooltip" title="${tech['count']} Subdomains use this technology." onclick="get_tech_details('${tech['name']}', scan_id=${scan_id}, domain_id=null)">${tech['name']}</span>`);
}
else if (domain_id) {
$("#technologies").append(`<span class='badge badge-soft-primary m-1 badge-link' data-toggle="tooltip" title="${tech['count']} Subdomains use this technology." onclick="get_tech_details('${tech['name']}', scan_id=null, domain_id=${domain_id})">${tech['name']}</span>`);
}
}
$('#technologies-count').html(`<span class="badge badge-soft-primary me-1">${data['technologies'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
function get_ports(scan_id=null, domain_id=null){
// this function will fetch and render ports in widget
var url = '/api/queryPorts/?';
if (scan_id) {
url += `scan_id=${scan_id}`;
}
if (domain_id) {
url += `target_id=${domain_id}`;
}
url += `&format=json`;
$.getJSON(url, function(data) {
$('#ports-count').empty();
for (var val in data['ports']){
port = data['ports'][val]
badge_color = port['is_uncommon'] ? 'danger' : 'primary';
if (scan_id) {
$("#ports").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${port['description']}" onclick="get_port_details('${port['number']}', scan_id=${scan_id}, domain_id=null)">${port['number']}/${port['service_name']}</span>`);
}
else if (domain_id){
$("#ports").append(`<span class='badge badge-soft-${badge_color} m-1 badge-link' data-toggle="tooltip" title="${port['description']}" onclick="get_port_details('${port['number']}', scan_id=null, domain_id=${domain_id})">${port['number']}/${port['service_name']}</span>`);
}
}
$('#ports-count').html(`<span class="badge badge-soft-primary me-1">${data['ports'].length}</span>`);
$("body").tooltip({ selector: '[data-toggle=tooltip]' });
});
}
function get_ip_details(ip_address, scan_id=null, domain_id=null){
var port_url = `/api/queryPorts/?ip_address=${ip_address}`;
var subdomain_url = `/api/querySubdomains/?ip_address=${ip_address}`;
if (scan_id) {
port_url += `&scan_id=${scan_id}`;
subdomain_url += `&scan_id=${scan_id}`;
}
else if(domain_id){
port_url += `&target_id=${domain_id}`;
subdomain_url += `&target_id=${domain_id}`;
}
port_url += `&format=json`;
subdomain_url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
var port_loader = `<span class="inner-div spinner-border text-primary align-self-center loader-sm" id="port-modal-loader"></span>`;
var subdomain_loader = `<span class="inner-div spinner-border text-primary align-self-center loader-sm" id="subdomain-modal-loader"></span>`;
// add tab modal title
$('#modal_title').html('Details for IP: <b>' + ip_address + '</b>');
$('#modal-content').empty();
$('#modal-tabs').empty();
$('#modal-content').append(`<ul class='nav nav-tabs nav-bordered' id="modal_tab_nav"></ul><div id="modal_tab_content" class="tab-content"></div>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link active" data-bs-toggle="tab" href="#modal_content_port" aria-expanded="true"><span id="modal-open-ports-count"></span>Open Ports ${port_loader}</a></li>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link" data-bs-toggle="tab" href="#modal_content_subdomain" aria-expanded="false"><span id="modal-subdomain-count"></span>Subdomains ${subdomain_loader}</a></li>`)
// add content area
$('#modal_tab_content').empty();
$('#modal_tab_content').append(`<div class="tab-pane show active" id="modal_content_port"></div><div class="tab-pane" id="modal_content_subdomain"></div>`);
$('#modal-open-ports').append(`<div class="modal-text" id="modal-text-open-port"></div>`);
$('#modal-text-open-port').append(`<ul id="modal-open-port-text"></ul>`);
$('#modal_content_port').append(`<ul id="modal_port_ul"></ul>`);
$('#modal_content_subdomain').append(`<ul id="modal_subdomain_ul"></ul>`);
$.getJSON(port_url, function(data) {
$('#modal_content_port').empty();
$('#modal_content_port').append(`<p>IP Address ${ip_address} has ${data['ports'].length} open ports</p>`);
$('#modal-open-ports-count').html(`<b>${data['ports'].length}</b> `);
for (port in data['ports']){
port_obj = data['ports'][port];
badge_color = port_obj['is_uncommon'] ? 'danger' : 'info';
$("#modal_content_port").append(`<li class="mt-1">${port_obj['description']} <b class="text-${badge_color}">(${port_obj['number']}/${port_obj['service_name']})</b></li>`)
}
$("#port-modal-loader").remove();
});
$('#modal_dialog').modal('show');
// query subdomains
$.getJSON(subdomain_url, function(data) {
$('#modal_content_subdomain').empty();
$('#modal_content_subdomain').append(`<p>${data['subdomains'].length} subdomains are associated with IP ${ip_address}</p>`);
$('#modal-subdomain-count').html(`<b>${data['subdomains'].length}</b> `);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal_content_subdomain").append(`<li class="mt-1" id="${li_id}"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal_content_subdomain").append(`<li class="mt-1 text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal-text-subdomain").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
});
}
function get_port_details(port, scan_id=null, domain_id=null){
var ip_url = `/api/queryIps/?port=${port}`;
var subdomain_url = `/api/querySubdomains/?port=${port}`;
if (scan_id) {
ip_url += `&scan_id=${scan_id}`;
subdomain_url += `&scan_id=${scan_id}`;
}
else if(domain_id){
ip_url += `&target_id=${domain_id}`;
subdomain_url += `&target_id=${domain_id}`;
}
ip_url += `&format=json`;
subdomain_url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
var ip_spinner = `<span class="spinner-border spinner-border-sm me-1" id="ip-modal-loader"></span>`;
var subdomain_spinner = `<span class="spinner-border spinner-border-sm me-1" id="subdomain-modal-loader"></span>`;
$('#modal_title').html('Details for Port: <b>' + port + '</b>');
$('#modal-content').empty();
$('#modal-tabs').empty();
$('#modal-content').append(`<ul class='nav nav-tabs nav-bordered' id="modal_tab_nav"></ul><div id="modal_tab_content" class="tab-content"></div>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link active" data-bs-toggle="tab" href="#modal_content_ip" aria-expanded="true"><span id="modal-ip-count"></span>IP Address ${ip_spinner}</a></li>`);
$('#modal_tab_nav').append(`<li class="nav-item"><a class="nav-link" data-bs-toggle="tab" href="#modal_content_subdomain" aria-expanded="false"><span id="modal-subdomain-count"></span>Subdomains ${subdomain_spinner}</a></li>`)
// add content area
$('#modal_tab_content').append(`<div class="tab-pane show active" id="modal_content_ip"></div><div class="tab-pane" id="modal_content_subdomain"></div>`);
$('#modal_content_ip').append(`<ul id="modal_ip_ul"></ul>`);
$('#modal_content_subdomain').append(`<ul id="modal_subdomain_ul"></ul>`);
$('#modal_dialog').modal('show');
$.getJSON(ip_url, function(data) {
$('#modal_ip_ul').empty();
$('#modal_ip_ul').append(`<p>${data['ips'].length} IP addresses have port ${port} open</p>`);
$('#modal-ip-count').html(`<b>${data['ips'].length}</b> `);
for (ip in data['ips']){
ip_obj = data['ips'][ip];
text_color = ip_obj['is_cdn'] ? 'warning' : '';
$("#modal_ip_ul").append(`<li class='mt-1 text-${text_color}'>${ip_obj['address']}</li>`)
}
$('#modal_ip_ul').append(`<span class="float-end text-warning">*IP Address highlighted are CDN IP Address</span>`);
$("#ip-modal-loader").remove();
});
// query subdomains
$.getJSON(subdomain_url, function(data) {
$('#modal_subdomain_ul').empty();
$('#modal_subdomain_ul').append(`<p>${data['subdomains'].length} subdomains have port ${port} open</p>`);
$('#modal-subdomain-count').html(`<b>${data['subdomains'].length}</b> `);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal_subdomain_ul").append(`<li id="${li_id}" class="mt-1"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal_subdomain_ul").append(`<li class="mt-1 text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal_subdomain_ul").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
});
}
function get_tech_details(tech, scan_id=null, domain_id=null){
var url = `/api/querySubdomains/?tech=${tech}`;
if (scan_id) {
url += `&scan_id=${scan_id}`;
}
else if(domain_id){
url += `&target_id=${domain_id}`;
}
url += `&format=json`;
var interesting_badge = `<span class="m-1 badge badge-soft-danger bs-tooltip" title="Interesting Subdomain">Interesting</span>`;
// render tab modal
$('.modal-title').html('Details for Technology: <b>' + tech + '</b>');
$('#modal_dialog').modal('show');
$('.modal-text').empty();
$('#modal-footer').empty();
$('.modal-text').append(`<div class='outer-div' id="modal-loader"><span class="inner-div spinner-border text-primary align-self-center loader-sm"></span></div>`);
// query subdomains
$.getJSON(url, function(data) {
$('#modal-loader').empty();
$('#modal-content').empty();
$('#modal-content').append(`${data['subdomains'].length} Subdomains are using ${tech}`);
for (subdomain in data['subdomains']){
subdomain_obj = data['subdomains'][subdomain];
badge_color = subdomain_obj['http_status'] >= 400 ? 'danger' : '';
li_id = get_randid();
if (subdomain_obj['http_url']) {
$("#modal-content").append(`<li id="${li_id}"><a href='${subdomain_obj['http_url']}' target="_blank" class="text-${badge_color}">${subdomain_obj['name']}</a></li>`)
}
else {
$("#modal-content").append(`<li class="text-${badge_color}" id="${li_id}">${subdomain_obj['name']}</li>`);
}
if (subdomain_obj['http_status']) {
$("#"+li_id).append(get_http_badge(subdomain_obj['http_status']));
$('.bs-tooltip').tooltip();
}
if (subdomain_obj['is_interesting']) {
$("#"+li_id).append(interesting_badge)
}
}
$("#modal-content").append(`<span class="float-end text-danger">*Subdomains highlighted are 40X HTTP Status</span>`);
$("#subdomain-modal-loader").remove();
}).fail(function(){
$('#modal-loader').empty();
});
}
function get_http_badge(http_status){
switch (true) {
case (http_status >= 400):
badge_color = 'danger'
break;
case (http_status >= 300):
badge_color = 'warning'
break;
case (http_status >= 200):
badge_color = 'success'
break;
default:
badge_color = 'danger'
}
if (http_status) {
badge = `<span class="badge badge-soft-${badge_color} me-1 ms-1 bs-tooltip" data-placement="top" title="HTTP Status">${http_status}</span>`;
return badge
}
}
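// Example: get_http_badge(301) returns a yellow (warning) badge and get_http_badge(403) a red
// (danger) badge; a falsy status returns undefined, which is why callers check http_status first.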
function get_and_render_cve_details(cve_id){
var api_url = `/api/tools/cve_details/?cve_id=${cve_id}&format=json`;
Swal.fire({
title: 'Fetching CVE Details...'
});
swal.showLoading();
fetch(api_url, {
method: 'GET',
credentials: "same-origin",
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": "application/json"
},
}).then(response => response.json()).then(function(response) {
console.log(response);
swal.close();
if (response.status) {
$('#xl-modal-title').empty();
$('#xl-modal-content').empty();
$('#xl-modal-footer').empty();
$('#xl-modal_title').html(`CVE Details of ${cve_id}`);
var cvss_score_badge = 'danger';
if (response.result.cvss > 0.1 && response.result.cvss <= 3.9) {
cvss_score_badge = 'info';
}
else if (response.result.cvss > 3.9 && response.result.cvss <= 6.9) {
cvss_score_badge = 'warning';
}
content = `<div class="row mt-3">
<div class="col-sm-3">
<div class="nav flex-column nav-pills nav-pills-tab" id="v-pills-tab" role="tablist" aria-orientation="vertical">
<a class="nav-link active show mb-1" id="v-pills-cve-details-tab" data-bs-toggle="pill" href="#v-pills-cve-details" role="tab" aria-controls="v-pills-cve-details-tab" aria-selected="true">CVE Details</a>
<a class="nav-link mb-1" id="v-pills-affected-products-tab" data-bs-toggle="pill" href="#v-pills-affected-products" role="tab" aria-controls="v-pills-affected-products-tab" aria-selected="true">Affected Products</a>
<a class="nav-link mb-1" id="v-pills-affected-versions-tab" data-bs-toggle="pill" href="#v-pills-affected-versions" role="tab" aria-controls="v-pills-affected-versions-tab" aria-selected="true">Affected Versions</a>
<a class="nav-link mb-1" id="v-pills-cve-references-tab" data-bs-toggle="pill" href="#v-pills-cve-references" role="tab" aria-controls="v-pills-cve-references-tab" aria-selected="true">References</a>
</div>
</div>
<div class="col-sm-9">
<div class="tab-content pt-0">`;
content += `
<div class="tab-pane fade active show" id="v-pills-cve-details" role="tabpanel" aria-labelledby="v-pills-cve-details-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<h4 class="header-title">${cve_id}</h4>
<div class="alert alert-warning" role="alert">
${response.result.summary}
</div>
<span class="badge badge-soft-primary">Assigner: ${response.result.assigner}</span>
<span class="badge badge-outline-primary">CVSS Vector: ${response.result['cvss-vector']}</span>
<table class="domain_details_table table table-hover table-borderless">
<tr style="display: none">
<th> </th>
<th> </th>
</tr>
<tr>
<td>CVSS Score</td>
<td><span class="badge badge-soft-${cvss_score_badge}">${response.result.cvss ? response.result.cvss: "-"}</span></td>
</tr>
<tr>
<td>Confidentiality Impact</td>
<td>${response.result.impact.confidentiality ? response.result.impact.confidentiality: "N/A"}</td>
</tr>
<tr>
<td>Integrity Impact</td>
<td>${response.result.impact.integrity ? response.result.impact.integrity: "N/A"}</td>
</tr>
<tr>
<td>Availability Impact</td>
<td>${response.result.impact.availability ? response.result.impact.availability: "N/A"}</td>
</tr>
<tr>
<td>Access Complexity</td>
<td>${response.result.access.complexity ? response.result.access.complexity: "N/A"}</td>
</tr>
<tr>
<td>Authentication</td>
<td>${response.result.access.authentication ? response.result.access.authentication: "N/A"}</td>
</tr>
<tr>
<td>CWE ID</td>
<td><span class="badge badge-outline-danger">${response.result.cwe ? response.result.cwe: "N/A"}</span></td>
</tr>
</table>
</div>
`;
content += `<div class="tab-pane fade" id="v-pills-cve-references" role="tabpanel" aria-labelledby="v-pills-cve-references-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var reference in response.result.references) {
content += `<li><a href="${response.result.references[reference]}" target="_blank">${response.result.references[reference]}</a></li>`;
}
content += `</ul></div>`;
content += `<div class="tab-pane fade" id="v-pills-affected-products" role="tabpanel" aria-labelledby="v-pills-affected-products-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var prod in response.result.vulnerable_product) {
content += `<li>${response.result.vulnerable_product[prod]}</li>`;
}
content += `</ul></div>`;
content += `<div class="tab-pane fade" id="v-pills-affected-versions" role="tabpanel" aria-labelledby="v-pills-affected-versions-tab" data-simplebar style="max-height: 600px; min-height: 600px;">
<ul>`;
for (var conf in response.result.vulnerable_configuration) {
content += `<li>${response.result.vulnerable_configuration[conf]['id']}</li>`;
}
content += `</ul></div>`;
content += `</div></div></div>`;
$('#xl-modal-content').append(content);
$('#modal_xl_scroll_dialog').modal('show');
$("body").tooltip({
selector: '[data-toggle=tooltip]'
});
}
else{
swal.fire("Error!", response.message, "error", {
button: "Okay",
});
}
});
}
function get_most_vulnerable_target(scan_id=null, target_id=null, ignore_info=false, limit=50){
$('#most_vulnerable_target_div').empty();
$('#most_vulnerable_spinner').append(`<div class="spinner-border text-primary m-2" role="status"></div>`);
var data = {};
if (scan_id) {
data['scan_history_id'] = scan_id;
}
else if (target_id) {
data['target_id'] = target_id;
}
data['ignore_info'] = ignore_info;
data['limit'] = limit;
fetch('/api/fetch/most_vulnerable/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(response) {
$('#most_vulnerable_spinner').empty();
if (response.status) {
$('#most_vulnerable_target_div').append(`
<table class="table table-borderless table-nowrap table-hover table-centered m-0">
<thead>
<tr>
<th style="width: 60%">Target</th>
<th style="width: 30%">Vulnerabilities Count</th>
</tr>
</thead>
<tbody id="most_vulnerable_target_tbody">
</tbody>
</table>
`);
for (var res in response.result) {
var targ_obj = response.result[res];
var tr = `<tr onclick="window.location='/scan/detail/vuln?domain=${targ_obj.name}';" style="cursor: pointer;">`;
if (scan_id || target_id) {
tr = `<tr onclick="window.location='/scan/detail/vuln?subdomain=${targ_obj.name}';" style="cursor: pointer;">`;
}
$('#most_vulnerable_target_tbody').append(`
${tr}
<td>
<h5 class="m-0 fw-normal">${targ_obj.name}</h5>
</td>
<td>
<span class="badge badge-outline-danger">${targ_obj.vuln_count} Vulnerabilities</span>
</td>
</tr>
`);
}
}
else{
$('#most_vulnerable_target_div').append(`
<div class="mt-4 alert alert-warning">
Could not find most vulnerable targets.
<br>
Once the vulnerability scan is performed, reNgine will identify the most vulnerable targets.</div>
`);
}
});
}
function get_most_common_vulnerability(scan_id=null, target_id=null, ignore_info=false, limit=50){
$('#most_common_vuln_div').empty();
$('#most_common_vuln_spinner').append(`<div class="spinner-border text-primary m-2" role="status"></div>`);
var data = {};
if (scan_id) {
data['scan_history_id'] = scan_id;
}
else if (target_id) {
data['target_id'] = target_id;
}
data['ignore_info'] = ignore_info;
data['limit'] = limit;
fetch('/api/fetch/most_common_vulnerability/?format=json', {
method: 'POST',
credentials: "same-origin",
body: JSON.stringify(data),
headers: {
"X-CSRFToken": getCookie("csrftoken"),
"Content-Type": 'application/json',
}
}).then(function(response) {
return response.json();
}).then(function(response) {
$('#most_common_vuln_spinner').empty();
if (response.status) {
$('#most_common_vuln_div').append(`
<table class="table table-borderless table-nowrap table-hover table-centered m-0">
<thead>
<tr>
<th style="width: 60%">Vulnerability Name</th>
<th style="width: 20%">Count</th>
<th style="width: 20%">Severity</th>
</tr>
</thead>
<tbody id="most_common_vuln_tbody">
</tbody>
</table>
`);
for (var res in response.result) {
var vuln_obj = response.result[res];
var vuln_badge = '';
switch (vuln_obj.severity) {
case -1:
vuln_badge = get_severity_badge('Unknown');
break;
case 0:
vuln_badge = get_severity_badge('Info');
break;
case 1:
vuln_badge = get_severity_badge('Low');
break;
case 2:
vuln_badge = get_severity_badge('Medium');
break;
case 3:
vuln_badge = get_severity_badge('High');
break;
case 4:
vuln_badge = get_severity_badge('Critical');
break;
default:
vuln_badge = get_severity_badge('Unknown');
}
$('#most_common_vuln_tbody').append(`
<tr onclick="window.location='/scan/detail/vuln?vulnerability_name=${vuln_obj.name}';" style="cursor: pointer;">
<td>
<h5 class="m-0 fw-normal">${vuln_obj.name}</h5>
</td>
<td>
<span class="badge badge-outline-danger">${vuln_obj.count}</span>
</td>
<td>
${vuln_badge}
</td>
</tr>
`);
}
}
else{
$('#most_common_vuln_div').append(`
<div class="mt-4 alert alert-warning">
Could not find Most Common Vulnerabilities.
<br>
Once the vulnerability scan is performed, reNgine will identify the Most Common Vulnerabilities.</div>
`);
}
});
}
function highlight_search(search_keyword, content){
    // returns the content with every occurrence of the search keyword wrapped in <mark> tags
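    // note: the keyword is interpolated into the RegExp verbatim, so any regex
    // metacharacters in the search term are treated as pattern syntax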
var reg = new RegExp('('+search_keyword+')', 'gi');
return content.replace(reg, '<mark>$1</mark>');
}
function validURL(str) {
// checks for valid http url
var pattern = new RegExp('^(https?:\\/\\/)?'+ // protocol
'((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.)+[a-z]{2,}|'+ // domain name
'((\\d{1,3}\\.){3}\\d{1,3}))'+ // OR ip (v4) address
'(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*'+ // port and path
'(\\?[;&a-z\\d%_.~+=-]*)?'+ // query string
'(\\#[-a-z\\d_]*)?$','i'); // fragment locator
return !!pattern.test(str);
}
function shadeColor(color, percent) {
//https://stackoverflow.com/a/13532993
var R = parseInt(color.substring(1,3),16);
var G = parseInt(color.substring(3,5),16);
var B = parseInt(color.substring(5,7),16);
R = parseInt(R * (100 + percent) / 100);
G = parseInt(G * (100 + percent) / 100);
B = parseInt(B * (100 + percent) / 100);
R = (R<255)?R:255;
G = (G<255)?G:255;
B = (B<255)?B:255;
var RR = ((R.toString(16).length==1)?"0"+R.toString(16):R.toString(16));
var GG = ((G.toString(16).length==1)?"0"+G.toString(16):G.toString(16));
var BB = ((B.toString(16).length==1)?"0"+B.toString(16):B.toString(16));
return "#"+RR+GG+BB;
}
| yogeshojha | 9fb69660d763b79e6ab505099c7e4cd58f19761c | 18be197fed32ce87979564bb50f002e46290bc3f | ## Inefficient regular expression
This part of the regular expression may cause exponential backtracking on strings starting with '0' and containing many repetitions of '0'.
[Show more details](https://github.com/yogeshojha/rengine/security/code-scanning/134) | github-advanced-security[bot] | 31 |
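A minimal illustration of the concern above, sketched in Python for brevity (the flagged pattern itself lives in the JavaScript `validURL` helper earlier in this file): nested quantifiers such as `(([a-z\d]([a-z\d-]*[a-z\d])*)\.)+` can give the engine exponentially many ways to split a long run of '0' characters before the match finally fails. One way to sidestep that class of problem is to parse the URL structurally rather than validating it with a single backtracking regex; the helper name and acceptance rules below are assumptions, not reNgine's actual logic.

```python
# Hedged sketch: a structural URL check that avoids catastrophic backtracking.
from urllib.parse import urlparse

def is_probable_http_url(value: str) -> bool:
    """Return True if value parses as an http(s) URL with a dotted hostname."""
    parsed = urlparse(value if '://' in value else 'http://' + value)
    host = parsed.hostname or ''
    # Require an expected scheme and a hostname containing a dot, instead of
    # matching the whole string against a nested-quantifier regex.
    return parsed.scheme in ('http', 'https') and '.' in host

if __name__ == '__main__':
    print(is_probable_http_url('https://example.com/path?q=1'))  # True
    print(is_probable_http_url('0' * 5000))  # False, returns immediately, no backtracking involved
```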
yogeshojha/rengine | 530 | Fix #529 | Nuclei returns the response to stdout:
`{"template-id":"tech-detect","info":{"name":"Wappalyzer Technology Detection","author":["hakluke"],"tags":["tech"],"reference":null,"severity":"info"},"matcher-name":"nginx","type":"http","host":"https://example.com:443","matched-at":"https://example.com:443","timestamp":"2021-10-31T09:39:47.1571248Z","curl-command":"curl -X 'GET' -d '' -H 'Accept: */*' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36' 'https://example.com'"}`
It needs to read host_url from matched-at, not from matched. | null | 2021-10-31 10:27:33+00:00 | 2021-11-01 16:58:16+00:00 | web/reNgine/tasks.py | import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from reNgine.celery import app
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType
from django.conf import settings
from django.shortcuts import get_object_or_404
from celery import shared_task
from datetime import datetime
from degoogle import degoogle
from django.conf import settings
from django.utils import timezone, dateformat
from django.shortcuts import get_object_or_404
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
    Add GF pattern names to db for the dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
        a target is itself a subdomain; some tools give subdomains as
        www.yogeshojha.com while the url and everything else resolves to yogeshojha.com.
        In that case, we already need to store the target itself as a subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
    We can have two conditions: either the subdomain scan happens, or it does not.
    In either case, because we are using imported subdomains, we need to collect
    and sort all the subdomains
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
if amass_config_path:
amass_command = amass_command + \
' -config {}'.format('/usr/src/scan_results/' + amass_config_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name, results_dir)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
print()
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
    This function runs right after subdomain gathering and gathers important
    information like page title, http status, etc
HTTP Crawler runs by default
'''
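    # A sample httpx JSON line (illustrative only; exact keys depend on the httpx
    # version in use) of the shape the parsing loop below expects:
    # {"url": "https://sub.example.com", "status-code": 200, "title": "Home",
    #  "content-length": 1234, "webserver": "nginx", "response-time": "120ms",
    #  "cnames": ["cdn.example.net"], "a": ["203.0.113.10"], "cdn": false,
    #  "technologies": ["Nginx"]}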
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
if 'url' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
                # check whether to ignore 404 or 5xx responses
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
    # scan directories for all the alive subdomains with http status > 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
    This function is responsible for fetching all the urls associated with the target
    and running an HTTP probe.
    It first runs gau to gather all urls from wayback, then uses hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
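    # e.g. for example.com the pattern above is expected to match urls such as
    # https://blog.example.com/admin (sketch; presumably used by get_urls.sh to
    # keep only in-scope urls)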
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
                        gau or gospider can gather interesting endpoints which,
                        when parsed, can reveal subdomains that the subdomain scan
                        did not find, so we store them
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
    Gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
    takes forever to scan even a simple website, so we will do http probing
    and filter out HTTP status 404; this way we can reduce the number of
    non-existent URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path, this will be sent to vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
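    # Illustrative sketch of the unfurl step below (assuming the %s://%d%p format
    # keeps only scheme, domain and path):
    #   https://sub.example.com/login?next=/home  ->  https://sub.example.com/login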
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
    Either a custom template or a default template has to be supplied; if neither has
    been supplied then use all templates, including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
        # Update nuclei command with concurrency
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
        # Update nuclei command with rate limit
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
        # Update nuclei command with timeout
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
        # Update nuclei command with retries
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st:
vulnerability.http_url = json_st['matched']
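                        # Sketch (not part of the original file): newer nuclei builds report the
                        # matched URL under 'matched-at' rather than 'matched' (see the PR
                        # description above), so a version-tolerant read could look like:
                        #   matched_url = json_st.get('matched-at') or json_st.get('matched')
                        #   if matched_url:
                        #       vulnerability.http_url = matched_url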
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'target_domain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
# look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'replt.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
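# Builds the final Google query (scoped with site:<domain> when in_target is True,
# otherwise quoting the domain name), runs it through degoogle and stores every hit
# as a Dork object attached to the scan history.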
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
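# Runs theHarvester against the target domain (honouring any configured proxy list),
# then loads the generated HTML report in headless Firefox to read its 'tabledata'
# variable and persist the discovered emails and employees.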
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get('https_proxy'):
del os.environ['https_proxy']
if os.environ.get('HTTPS_PROXY'):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
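# Collects email addresses via emailfinder (Google, Bing and Baidu), saves them,
# and writes them along with wildcard patterns to creds_target.txt for the
# leaked-credentials lookup that follows.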
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
def get_and_save_leaked_credentials(scan_history, results_dir):
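# Feeds creds_target.txt to pwndb through the tor proxy and attaches any leaked
# passwords to the corresponding Email objects.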
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
def get_and_save_meta_info(meta_dict):
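# Uses metafinder to fetch documents found via Google search (bounded by
# documents_limit) and stores their metadata (author, producer, dates, OS info, ...)
# as MetaFinderDocument entries.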
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from datetime import datetime
from degoogle import degoogle
from django.conf import settings
from django.shortcuts import get_object_or_404
from django.utils import timezone, dateformat
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
# module-level logger used throughout this file; the wildcard import from
# common_func may already expose one, but defining it here keeps the module self-contained
logger = logging.getLogger(__name__)
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
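# Overall pipeline: subdomain discovery (or skip), HTTP crawling, then the optional
# steps selected on the engine: screenshots, port scan, OSINT, directory bruteforce,
# endpoint fetching and vulnerability scan.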
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain; some tools give subdomains such as
www.yogeshojha.com while the url and everything else resolves to yogeshojha.com.
In that case, we also need to store the target itself as a subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if task.subdomain_discovery:
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.port_scan:
activity_id = create_scan_activity(task, "Port Scanning", 1)
# pass activity_id so port_scanning can mark the activity as failed on error
port_scanning(task, domain, yaml_configuration, results_dir, activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
We can have two conditions, either subdomain scan happens, or subdomain scan
does not happen; in either case, because we are using imported subdomains, we
need to collect and sort all the subdomains.
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
# append (>>) so the target domain written above is not overwritten
os.system(
'cat {0}/from_imported.txt >> {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
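# Each selected tool writes its findings to from_<tool>.txt inside results_dir;
# these files are merged with the target domain, deduplicated with sort -u into
# sorted_subdomain_collection.txt and finally stored as Subdomain rows.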
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
if amass_config_path:
amass_command = amass_command + \
' -config {}'.format('/usr/src/scan_results/' + amass_config_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in (out_of_scope_subdomains or []):
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
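# Compares the current scan's subdomain names with the previous subdomain-discovery
# scan of the same domain and returns the ones that are newly discovered.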
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
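# Inverse of get_new_added_subdomain: returns subdomains that existed in the
# previous scan but are missing from the current one.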
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
This function runs right after subdomain gathering and collects important
information like page title, http status, etc.
HTTP Crawler runs by default.
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
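# Every gathered subdomain is piped through httpx; each line of the resulting
# JSON file describes one probed host and is parsed below.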
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# httpx reports the probed hostname in 'input' on recent versions;
# fall back to parsing the host out of 'url' for older versions
if 'input' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# TODO: optionally ignore 404 and 5xx responses here
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
# scan directories for all the alive subdomains (those with an http_url)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
This function is responsible for fetching all the urls associated with the target
and running an HTTP probe on them.
It first runs gau to gather urls from the wayback machine, then uses hakrawler to identify more urls.
'''
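# URL gathering itself is delegated to the get_urls.sh helper in TOOL_LOCATION,
# which runs the selected tools and is expected to leave the combined list in
# results_dir/all_urls.txt for the parsing below.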
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
gau or gospider can gather interesting endpoints which,
when parsed, can reveal subdomains that did not show up in the
subdomain scan, so store them as well
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
takes forever to scan even a simple website, so we do http probing
and filter out HTTP status 404; this way we can reduce the number of non-existent
URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL ' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path, this will be sent to vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either custom template has to be supplied or default template, if neither has
been supplied then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
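# nuclei is run once per selected severity; each run writes a fresh
# vulnerability.json which is parsed and saved before the next severity starts.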
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
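# map nuclei's severity string to the integer levels stored on the
# Vulnerability model (0 info, 1 low, 2 medium, 3 high, 4 critical)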
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st: # TODO remove in rengine 1.1. 'matched' isn't used in nuclei 2.5.3
vulnerability.http_url = json_st['matched']
if 'matched-at' in json_st:
vulnerability.http_url = json_st['matched-at']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
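# Helper that persists a Subdomain row from a DottedDict of optional attributes
# collected by the different scan steps.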
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
    endpoint.subdomain = endpoint_dict.get('subdomain') if 'subdomain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
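# NOTE (illustration only, not part of the scan flow): save_subdomain() and
# save_endpoint() above are fed DottedDict payloads by callers in this module.
# A minimal, hypothetical payload shape, assuming `task` (ScanHistory) and
# `domain` (Domain) objects already exist, would look like:
#
#     payload = DottedDict({
#         'scan_history': task,
#         'target_domain': domain,
#         'name': 'api.example.com',   # hypothetical subdomain
#     })
#     subdomain = save_subdomain(payload)
#
# Keys that are missing simply fall back to the None/0/False defaults above.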
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
        # look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
            'repl.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
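# Illustration only (not called anywhere in the scan flow): the dork queries
# above are built by joining "site:"/"ext:"/"inurl:" clauses with " | " and
# then trimming the leading separator with dork[3:]. A minimal sketch of that
# pattern, using a hypothetical list of sites:
def _example_build_dork(sites=('github.com', 'gitlab.com')):
    dork = ''
    for site in sites:
        dork = dork + ' | ' + 'site:' + site
    # dork is now " | site:github.com | site:gitlab.com"; dropping the first
    # three characters removes the leading " | " separator
    return dork[3:]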
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
    # fill leak_target_file with possible email address patterns
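    # (assumption: the '%' placeholders below act as wildcard patterns for
    # pwndb, which consumes this file later in get_and_save_leaked_credentials;
    # e.g. '%@example.com' would match any leaked mailbox on the target domain)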
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
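        # assumed pwndb output shape: a JSON list with one dict per leaked
        # credential, e.g. [{"username": "alice", "domain": "example.com",
        # "password": "..."}]; the 'donate' placeholder entry is skipped below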
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
| radaram | 43af3a6aecdece4923ee74b108853f7b9c51ed12 | 27d6ec5827a51fd74e3ab97a5cef38fc7f5d9168 | But we can remove this, not sure if this `matched` is returned to some specific version. | yogeshojha | 32 |
yogeshojha/rengine | 530 | Fix #529 | Nuclei returns the response to stdout:
`{"template-id":"tech-detect","info":{"name":"Wappalyzer Technology Detection","author":["hakluke"],"tags":["tech"],"reference":null,"severity":"info"},"matcher-name":"nginx","type":"http","host":"https://example.com:443","matched-at":"https://example.com:443","timestamp":"2021-10-31T09:39:47.1571248Z","curl-command":"curl -X 'GET' -d '' -H 'Accept: */*' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36' 'https://example.com'"}`
It needs to read host_url from matched-at, not from matched. | null | 2021-10-31 10:27:33+00:00 | 2021-11-01 16:58:16+00:00 | web/reNgine/tasks.py | import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from reNgine.celery import app
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType
from django.conf import settings
from django.shortcuts import get_object_or_404
from datetime import datetime
from degoogle import degoogle
from django.utils import timezone, dateformat
from django.core.exceptions import ObjectDoesNotExist
from reNgine.definitions import *
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
        a target in itself is a subdomain; some tools give subdomains as
        www.yogeshojha.com, but the url and everything else resolves to yogeshojha.com.
        In that case, we already need to store the target itself as a subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
    There are two conditions: either the subdomain scan happens, or it does not.
    In either case, because we are using imported subdomains, we need to collect
    and sort all the subdomains
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
        'cat {0}/from_imported.txt >> {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
if amass_config_path:
amass_command = amass_command + \
' -config {}'.format('/usr/src/scan_results/' + amass_config_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name, results_dir)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
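        # results are ordered newest first: index [0] is the current scan, so
        # index [1] is the previous scan used as the comparison baseline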
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
print()
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
    This function runs right after subdomain gathering, and gathers important
    information like page title, http status, etc
HTTP Crawler runs by default
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
if 'url' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
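                    # httpx reports response-time with a unit suffix (e.g.
                    # "110ms" or "1.2s"); strip the letters and normalize
                    # millisecond values to seconds below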
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# see if to ignore 404 or 5xx
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
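            # whatportis looks up the service name/description for this port
            # from its bundled port registry (it may return an empty list)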
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
    # scan directories for all the alive subdomains with http status > 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
    This function is responsible for fetching all the urls associated with the target
    and running an HTTP probe on them
    It first runs gau to gather all urls from wayback, then uses hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
                            gau or gospider can gather interesting endpoints which,
                            when parsed, can give subdomains that did not exist in the
                            subdomain scan, so we store them
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
Go spider & waybackurls accumulates a lot of urls, which is good but nuclei
takes forever to scan even a simple website, so we will do http probing
and filter HTTP status 404, this way we can reduce the number of Non Existent
URLS
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
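            # gf (tomnomnom/gf) greps the collected urls against the named
            # pattern definition (e.g. xss, sqli); matches are appended to a
            # per-pattern file and then linked back to EndPoint rows below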
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path, this will be sent to vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
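        # unfurl's format verbs keep only scheme (%s), domain (%d) and path
        # (%p), which drops query strings so near-duplicate urls collapse into
        # a single entry before nuclei runs against them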
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either custom template has to be supplied or default template, if neither has
been supplied then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st:
vulnerability.http_url = json_st['matched']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
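# Illustrative sketch (not part of the scan flow): the parser above reads one JSON
# object per line from vulnerability.json. A hypothetical nuclei result line and the
# keys consumed could look like this (all values are made up for illustration):
#
#   {"host": "https://sub.example.com", "matched": "https://sub.example.com/login",
#    "templateID": "example-template", "matcher_name": "word",
#    "info": {"name": "Example Finding", "severity": "medium",
#             "tags": "example", "description": "...", "reference": "..."}}
#
# 'info.severity' is mapped to the integer stored on Vulnerability.severity
# (info=0, low=1, medium=2, high=3, critical=4), as done in the loop above.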
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
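# Note (inferred from the callers in this file, not an authoritative reference):
# the status integers passed to create_scan_activity/update_last_activity appear
# to be used as 0 = failed, 1 = in progress, 2 = completed. A typical usage sketch:
#
#   activity_id = create_scan_activity(task, "Port Scanning", 1)  # mark as running
#   ...run the step...
#   update_last_activity(activity_id, 2)                          # mark as completed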
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'subdomain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
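# Minimal usage sketch for the two helpers above, assuming a ScanHistory instance
# `task` and a Domain instance `domain` are already available (names and the
# example hostname are illustrative only):
#
#   subdomain = save_subdomain(DottedDict({
#       'scan_history': task,
#       'target_domain': domain,
#       'name': 'sub.example.com',
#   }))
#   save_endpoint(DottedDict({
#       'scan_history': task,
#       'target_domain': domain,
#       'subdomain': subdomain,
#       'http_url': 'https://sub.example.com/',
#   }))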
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
# look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'repl.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
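# Illustrative example of how the final degoogle query is assembled above
# (example.com is a placeholder domain, the dorks are hypothetical):
#   in_target=True  -> 'ext:env | ext:xml site:example.com'
#   in_target=False -> 'site:github.com | site:gitlab.com "example.com"'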
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
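# The %-prefixed lines written above are wildcard patterns for pwndb (used by
# get_and_save_leaked_credentials below); the assumption here is that '%' matches
# any sequence of characters in a pwndb query, so e.g. '%@example.com' asks for
# any leaked mailbox on the target domain. This is an explanatory note only.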
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import subprocess
import metafinder.extractor as metadata_extractor
import whatportis
from datetime import datetime
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from degoogle import degoogle
from django.conf import settings
from django.shortcuts import get_object_or_404
from django.utils import timezone, dateformat
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain; some tools report subdomains such as
www.yogeshojha.com while the URL and everything else resolves to yogeshojha.com.
In that case, we already need to store the target itself as a subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir, activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
We can have two conditions: either the subdomain scan happens, or it does not.
In either case, because we are using imported subdomains, we need to collect
and sort all the subdomains.
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt >> {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=[]):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
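# Sketch of the diff logic used by the two helpers above, with hypothetical data:
# if the previous scan found {a.example.com, b.example.com} and the current scan
# found {b.example.com, c.example.com}, get_new_added_subdomain returns the
# Subdomain rows for c.example.com and get_removed_subdomain returns a.example.com.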
def http_crawler(task, domain, results_dir, activity_id):
'''
This function runs right after subdomain gathering and gathers important
information like page title, http status, etc.
HTTP Crawler runs by default
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx that do not report 'input'
if 'input' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# see if to ignore 404 or 5xx
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
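# Illustrative (hypothetical) httpx JSON line consumed by http_crawler above; only
# keys actually read by the parser are shown, and all values are made up:
#
#   {"url": "https://sub.example.com", "input": "sub.example.com",
#    "status-code": 200, "title": "Example", "content-length": 1234,
#    "content-type": "text/html", "webserver": "nginx",
#    "response-time": "120ms", "cnames": ["cdn.example.net"],
#    "technologies": ["Nginx"], "a": ["93.184.216.34"], "cdn": false}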
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
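# The Requests.csv parsing above assumes (per the loop's indexing, not an authoritative
# EyeWitness reference) that column 2 holds the hostname, column 3 the capture status
# ('Successful') and column 4 the screenshot path. A hypothetical row:
#   ['https://sub.example.com', '200', 'sub.example.com', 'Successful',
#    '/usr/src/scan_results/<scan_dir>/screenshots/screens/sub.example.com.png', ...]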
def port_scanning(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
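# Hypothetical naabu JSON line read by port_scanning above (only 'ip' and 'port'
# are consumed by the parser; values are illustrative):
#
#   {"host": "sub.example.com", "ip": "93.184.216.34", "port": 443}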
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
# scan directories for all the alive subdomain with http status >
# 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' --exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
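# Hypothetical shape of the dirsearch command assembled above (paths and values
# are illustrative, the real command depends on the YAML configuration):
#   python3 /usr/src/github/dirsearch/dirsearch.py -u https://sub.example.com \
#     -w /usr/src/github/dirsearch/db/dicc.txt --format json -o <results_dir>/dirs.json \
#     -e php,git,yaml,conf,db,mysql,bak,txt -t 10 \
#     --random-agent --follow-redirects --exclude-status 403,401,404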
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
This function is responsible for fetching all the urls associated with target
and run HTTP probe
It first runs gau to gather all urls from wayback, then we will use hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignoring extensions: ' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
gau or gospider can gather interesting endpoints which, when parsed,
can reveal subdomains that were not found by the subdomain scan,
so we store them as well
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
takes forever to scan even a simple website, so we will do http probing
and filter out HTTP status 404; this way we can reduce the number of
non-existent URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL ' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
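# The GF-pattern loop above appends each newly matched pattern name to the
# comma-separated `matched_gf_patterns` string on an EndPoint. A minimal,
# hedged sketch of that merge as a pure helper (the function name is
# illustrative only, not an existing reNgine helper); it also avoids storing
# the same pattern twice:
def append_gf_pattern(existing_patterns, new_pattern):
    """Return a comma-separated pattern string with new_pattern merged in."""
    patterns = [p for p in (existing_patterns or '').split(',') if p]
    if new_pattern not in patterns:
        patterns.append(new_pattern)
    return ','.join(patterns)
# Example: append_gf_pattern('xss,sqli', 'ssrf') -> 'xss,sqli,ssrf'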
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path, this will be sent to vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either custom template has to be supplied or default template, if neither has
been supplied then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
        # Update nuclei command with concurrency
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
        # Update nuclei command with rate limit
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
        # Update nuclei command with timeout
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
        # Update nuclei command with retries
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st: # TODO remove in rengine 1.1. 'matched' isn't used in nuclei 2.5.3
vulnerability.http_url = json_st['matched']
if 'matched-at' in json_st:
vulnerability.http_url = json_st['matched-at']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
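# The if/elif chain inside vulnerability_scan() maps nuclei severity strings to
# the integer codes stored on the Vulnerability model (0=info ... 4=critical).
# A hedged, dictionary-based sketch of the same mapping (helper and constant
# names are illustrative, not part of reNgine):
NUCLEI_SEVERITY_MAP = {
    'info': 0,
    'low': 1,
    'medium': 2,
    'high': 3,
    'critical': 4,
}
def severity_to_int(severity_name):
    """Translate a nuclei severity string to its integer code, defaulting to info (0)."""
    return NUCLEI_SEVERITY_MAP.get(severity_name, 0)
# Example: severity_to_int('high') -> 3, severity_to_int('unknown') -> 0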
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
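# delete_scan_data() shells out to `find` three times. A hedged, pure-Python
# sketch of an equivalent cleanup using pathlib (assumed to match the intent:
# recursively delete .txt, .html and .json files under results_dir):
from pathlib import Path
def delete_scan_data_pathlib(results_dir):
    """Recursively remove txt/html/json result files below results_dir."""
    for extension in ('txt', 'html', 'json'):
        for file_path in Path(results_dir).rglob('*.' + extension):
            if file_path.is_file():
                file_path.unlink()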
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'target_domain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
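# save_endpoint() repeats the pattern `d.get(key) if key in d else default`.
# Because the DottedDict passed in supports dict-style .get(), the default
# argument of .get() collapses each of those conditionals. A hedged sketch of
# the idea on a plain dict (helper name is illustrative):
def endpoint_defaults(endpoint_dict):
    """Return the optional endpoint fields with the same defaults used above."""
    return {
        'page_title': endpoint_dict.get('page_title'),
        'content_type': endpoint_dict.get('content_type'),
        'webserver': endpoint_dict.get('webserver'),
        'response_time': endpoint_dict.get('response_time', 0),
        'http_status': endpoint_dict.get('http_status', 0),
        'content_length': endpoint_dict.get('content_length', 0),
        'is_default': endpoint_dict.get('is_default', False),
    }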
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
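# perform_osint() only checks for the literal keys 'discover' and 'dork' under
# the OSINT section of the YAML engine configuration. A hedged illustration of
# the shape this code expects for that section; the spellings of the intensity
# and documents-limit keys are assumptions based on the constant names used
# below (INTENSITY, OSINT_DOCUMENTS_LIMIT), and the values are examples only:
EXAMPLE_OSINT_CONFIG = {
    'discover': ['emails', 'metainfo', 'employees'],
    'dork': ['stackoverflow', 'code_sharing', 'cloud_buckets'],
    'intensity': 'normal',
    'documents_limit': 50,
}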
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
        # look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'replt.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
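# Every dork block in dorking() builds an "a | b | c" style query by
# concatenating ' | site:x' (or ' | ext:x') in a loop and then slicing off the
# leading separator with dork[3:]. A hedged sketch of the same construction
# using str.join (helper name is illustrative):
def build_dork(prefix, values):
    """Join values like ['github.com', 'gitlab.com'] into 'site:github.com | site:gitlab.com'."""
    return ' | '.join(prefix + value for value in values)
# Example: build_dork('site:', ['github.com', 'gitlab.com'])
# -> 'site:github.com | site:gitlab.com'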
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
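# theHarvester 'people' records are split on every '-' above, so a designation
# that itself contains a hyphen ends up discarded (len(split_val) > 2). A
# hedged sketch that splits only on the first hyphen and trims whitespace
# (illustrative helper, not existing reNgine code):
def split_employee_record(record):
    """Split 'Jane Doe - Senior Security Engineer' into (name, designation)."""
    parts = record.split('-', 1)
    name = parts[0].strip()
    designation = parts[1].strip() if len(parts) == 2 else ''
    return name, designation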
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
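# The block above hard-codes six pwndb wildcard patterns for the target domain.
# A hedged sketch that generates the same lines from a single list, so new
# patterns only need to be added in one place (helper name is illustrative):
def leak_target_patterns(domain_name):
    """Return the pwndb wildcard search patterns for a domain."""
    wildcards = ['%@{}', '%@%.{}', '%.%@{}', '%.%@%.{}', '%_%@{}', '%_%@%.{}']
    return [pattern.format(domain_name) for pattern in wildcards]
# Example: leak_target_patterns('example.com')[0] -> '%@example.com'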
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
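# subprocess.getoutput() above returns whatever pwndb printed, which is not
# guaranteed to be valid JSON if the tool fails; json.loads then raises and the
# whole credential block is skipped. A hedged sketch of a tolerant parse step
# (illustrative helper; assumes pwndb --output json prints a JSON list on
# success, and reuses the module-level `import json`):
def parse_pwndb_output(raw_output):
    """Return a list of credential dicts, or an empty list on malformed output."""
    try:
        parsed = json.loads(raw_output)
        return parsed if isinstance(parsed, list) else []
    except (ValueError, TypeError):
        return []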
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
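# get_and_save_meta_info() repeats `.rstrip('\x00')` for every metadata field it
# copies onto the MetaFinderDocument. A hedged sketch that normalises the
# fields of interest in one pass (field list mirrors the checks above; helper
# name is illustrative):
def clean_metadata_fields(metadata):
    """Return selected metadata values with trailing NUL bytes stripped."""
    fields = ['Producer', 'Creator', 'CreationDate', 'ModDate', 'Author', 'Title', 'OSInfo']
    return {
        field: metadata[field].rstrip('\x00')
        for field in fields
        if metadata.get(field)
    }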
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
| radaram | 43af3a6aecdece4923ee74b108853f7b9c51ed12 | 27d6ec5827a51fd74e3ab97a5cef38fc7f5d9168 | Can you please confirm, if it is always returning `matched-at` on the latest Nuclei release? Thanks | yogeshojha | 33 |
yogeshojha/rengine | 530 | Fix #529 | Nuclei returns the response to stdout:
`{"template-id":"tech-detect","info":{"name":"Wappalyzer Technology Detection","author":["hakluke"],"tags":["tech"],"reference":null,"severity":"info"},"matcher-name":"nginx","type":"http","host":"https://example.com:443","matched-at":"https://example.com:443","timestamp":"2021-10-31T09:39:47.1571248Z","curl-command":"curl -X 'GET' -d '' -H 'Accept: */*' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36' 'https://example.com'"}`
It needs to read host_url from matched-at, not from matched. | null | 2021-10-31 10:27:33+00:00 | 2021-11-01 16:58:16+00:00 | web/reNgine/tasks.py | import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from reNgine.celery import app
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType
from django.conf import settings
from django.shortcuts import get_object_or_404
from celery import shared_task
from datetime import datetime
from degoogle import degoogle
from django.conf import settings
from django.utils import timezone, dateformat
from django.shortcuts import get_object_or_404
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain, some tool give subdomains as
www.yogeshojha.com but url and everything else resolves to yogeshojha.com
In that case, we would already need to store target itself as subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
    We can have two conditions: either the subdomain scan happens, or it does not.
    In either case, because we are using imported subdomains, we need to collect
    and sort all the subdomains.
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name, results_dir)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
print()
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
    This function runs right after subdomain gathering and gathers important
    information like page title, http status, etc.
HTTP Crawler runs by default
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
                if 'input' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# see if to ignore 404 or 5xx
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
# scan directories for all the alive subdomain with http status >
# 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
This function is responsible for fetching all the urls associated with target
and run HTTP probe
It first runs gau to gather all urls from wayback, then we will use hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
gau or gospider can gather interesting endpoints which,
when parsed, can give subdomains that were not found by the
subdomain scan, so store them as well
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
takes forever to scan even a simple website, so we do http probing
and filter out HTTP status 404; this way we can reduce the number of non-existent
URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL ' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
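# Illustrative sketch (assumption, not wired into the code above): the inline
# httpx 'response-time' parsing used in fetch_endpoints could be factored into a
# small helper that converts values such as '250ms' or '1.5s' into seconds.
def _parse_response_time_sketch(raw_value):
    value = float(''.join(ch for ch in raw_value if not ch.isalpha()))
    return value / 1000 if raw_value.endswith('ms') else value
# e.g. _parse_response_time_sketch('250ms') -> 0.25, _parse_response_time_sketch('1.5s') -> 1.5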
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path; these will be sent to the vuln scan,
ignoring certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either custom template has to be supplied or default template, if neither has
been supplied then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st:
vulnerability.http_url = json_st['matched']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
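# Illustrative sketch (assumption): the if/elif ladder used above to map nuclei
# severities to integers could equivalently be a lookup table; the numeric values
# mirror the ones assigned above.
NUCLEI_SEVERITY_MAP_SKETCH = {'info': 0, 'low': 1, 'medium': 2, 'high': 3, 'critical': 4}
# e.g. severity = NUCLEI_SEVERITY_MAP_SKETCH.get(json_st['info'].get('severity', 'info'), 0)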
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'subdomain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
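# Illustrative sketch (assumption, not used by save_endpoint above): the repeated
# "value if key in dict else default" expressions can be collapsed by passing the
# default straight to dict.get; field names and defaults mirror the ones above.
def _endpoint_defaults_sketch(endpoint_dict):
    return {
        'page_title': endpoint_dict.get('page_title'),
        'webserver': endpoint_dict.get('webserver'),
        'response_time': endpoint_dict.get('response_time', 0),
        'http_status': endpoint_dict.get('http_status', 0),
        'content_length': endpoint_dict.get('content_length', 0),
        'is_default': endpoint_dict.get('is_default', False),
    }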
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
# look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'repl.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
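# Illustrative sketch (assumption): each branch of dorking above builds the dork
# string by concatenating ' | site:<host>' (or ' | ext:<ext>') and then slicing
# off the leading separator with dork[3:]; str.join produces the same result.
def _join_dork_sketch(prefix, values):
    return ' | '.join(prefix + value for value in values)
# e.g. _join_dork_sketch('site:', ['github.com', 'gitlab.com'])
# -> 'site:github.com | site:gitlab.com'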
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
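# Illustrative sketch (assumption, not used above): the wildcard username patterns
# appended to creds_target.txt for pwndb could be generated from a template list;
# the patterns mirror the ones written above.
def _leak_target_patterns_sketch(domain_name):
    templates = ['%@{d}', '%@%.{d}', '%.%@{d}', '%.%@%.{d}', '%_%@{d}', '%_%@%.{d}']
    return [template.format(d=domain_name) for template in templates]
# e.g. _leak_target_patterns_sketch('example.com')[0] -> '%@example.com'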
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
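# Illustrative sketch (assumption): the per-field checks in get_and_save_meta_info
# above could be driven by a mapping from metadata keys to MetaFinderDocument
# attribute names; keys and attributes mirror the ones handled above.
META_FIELD_MAP_SKETCH = {
    'Producer': 'producer',
    'Creator': 'creator',
    'CreationDate': 'creation_date',
    'ModDate': 'modified_date',
    'Author': 'author',
    'Title': 'title',
    'OSInfo': 'os',
}
# e.g. for key, attr in META_FIELD_MAP_SKETCH.items():
#          if metadata.get(key):
#              setattr(meta_finder_document, attr, metadata[key].rstrip('\x00'))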
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from datetime import datetime
from degoogle import degoogle
from django.conf import settings
from django.shortcuts import get_object_or_404
from django.core.exceptions import ObjectDoesNotExist
from django.utils import timezone, dateformat
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain; some tools give subdomains as
www.yogeshojha.com but the url and everything else resolves to yogeshojha.com.
In that case, we already need to store the target itself as a subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir, activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
We can have two conditions: either the subdomain scan happens, or it
does not happen. In either case, because we are using imported subdomains, we
need to collect and sort all the subdomains.
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt >> {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
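# Illustrative sketch (assumption, not used above): the cat/sort -u shell pipeline
# in skip_subdomain_scan could be reproduced in pure Python; the file names mirror
# the ones used above.
def _sort_subdomain_collection_sketch(results_dir):
    collected = set()
    for filename in ('target_domain.txt', 'from_imported.txt'):
        path = os.path.join(results_dir, filename)
        if os.path.isfile(path):
            with open(path) as handle:
                collected.update(line.strip() for line in handle if line.strip())
    with open(os.path.join(results_dir, 'sorted_subdomain_collection.txt'), 'w') as out:
        out.write('\n'.join(sorted(collected)) + '\n')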
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
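# Illustrative sketch (assumption): get_new_added_subdomain and
# get_removed_subdomain above diff two scans with queryset difference(); the same
# comparison can be expressed with plain sets of subdomain names.
def _subdomain_name_diff_sketch(current_names, previous_names):
    current, previous = set(current_names), set(previous_names)
    return {'added': current - previous, 'removed': previous - current}
# e.g. _subdomain_name_diff_sketch(['a.example.com', 'b.example.com'], ['a.example.com'])
# -> {'added': {'b.example.com'}, 'removed': set()}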
def http_crawler(task, domain, results_dir, activity_id):
'''
This function runs right after subdomain gathering, and gathers important
information like page title, http status, etc.
HTTP Crawler runs by default
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
if 'input' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# see if to ignore 404 or 5xx
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
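# Illustrative sketch (assumption): the Requests.csv parsing in grab_screenshot
# above relies on fixed column positions (row[2] host, row[3] status, row[4]
# screenshot path); a defensive iterator can skip malformed rows. The positions
# are taken from the code above, not from EyeWitness documentation.
def _successful_screenshot_rows_sketch(result_csv_path):
    with open(result_csv_path, 'r') as handle:
        for row in csv.reader(handle):
            if len(row) > 4 and row[3] == 'Successful':
                yield row[2], row[4]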
def port_scanning(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(subdomain_scan_results_file, port_results_file, scan_ports) # default command, so it stays defined even if PORTS is missing from the config
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
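# Illustrative sketch (assumption): check_waf above is still a stub; one possible
# shape, mirroring the subprocess usage elsewhere in this module, is to run
# wafw00f against a URL and keep its raw output. The exact wafw00f invocation and
# output handling are assumptions and would need to be verified.
def _check_waf_sketch(http_url):
    return subprocess.getoutput('wafw00f {}'.format(http_url))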
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
# scan directories for all the alive subdomain with http status >
# 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
This function is responsible for fetching all the urls associated with target
and run HTTP probe
It first runs gau to gather all urls from the Wayback Machine, then uses hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
gau or gospider can gather interesting endpoints which, when parsed,
can yield subdomains that were not found by the subdomain scan,
so store them as well
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
Gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
takes forever to scan even a simple website, so we will do http probing
and filter out HTTP status 404; this way we can reduce the number of
non-existent URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
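# httpx writes one JSON object per line; only the url, title, webserver, content-length,
# content-type, status-code, response-time and technologies fields are consumed below.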
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path; the result is sent to the vuln scan
and certain file extensions are ignored
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
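# unfurl format directives: %s = scheme, %d = domain, %p = path, so only scheme://domain/path
# is kept; query strings and fragments are dropped before the list reaches nuclei.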
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either a custom template or a default template has to be supplied; if neither
has been supplied, then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrency
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with rate limit
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with timeout
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with retries
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
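# One nuclei pass is run per severity level below, presumably so that each severity's
# results can be written, parsed and notified before the next pass starts.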
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st: # TODO remove in rengine 1.1. 'matched' isn't used in nuclei 2.5.3
vulnerability.http_url = json_st['matched']
if 'matched-at' in json_st:
vulnerability.http_url = json_st['matched-at']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'target_domain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
# look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'replt.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
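# The loop yields ' | site:gitter.im | site:papaly.com | ...'; dork[3:] below strips the
# leading ' | ' separator before the query is issued.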
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
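# e.g. for the Jenkins dork with in_target=True the query becomes
# 'intitle:"Dashboard [Jenkins]" site:<target domain>', which is what degoogle searches for.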
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
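# The resulting proxies.yaml simply maps the 'http' key to the configured proxy list,
# which appears to match the proxies.yaml layout shipped with theHarvester.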
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address patterns
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
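# The '%' characters are intended as wildcard placeholders for the pwndb lookup
# (get_and_save_leaked_credentials), so these lines match any local part and any
# subdomain of the target domain.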
leak_target_file.close()
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
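# Metadata values extracted from documents often carry trailing NUL bytes, hence the
# rstrip('\x00') applied to every field below.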
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
| radaram | 43af3a6aecdece4923ee74b108853f7b9c51ed12 | 27d6ec5827a51fd74e3ab97a5cef38fc7f5d9168 | Yes, of course, In version nuclei 2.5.2 passed key matched https://github.com/projectdiscovery/nuclei/blob/v2.5.2/v2/pkg/output/output.go#L78
`Matched string json:"matched,omitempty"`
In nuclei 2.5.3 (the latest version), the key matched-at is passed instead
`Matched string json:"matched-at,omitempty"` https://github.com/projectdiscovery/nuclei/blob/v2.5.3/v2/pkg/output/output.go#L78 | radaram | 34 |
yogeshojha/rengine | 530 | Fix #529 | Nuclei returns the response to stdout:
`{"template-id":"tech-detect","info":{"name":"Wappalyzer Technology Detection","author":["hakluke"],"tags":["tech"],"reference":null,"severity":"info"},"matcher-name":"nginx","type":"http","host":"https://example.com:443","matched-at":"https://example.com:443","timestamp":"2021-10-31T09:39:47.1571248Z","curl-command":"curl -X 'GET' -d '' -H 'Accept: */*' -H 'Accept-Language: en' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1944.0 Safari/537.36' 'https://example.com'"}`
It needs to read host_url from matched-at, not from matched. | null | 2021-10-31 10:27:33+00:00 | 2021-11-01 16:58:16+00:00 | web/reNgine/tasks.py | import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import metafinder.extractor as metadata_extractor
import whatportis
import subprocess
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from reNgine.celery import app
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType
from django.conf import settings
from django.shortcuts import get_object_or_404
from celery import shared_task
from datetime import datetime
from degoogle import degoogle
from django.conf import settings
from django.utils import timezone, dateformat
from django.shortcuts import get_object_or_404
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain, some tool give subdomains as
www.yogeshojha.com but url and everything else resolves to yogeshojha.com
In that case, we would already need to store target itself as subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
We can have two conditions: either the subdomain scan happens, or it does not.
In either case, because we are using imported subdomains, we need to collect
and sort all the subdomains
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
if amass_config_path:
amass_command = amass_command + \
' -config {}'.format('/usr/src/scan_results/' + amass_config_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name, results_dir)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
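# set difference: names present in the current scan but absent from the previous one
# are treated as newly added subdomains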
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
print()
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
This function runs right after subdomain gathering, and gathers important information
like page title, http status, etc
HTTP Crawler runs by default
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
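# Illustrative final command (when no proxy is configured):
#   cat <results_dir>/sorted_subdomain_collection.txt | httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent -json -o <results_dir>/httpx.json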
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
if 'url' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
# TODO: decide whether 404 or 5xx responses should be excluded here
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
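# the parser below assumes EyeWitness' Requests.csv layout:
# row[2] = hostname, row[3] = capture status, row[4] = screenshot path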
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
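# naabu -json emits one JSON object per line, each carrying at least an 'ip' and a 'port' field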
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
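# whatportis resolves the port number against its local registry to get a service name and description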
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
# scan directories for all the alive subdomains with http status 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
This function is responsible for fetching all the urls associated with the target
and running an HTTP probe.
It first runs gau to gather urls from the wayback machine, then uses hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
gau or gospider can gather interesting endpoints which,
when parsed, can reveal subdomains that were not found during the
subdomain scan, so we store them as well
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
Go spider & waybackurls accumulates a lot of urls, which is good but nuclei
takes forever to scan even a simple website, so we will do http probing
and filter HTTP status 404, this way we can reduce the number of Non Existent
URLS
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
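# gf greps the collected urls for the configured patterns (e.g. xss, sqli);
# matching urls are attached to their EndPoint records below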
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
unfurl the urls to keep only domain and path, this will be sent to vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
Either custom template has to be supplied or default template, if neither has
been supplied then use all templates including custom templates
'''
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
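# run one nuclei pass per severity level; the previous vulnerability.json is
# removed before each pass so only that severity's findings are parsed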
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
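# map nuclei severity strings to the integer levels stored on the Vulnerability model
# (0=info, 1=low, 2=medium, 3=high, 4=critical)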
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st:
vulnerability.http_url = json_st['matched']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'subdomain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
# look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'replt.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
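# dork[3:] drops the leading ' | ' separator added by the loop above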
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
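# pwndb treats '%' as a wildcard, so these patterns cover any username for the target domain and its subdomains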
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
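# metadata values may come back NUL-padded, hence the rstrip('\x00') calls below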
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
import os
import traceback
import yaml
import json
import csv
import validators
import random
import requests
import logging
import subprocess
import metafinder.extractor as metadata_extractor
import whatportis
from datetime import datetime
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium import webdriver
from emailfinder.extractor import *
from dotted_dict import DottedDict
from celery import shared_task
from discord_webhook import DiscordWebhook
from degoogle import degoogle
from django.conf import settings
from django.shortcuts import get_object_or_404
from django.utils import timezone, dateformat
from django.core.exceptions import ObjectDoesNotExist
from reNgine.celery import app
from reNgine.definitions import *
from startScan.models import *
from targetApp.models import Domain
from scanEngine.models import EngineType, Configuration, Wordlist
from .common_func import *
'''
task for background scan
'''
@app.task
def initiate_scan(
domain_id,
scan_history_id,
scan_type,
engine_type,
imported_subdomains=None,
out_of_scope_subdomains=[]
):
'''
scan_type = 0 -> immediate scan, need not create scan object
scan_type = 1 -> scheduled scan
'''
engine_object = EngineType.objects.get(pk=engine_type)
domain = Domain.objects.get(pk=domain_id)
if scan_type == 1:
task = ScanHistory()
task.scan_status = -1
elif scan_type == 0:
task = ScanHistory.objects.get(pk=scan_history_id)
# save the last scan date for domain model
domain.last_scan_date = timezone.now()
domain.save()
# once the celery task starts, change the task status to Started
task.scan_type = engine_object
task.celery_id = initiate_scan.request.id
task.domain = domain
task.scan_status = 1
task.start_scan_date = timezone.now()
task.subdomain_discovery = True if engine_object.subdomain_discovery else False
task.dir_file_search = True if engine_object.dir_file_search else False
task.port_scan = True if engine_object.port_scan else False
task.fetch_url = True if engine_object.fetch_url else False
task.osint = True if engine_object.osint else False
task.screenshot = True if engine_object.screenshot else False
task.vulnerability_scan = True if engine_object.vulnerability_scan else False
task.save()
activity_id = create_scan_activity(task, "Scanning Started", 2)
results_dir = '/usr/src/scan_results/'
os.chdir(results_dir)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated recon for target {} with engine type {}'.format(domain.name, engine_object.engine_name))
try:
current_scan_dir = domain.name + '_' + str(random.randint(100000000000, 999999999999))
os.mkdir(current_scan_dir)
task.results_dir = current_scan_dir
task.save()
except Exception as exception:
logger.error(exception)
scan_failed(task)
yaml_configuration = None
excluded_subdomains = ''
try:
yaml_configuration = yaml.load(
task.scan_type.yaml_configuration,
Loader=yaml.FullLoader)
except Exception as exception:
logger.error(exception)
# TODO: Put failed reason on db
'''
Add GF patterns name to db for dynamic URLs menu
'''
if engine_object.fetch_url and GF_PATTERNS in yaml_configuration[FETCH_URL]:
task.used_gf_patterns = ','.join(
pattern for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS])
task.save()
results_dir = results_dir + current_scan_dir
# put all imported subdomains into txt file and also in Subdomain model
if imported_subdomains:
extract_imported_subdomain(
imported_subdomains, task, domain, results_dir)
if yaml_configuration:
'''
a target in itself is a subdomain, some tool give subdomains as
www.yogeshojha.com but url and everything else resolves to yogeshojha.com
In that case, we would already need to store target itself as subdomain
'''
initial_subdomain_file = '/target_domain.txt' if task.subdomain_discovery else '/sorted_subdomain_collection.txt'
subdomain_file = open(results_dir + initial_subdomain_file, "w")
subdomain_file.write(domain.name + "\n")
subdomain_file.close()
if(task.subdomain_discovery):
activity_id = create_scan_activity(task, "Subdomain Scanning", 1)
subdomain_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id,
out_of_scope_subdomains
)
else:
skip_subdomain_scan(task, domain, results_dir)
update_last_activity(activity_id, 2)
activity_id = create_scan_activity(task, "HTTP Crawler", 1)
http_crawler(
task,
domain,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
try:
if task.screenshot:
activity_id = create_scan_activity(
task, "Visual Recon - Screenshot", 1)
grab_screenshot(
task,
domain,
yaml_configuration,
current_scan_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if(task.port_scan):
activity_id = create_scan_activity(task, "Port Scanning", 1)
port_scanning(task, domain, yaml_configuration, results_dir, activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.osint:
activity_id = create_scan_activity(task, "OSINT Running", 1)
perform_osint(task, domain, yaml_configuration, results_dir)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.dir_file_search:
activity_id = create_scan_activity(task, "Directory Search", 1)
directory_brute(
task,
domain,
yaml_configuration,
results_dir,
activity_id
)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.fetch_url:
activity_id = create_scan_activity(task, "Fetching endpoints", 1)
fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
try:
if task.vulnerability_scan:
activity_id = create_scan_activity(task, "Vulnerability Scan", 1)
vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id)
update_last_activity(activity_id, 2)
except Exception as e:
logger.error(e)
update_last_activity(activity_id, 0)
activity_id = create_scan_activity(task, "Scan Completed", 2)
if notification and notification[0].send_scan_status_notif:
send_notification('*Scan Completed*\nreNgine has finished performing recon on target {}.'.format(domain.name))
'''
Once the scan is completed, save the status to successful
'''
if ScanActivity.objects.filter(scan_of=task).filter(status=0).all():
task.scan_status = 0
else:
task.scan_status = 2
task.stop_scan_date = timezone.now()
task.save()
# cleanup results
delete_scan_data(results_dir)
return {"status": True}
def skip_subdomain_scan(task, domain, results_dir):
# store default target as subdomain
'''
If the imported subdomain already has target domain saved, we can skip this
'''
if not Subdomain.objects.filter(
scan_history=task,
name=domain.name).exists():
subdomain_dict = DottedDict({
'name': domain.name,
'scan_history': task,
'target_domain': domain
})
save_subdomain(subdomain_dict)
# Save target into target_domain.txt
with open('{}/target_domain.txt'.format(results_dir), 'w+') as file:
file.write(domain.name + '\n')
file.close()
'''
We can have two conditions, either subdomain scan happens, or subdomain scan
does not happen, in either cases, because we are using import subdomain, we
need to collect and sort all the subdomains
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt > {0}/subdomain_collection.txt'.format(results_dir))
os.system(
'cat {0}/from_imported.txt >> {0}/subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/from_imported.txt'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
def extract_imported_subdomain(imported_subdomains, task, domain, results_dir):
valid_imported_subdomains = [subdomain for subdomain in imported_subdomains if validators.domain(
subdomain) and domain.name == get_domain_from_subdomain(subdomain)]
# remove any duplicate
valid_imported_subdomains = list(set(valid_imported_subdomains))
with open('{}/from_imported.txt'.format(results_dir), 'w+') as file:
for subdomain_name in valid_imported_subdomains:
# save _subdomain to Subdomain model db
if not Subdomain.objects.filter(
scan_history=task, name=subdomain_name).exists():
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': subdomain_name,
'is_imported_subdomain': True
})
save_subdomain(subdomain_dict)
# save subdomain to file
file.write('{}\n'.format(subdomain_name))
file.close()
def subdomain_scan(task, domain, yaml_configuration, results_dir, activity_id, out_of_scope_subdomains=None):
'''
This function is responsible for performing subdomain enumeration
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Subdomain Gathering for target {} has been started'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
# check for all the tools and add them into string
# if tool selected is all then make string, no need for loop
if ALL in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS]:
tools = 'amass-active amass-passive assetfinder sublist3r subfinder oneforall'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[SUBDOMAIN_DISCOVERY][USES_TOOLS])
logging.info(tools)
# check for THREADS, by default 10
threads = 10
if THREADS in yaml_configuration[SUBDOMAIN_DISCOVERY]:
_threads = yaml_configuration[SUBDOMAIN_DISCOVERY][THREADS]
if _threads > 0:
threads = _threads
if 'amass' in tools:
if 'amass-passive' in tools:
amass_command = 'amass enum -passive -d {} -o {}/from_amass.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
# Run Amass Passive
logging.info(amass_command)
os.system(amass_command)
if 'amass-active' in tools:
amass_command = 'amass enum -active -d {} -o {}/from_amass_active.txt'.format(
domain.name, results_dir)
if USE_AMASS_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_AMASS_CONFIG]:
amass_command += ' -config /root/.config/amass.ini'
if AMASS_WORDLIST in yaml_configuration[SUBDOMAIN_DISCOVERY]:
wordlist = yaml_configuration[SUBDOMAIN_DISCOVERY][AMASS_WORDLIST]
if wordlist == 'default':
wordlist_path = '/usr/src/wordlist/deepmagic.com-prefixes-top50000.txt'
else:
wordlist_path = '/usr/src/wordlist/' + wordlist + '.txt'
if not os.path.exists(wordlist_path):
wordlist_path = '/usr/src/' + AMASS_WORDLIST
amass_command = amass_command + \
' -brute -w {}'.format(wordlist_path)
# Run Amass Active
logging.info(amass_command)
os.system(amass_command)
if 'assetfinder' in tools:
assetfinder_command = 'assetfinder --subs-only {} > {}/from_assetfinder.txt'.format(
domain.name, results_dir)
# Run Assetfinder
logging.info(assetfinder_command)
os.system(assetfinder_command)
if 'sublist3r' in tools:
sublist3r_command = 'python3 /usr/src/github/Sublist3r/sublist3r.py -d {} -t {} -o {}/from_sublister.txt'.format(
domain.name, threads, results_dir)
# Run sublist3r
logging.info(sublist3r_command)
os.system(sublist3r_command)
if 'subfinder' in tools:
subfinder_command = 'subfinder -d {} -t {} -o {}/from_subfinder.txt'.format(
domain.name, threads, results_dir)
if USE_SUBFINDER_CONFIG in yaml_configuration[SUBDOMAIN_DISCOVERY] and yaml_configuration[SUBDOMAIN_DISCOVERY][USE_SUBFINDER_CONFIG]:
subfinder_command += ' -config /root/.config/subfinder/config.yaml'
# Run Subfinder
logging.info(subfinder_command)
os.system(subfinder_command)
if 'oneforall' in tools:
oneforall_command = 'python3 /usr/src/github/OneForAll/oneforall.py --target {} run'.format(
domain.name)
# Run OneForAll
logging.info(oneforall_command)
os.system(oneforall_command)
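# OneForAll writes its results to a CSV under its results/ directory;
# the 6th column holds the discovered subdomain names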
extract_subdomain = "cut -d',' -f6 /usr/src/github/OneForAll/results/{}.csv >> {}/from_oneforall.txt".format(
domain.name, results_dir)
os.system(extract_subdomain)
# remove the results from oneforall directory
os.system(
'rm -rf /usr/src/github/OneForAll/results/{}.*'.format(domain.name))
'''
All tools have gathered the list of subdomains with filename
initials as from_*
We will gather all the results in one single file, sort them and
remove the older results from_*
'''
os.system(
'cat {0}/*.txt > {0}/subdomain_collection.txt'.format(results_dir))
'''
Write target domain into subdomain_collection
'''
os.system(
'cat {0}/target_domain.txt >> {0}/subdomain_collection.txt'.format(results_dir))
'''
Remove all the from_* files
'''
os.system('rm -f {}/from*'.format(results_dir))
'''
Sort all Subdomains
'''
os.system(
'sort -u {0}/subdomain_collection.txt -o {0}/sorted_subdomain_collection.txt'.format(results_dir))
os.system('rm -f {}/subdomain_collection.txt'.format(results_dir))
'''
The final results will be stored in sorted_subdomain_collection.
'''
# parse the subdomain list file and store in db
with open(subdomain_scan_results_file) as subdomain_list:
for _subdomain in subdomain_list:
__subdomain = _subdomain.rstrip('\n')
if not Subdomain.objects.filter(scan_history=task, name=__subdomain).exists(
) and validators.domain(__subdomain) and __subdomain not in out_of_scope_subdomains:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': __subdomain,
})
save_subdomain(subdomain_dict)
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
subdomains_count = Subdomain.objects.filter(scan_history=task).count()
send_notification('Subdomain Gathering for target {} has been completed and has discovered *{}* subdomains.'.format(domain.name, subdomains_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/sorted_subdomain_collection.txt')
# check for any subdomain changes and send notif if any
if notification and notification[0].send_subdomain_changes_notif:
newly_added_subdomain = get_new_added_subdomain(task.id, domain.id)
if newly_added_subdomain:
message = "**{} New Subdomains Discovered on domain {}**".format(newly_added_subdomain.count(), domain.name)
for subdomain in newly_added_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
removed_subdomain = get_removed_subdomain(task.id, domain.id)
if removed_subdomain:
message = "**{} Subdomains are no longer available on domain {}**".format(removed_subdomain.count(), domain.name)
for subdomain in removed_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
# check for interesting subdomains and send notif if any
if notification and notification[0].send_interesting_notif:
interesting_subdomain = get_interesting_subdomains(task.id, domain.id)
print(interesting_subdomain)
if interesting_subdomain:
message = "**{} Interesting Subdomains Found on domain {}**".format(interesting_subdomain.count(), domain.name)
for subdomain in interesting_subdomain:
message += "\n• {}".format(subdomain.name)
send_notification(message)
def get_new_added_subdomain(scan_id, domain_id):
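# compare this scan with the domain's previous subdomain-discovery scan and
# return subdomains that appear only in the current one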
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
added_subdomain = scanned_host_q1.difference(scanned_host_q2)
return Subdomain.objects.filter(
scan_history=scan_id).filter(
name__in=added_subdomain)
def get_removed_subdomain(scan_id, domain_id):
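# inverse of get_new_added_subdomain: return subdomains seen in the previous
# scan but missing from the current one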
scan_history = ScanHistory.objects.filter(
domain=domain_id).filter(
subdomain_discovery=True).filter(
id__lte=scan_id)
if scan_history.count() > 1:
last_scan = scan_history.order_by('-start_scan_date')[1]
scanned_host_q1 = Subdomain.objects.filter(
scan_history__id=scan_id).values('name')
scanned_host_q2 = Subdomain.objects.filter(
scan_history__id=last_scan.id).values('name')
removed_subdomains = scanned_host_q2.difference(scanned_host_q1)
print()
return Subdomain.objects.filter(
scan_history=last_scan).filter(
name__in=removed_subdomains)
def http_crawler(task, domain, results_dir, activity_id):
'''
    This function runs right after subdomain gathering and collects important
    information like page title, http status, etc.
    HTTP Crawler runs by default
'''
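    # Illustrative only: the parsing loop below assumes httpx emits one JSON object
    # per line, roughly of the form
    # {"url": "https://app.example.com", "input": "app.example.com", "status-code": 200,
    #  "title": "Login", "webserver": "nginx", "a": ["93.184.216.34"], ...}
    # (field names vary between httpx releases, hence the fallback handling further down).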
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('HTTP Crawler for target {} has been initiated.'.format(domain.name))
alive_file_location = results_dir + '/alive.txt'
httpx_results_file = results_dir + '/httpx.json'
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
httpx_command = 'httpx -status-code -content-length -title -tech-detect -cdn -ip -follow-host-redirects -random-agent'
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
httpx_command += ' -json -o {}'.format(
httpx_results_file
)
httpx_command = 'cat {} | {}'.format(subdomain_scan_results_file, httpx_command)
print(httpx_command)
os.system(httpx_command)
# alive subdomains from httpx
alive_file = open(alive_file_location, 'w')
# writing httpx results
if os.path.isfile(httpx_results_file):
httpx_json_result = open(httpx_results_file, 'r')
lines = httpx_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
try:
# fallback for older versions of httpx
if 'url' in json_st:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['input'])
else:
subdomain = Subdomain.objects.get(
scan_history=task, name=json_st['url'].split("//")[-1])
'''
Saving Default http urls to EndPoint
'''
endpoint = EndPoint()
endpoint.scan_history = task
endpoint.target_domain = domain
endpoint.subdomain = subdomain
if 'url' in json_st:
endpoint.http_url = json_st['url']
subdomain.http_url = json_st['url']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
subdomain.http_status = json_st['status-code']
if 'title' in json_st:
endpoint.page_title = json_st['title']
subdomain.page_title = json_st['title']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
subdomain.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
subdomain.content_type = json_st['content-type']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
subdomain.webserver = json_st['webserver']
if 'response-time' in json_st:
response_time = float(
''.join(
ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
subdomain.response_time = response_time
if 'cnames' in json_st:
cname_list = ','.join(json_st['cnames'])
subdomain.cname = cname_list
discovered_date = timezone.now()
endpoint.discovered_date = discovered_date
subdomain.discovered_date = discovered_date
endpoint.is_default = True
endpoint.save()
subdomain.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
subdomain.technologies.add(tech)
endpoint.technologies.add(tech)
if 'a' in json_st:
for _ip in json_st['a']:
if IpAddress.objects.filter(address=_ip).exists():
ip = IpAddress.objects.get(address=_ip)
else:
ip = IpAddress(address=_ip)
if 'cdn' in json_st:
ip.is_cdn = json_st['cdn']
ip.save()
subdomain.ip_addresses.add(ip)
                # TODO: decide whether to ignore 404 or 5xx responses here
alive_file.write(json_st['url'] + '\n')
subdomain.save()
endpoint.save()
except Exception as exception:
logging.error(exception)
alive_file.close()
if notification and notification[0].send_scan_status_notif:
alive_count = Subdomain.objects.filter(
scan_history__id=task.id).values('name').distinct().filter(
http_status__exact=200).count()
send_notification('HTTP Crawler for target {} has been completed.\n\n {} subdomains were alive (http status 200).'.format(domain.name, alive_count))
def grab_screenshot(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for taking screenshots
'''
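    # Assumption for the parsing below: EyeWitness writes a Requests.csv where
    # (0-indexed) column 2 is the probed host, column 3 the capture status
    # ('Successful') and column 4 the screenshot path -- adjust if the CSV layout differs.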
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering screenshots for {}'.format(domain.name))
output_screenshots_path = results_dir + '/screenshots'
result_csv_path = results_dir + '/screenshots/Requests.csv'
alive_subdomains_path = results_dir + '/alive.txt'
eyewitness_command = 'python3 /usr/src/github/EyeWitness/Python/EyeWitness.py'
eyewitness_command += ' -f {} -d {} --no-prompt'.format(
alive_subdomains_path,
output_screenshots_path
)
if EYEWITNESS in yaml_configuration \
and TIMEOUT in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][TIMEOUT] > 0:
eyewitness_command += ' --timeout {}'.format(
yaml_configuration[EYEWITNESS][TIMEOUT]
)
if EYEWITNESS in yaml_configuration \
and THREADS in yaml_configuration[EYEWITNESS] \
and yaml_configuration[EYEWITNESS][THREADS] > 0:
eyewitness_command += ' --threads {}'.format(
yaml_configuration[EYEWITNESS][THREADS]
)
logger.info(eyewitness_command)
os.system(eyewitness_command)
if os.path.isfile(result_csv_path):
logger.info('Gathering Eyewitness results')
with open(result_csv_path, 'r') as file:
reader = csv.reader(file)
for row in reader:
if row[3] == 'Successful' \
and Subdomain.objects.filter(
scan_history__id=task.id).filter(name=row[2]).exists():
subdomain = Subdomain.objects.get(
scan_history__id=task.id,
name=row[2]
)
subdomain.screenshot_path = row[4].replace(
'/usr/src/scan_results/',
''
)
subdomain.save()
# remove all db, html extra files in screenshot results
os.system('rm -rf {0}/*.csv {0}/*.db {0}/*.js {0}/*.html {0}/*.css'.format(
output_screenshots_path,
))
os.system('rm -rf {0}/source'.format(
output_screenshots_path,
))
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has finished gathering screenshots for {}'.format(domain.name))
def port_scanning(task, domain, yaml_configuration, results_dir):
'''
This function is responsible for running the port scan
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Port Scan initiated for {}'.format(domain.name))
subdomain_scan_results_file = results_dir + '/sorted_subdomain_collection.txt'
port_results_file = results_dir + '/ports.json'
# check the yaml_configuration and choose the ports to be scanned
scan_ports = '-' # default port scan everything
if PORTS in yaml_configuration[PORT_SCAN]:
# TODO: legacy code, remove top-100 in future versions
all_ports = yaml_configuration[PORT_SCAN][PORTS]
if 'full' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, '-')
elif 'top-100' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 100'.format(
subdomain_scan_results_file, port_results_file)
elif 'top-1000' in all_ports:
naabu_command = 'cat {} | naabu -json -o {} -top-ports 1000'.format(
subdomain_scan_results_file, port_results_file)
else:
scan_ports = ','.join(
str(port) for port in all_ports)
naabu_command = 'cat {} | naabu -json -o {} -p {}'.format(
subdomain_scan_results_file, port_results_file, scan_ports)
# check for exclude ports
if EXCLUDE_PORTS in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][EXCLUDE_PORTS]:
exclude_ports = ','.join(
str(port) for port in yaml_configuration['port_scan']['exclude_ports'])
naabu_command = naabu_command + \
' -exclude-ports {}'.format(exclude_ports)
if NAABU_RATE in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][NAABU_RATE] > 0:
naabu_command = naabu_command + \
' -rate {}'.format(
yaml_configuration[PORT_SCAN][NAABU_RATE])
if USE_NAABU_CONFIG in yaml_configuration[PORT_SCAN] and yaml_configuration[PORT_SCAN][USE_NAABU_CONFIG]:
naabu_command += ' -config /root/.config/naabu/naabu.conf'
# run naabu
os.system(naabu_command)
# writing port results
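    # Illustrative naabu -json line expected by the loop below (shape may vary by version):
    # {"ip": "93.184.216.34", "port": 8080}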
try:
port_json_result = open(port_results_file, 'r')
lines = port_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
port_number = json_st['port']
ip_address = json_st['ip']
# see if port already exists
if Port.objects.filter(number__exact=port_number).exists():
port = Port.objects.get(number=port_number)
else:
port = Port()
port.number = port_number
if port_number in UNCOMMON_WEB_PORTS:
port.is_uncommon = True
port_detail = whatportis.get_ports(str(port_number))
if len(port_detail):
port.service_name = port_detail[0].name
port.description = port_detail[0].description
port.save()
if IpAddress.objects.filter(address=json_st['ip']).exists():
ip = IpAddress.objects.get(address=json_st['ip'])
ip.ports.add(port)
ip.save()
except BaseException as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
port_count = Port.objects.filter(
ports__in=IpAddress.objects.filter(
ip_addresses__in=Subdomain.objects.filter(
scan_history__id=task.id))).distinct().count()
send_notification('reNgine has finished Port Scanning on {} and has identified {} ports.'.format(domain.name, port_count))
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/ports.json')
def check_waf():
'''
This function will check for the WAF being used in subdomains using wafw00f
'''
pass
def directory_brute(task, domain, yaml_configuration, results_dir, activity_id):
'''
This function is responsible for performing directory scan
'''
    # scan directories for all the alive subdomains with http status > 200
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been initiated for {}.'.format(domain.name))
alive_subdomains = Subdomain.objects.filter(
scan_history__id=task.id).exclude(http_url__isnull=True)
dirs_results = results_dir + '/dirs.json'
# check the yaml settings
if EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXTENSIONS])
else:
extensions = 'php,git,yaml,conf,db,mysql,bak,txt'
# Threads
if THREADS in yaml_configuration[DIR_FILE_SEARCH] \
and yaml_configuration[DIR_FILE_SEARCH][THREADS] > 0:
threads = yaml_configuration[DIR_FILE_SEARCH][THREADS]
else:
threads = 10
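    # The loop below shells out to dirsearch once per alive subdomain; with default
    # settings the assembled command looks roughly like (illustrative values):
    #   python3 /usr/src/github/dirsearch/dirsearch.py -u https://app.example.com \
    #     -w /usr/src/github/dirsearch/db/dicc.txt --format json -o <results_dir>/dirs.json \
    #     -e php,git,yaml,conf,db,mysql,bak,txt -t 10 --random-agent --follow-redirects \
    #     --exclude-status 403,401,404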
for subdomain in alive_subdomains:
# delete any existing dirs.json
if os.path.isfile(dirs_results):
os.system('rm -rf {}'.format(dirs_results))
dirsearch_command = 'python3 /usr/src/github/dirsearch/dirsearch.py'
dirsearch_command += ' -u {}'.format(subdomain.http_url)
if (WORDLIST not in yaml_configuration[DIR_FILE_SEARCH] or
not yaml_configuration[DIR_FILE_SEARCH][WORDLIST] or
'default' in yaml_configuration[DIR_FILE_SEARCH][WORDLIST]):
wordlist_location = '/usr/src/github/dirsearch/db/dicc.txt'
else:
wordlist_location = '/usr/src/wordlist/' + \
yaml_configuration[DIR_FILE_SEARCH][WORDLIST] + '.txt'
dirsearch_command += ' -w {}'.format(wordlist_location)
dirsearch_command += ' --format json -o {}'.format(dirs_results)
dirsearch_command += ' -e {}'.format(extensions)
dirsearch_command += ' -t {}'.format(threads)
dirsearch_command += ' --random-agent --follow-redirects --exclude-status 403,401,404'
if EXCLUDE_EXTENSIONS in yaml_configuration[DIR_FILE_SEARCH]:
exclude_extensions = ','.join(
str(ext) for ext in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_EXTENSIONS])
dirsearch_command += ' -X {}'.format(exclude_extensions)
if EXCLUDE_TEXT in yaml_configuration[DIR_FILE_SEARCH]:
exclude_text = ','.join(
str(text) for text in yaml_configuration[DIR_FILE_SEARCH][EXCLUDE_TEXT])
dirsearch_command += ' -exclude-texts {}'.format(exclude_text)
# check if recursive strategy is set to on
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
if RECURSIVE_LEVEL in yaml_configuration[DIR_FILE_SEARCH]:
dirsearch_command += ' --recursion-depth {}'.format(yaml_configuration[DIR_FILE_SEARCH][RECURSIVE_LEVEL])
# proxy
proxy = get_random_proxy()
if proxy:
dirsearch_command += " --proxy '{}'".format(proxy)
print(dirsearch_command)
os.system(dirsearch_command)
try:
if os.path.isfile(dirs_results):
with open(dirs_results, "r") as json_file:
json_string = json_file.read()
subdomain = Subdomain.objects.get(
scan_history__id=task.id, http_url=subdomain.http_url)
subdomain.directory_json = json_string
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
send_notification('Directory Bruteforce has been completed for {}.'.format(domain.name))
def fetch_endpoints(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
'''
    This function is responsible for fetching all the urls associated with the target
    and running an HTTP probe.
    It first runs gau to gather all urls from the wayback machine, then uses hakrawler to identify more urls
'''
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine is currently gathering endpoints for {}.'.format(domain.name))
# check yaml settings
if ALL in yaml_configuration[FETCH_URL][USES_TOOLS]:
tools = 'gauplus hakrawler waybackurls gospider'
else:
tools = ' '.join(
str(tool) for tool in yaml_configuration[FETCH_URL][USES_TOOLS])
if INTENSITY in yaml_configuration[FETCH_URL]:
scan_type = yaml_configuration[FETCH_URL][INTENSITY]
else:
scan_type = 'normal'
domain_regex = "\'https?://([a-z0-9]+[.])*{}.*\'".format(domain.name)
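    # Illustrative: for a domain named example.com the line above produces the literal
    # argument 'https?://([a-z0-9]+[.])*example.com.*', which is passed to get_urls.sh
    # below (presumably for scope filtering).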
if 'deep' in scan_type:
# performs deep url gathering for all the subdomains present -
# RECOMMENDED
logger.info('Deep URLS Fetch')
os.system(settings.TOOL_LOCATION + 'get_urls.sh %s %s %s %s %s' %
("None", results_dir, scan_type, domain_regex, tools))
else:
# perform url gathering only for main domain - USE only for quick scan
logger.info('Non Deep URLS Fetch')
os.system(
settings.TOOL_LOCATION +
'get_urls.sh %s %s %s %s %s' % (
domain.name,
results_dir,
scan_type,
domain_regex,
tools
))
if IGNORE_FILE_EXTENSION in yaml_configuration[FETCH_URL]:
ignore_extension = '|'.join(
yaml_configuration[FETCH_URL][IGNORE_FILE_EXTENSION])
logger.info('Ignore extensions' + ignore_extension)
os.system(
'cat {0}/all_urls.txt | grep -Eiv "\\.({1}).*" > {0}/temp_urls.txt'.format(
results_dir, ignore_extension))
os.system(
'rm {0}/all_urls.txt && mv {0}/temp_urls.txt {0}/all_urls.txt'.format(results_dir))
'''
Store all the endpoints and then run the httpx
'''
try:
endpoint_final_url = results_dir + '/all_urls.txt'
if os.path.isfile(endpoint_final_url):
with open(endpoint_final_url) as endpoint_list:
for url in endpoint_list:
http_url = url.rstrip('\n')
if not EndPoint.objects.filter(scan_history=task, http_url=http_url).exists():
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
'''
                            gau or gospider can gather interesting endpoints which,
                            when parsed, can give subdomains that were not found by the
                            subdomain scan, so store them here
'''
logger.error(
'Subdomain {} not found, adding...'.format(_subdomain))
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain = save_subdomain(subdomain_dict)
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'subdomain': subdomain,
'http_url': http_url,
})
save_endpoint(endpoint_dict)
except Exception as e:
logger.error(e)
if notification and notification[0].send_scan_output_file:
send_files_to_discord(results_dir + '/all_urls.txt')
'''
TODO:
    Gospider & waybackurls accumulate a lot of urls, which is good, but nuclei
    takes forever to scan even a simple website, so we do http probing
    and filter out HTTP status 404; this way we can reduce the number of non-existent
    URLs
'''
logger.info('HTTP Probing on collected endpoints')
httpx_command = 'httpx -l {0}/all_urls.txt -status-code -content-length -ip -cdn -title -tech-detect -json -follow-redirects -random-agent -o {0}/final_httpx_urls.json'.format(results_dir)
proxy = get_random_proxy()
if proxy:
httpx_command += " --http-proxy '{}'".format(proxy)
os.system(httpx_command)
url_results_file = results_dir + '/final_httpx_urls.json'
try:
urls_json_result = open(url_results_file, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
http_url = json_st['url']
_subdomain = get_subdomain_from_url(http_url)
if Subdomain.objects.filter(
scan_history=task).filter(
name=_subdomain).exists():
subdomain_obj = Subdomain.objects.get(
scan_history=task, name=_subdomain)
else:
subdomain_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'name': _subdomain,
})
subdomain_obj = save_subdomain(subdomain_dict)
if EndPoint.objects.filter(
scan_history=task).filter(
http_url=http_url).exists():
endpoint = EndPoint.objects.get(
scan_history=task, http_url=http_url)
else:
endpoint = EndPoint()
endpoint_dict = DottedDict({
'scan_history': task,
'target_domain': domain,
'http_url': http_url,
'subdomain': subdomain_obj
})
endpoint = save_endpoint(endpoint_dict)
if 'title' in json_st:
endpoint.page_title = json_st['title']
if 'webserver' in json_st:
endpoint.webserver = json_st['webserver']
if 'content-length' in json_st:
endpoint.content_length = json_st['content-length']
if 'content-type' in json_st:
endpoint.content_type = json_st['content-type']
if 'status-code' in json_st:
endpoint.http_status = json_st['status-code']
if 'response-time' in json_st:
response_time = float(''.join(ch for ch in json_st['response-time'] if not ch.isalpha()))
if json_st['response-time'][-2:] == 'ms':
response_time = response_time / 1000
endpoint.response_time = response_time
endpoint.save()
if 'technologies' in json_st:
for _tech in json_st['technologies']:
if Technology.objects.filter(name=_tech).exists():
tech = Technology.objects.get(name=_tech)
else:
tech = Technology(name=_tech)
tech.save()
endpoint.technologies.add(tech)
# get subdomain object
subdomain = Subdomain.objects.get(scan_history=task, name=_subdomain)
subdomain.technologies.add(tech)
subdomain.save()
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
endpoint_count = EndPoint.objects.filter(
scan_history__id=task.id).values('http_url').distinct().count()
endpoint_alive_count = EndPoint.objects.filter(
scan_history__id=task.id, http_status__exact=200).values('http_url').distinct().count()
send_notification('reNgine has finished gathering endpoints for {} and has discovered *{}* unique endpoints.\n\n{} of those endpoints reported HTTP status 200.'.format(
domain.name,
endpoint_count,
endpoint_alive_count
))
# once endpoint is saved, run gf patterns TODO: run threads
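    # gf applies grep-like pattern sets from tomnomnom/gf; for a configured pattern such
    # as 'xss' (illustrative) the command built below is roughly:
    #   cat <results_dir>/all_urls.txt | gf xss >> <results_dir>/gf_patterns_xss.txt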
if GF_PATTERNS in yaml_configuration[FETCH_URL]:
for pattern in yaml_configuration[FETCH_URL][GF_PATTERNS]:
logger.info('Running GF for {}'.format(pattern))
gf_output_file_path = '{0}/gf_patterns_{1}.txt'.format(
results_dir, pattern)
gf_command = 'cat {0}/all_urls.txt | gf {1} >> {2}'.format(
results_dir, pattern, gf_output_file_path)
os.system(gf_command)
if os.path.exists(gf_output_file_path):
with open(gf_output_file_path) as gf_output:
for line in gf_output:
url = line.rstrip('\n')
try:
endpoint = EndPoint.objects.get(
scan_history=task, http_url=url)
earlier_pattern = endpoint.matched_gf_patterns
new_pattern = earlier_pattern + ',' + pattern if earlier_pattern else pattern
endpoint.matched_gf_patterns = new_pattern
except Exception as e:
# add the url in db
logger.error(e)
logger.info('Adding URL' + url)
endpoint = EndPoint()
endpoint.http_url = url
endpoint.target_domain = domain
endpoint.scan_history = task
try:
_subdomain = Subdomain.objects.get(
scan_history=task, name=get_subdomain_from_url(url))
endpoint.subdomain = _subdomain
except Exception as e:
continue
endpoint.matched_gf_patterns = pattern
finally:
endpoint.save()
def vulnerability_scan(
task,
domain,
yaml_configuration,
results_dir,
activity_id):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('Vulnerability scan has been initiated for {}.'.format(domain.name))
'''
This function will run nuclei as a vulnerability scanner
----
    unfurl the urls to keep only domain and path; these will be sent to the vuln scan
ignore certain file extensions
Thanks: https://github.com/six2dez/reconftw
'''
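    # Illustrative: 'unfurl -u format %s://%d%p' keeps scheme, domain and path only, so
    # https://app.example.com/login?next=/home collapses to https://app.example.com/login,
    # deduplicating query-string variants before they are fed to nuclei.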
urls_path = '/alive.txt'
if task.scan_type.fetch_url:
os.system('cat {0}/all_urls.txt | grep -Eiv "\\.(eot|jpg|jpeg|gif|css|tif|tiff|png|ttf|otf|woff|woff2|ico|pdf|svg|txt|js|doc|docx)$" | unfurl -u format %s://%d%p >> {0}/unfurl_urls.txt'.format(results_dir))
os.system(
'sort -u {0}/unfurl_urls.txt -o {0}/unfurl_urls.txt'.format(results_dir))
urls_path = '/unfurl_urls.txt'
vulnerability_result_path = results_dir + '/vulnerability.json'
vulnerability_scan_input_file = results_dir + urls_path
nuclei_command = 'nuclei -json -l {} -o {}'.format(
vulnerability_scan_input_file, vulnerability_result_path)
# check nuclei config
if USE_NUCLEI_CONFIG in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[VULNERABILITY_SCAN][USE_NUCLEI_CONFIG]:
nuclei_command += ' -config /root/.config/nuclei/config.yaml'
'''
Nuclei Templates
    Either a custom template or a default template has to be supplied; if neither is
    supplied, all templates including custom templates are used
'''
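    # Illustrative: assuming NUCLEI_TEMPLATES_PATH is the installed template directory,
    # a config of nuclei_templates: [cves/, exposures/] would extend the command with
    # '-t <templates_path>cves/ -t <templates_path>exposures/'; custom templates get a
    # '.yaml' suffix appended instead.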
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[
VULNERABILITY_SCAN] or NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# check yaml settings for templates
if NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
if ALL in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]:
template = NUCLEI_TEMPLATES_PATH
else:
_template = ','.join([NUCLEI_TEMPLATES_PATH + str(element)
for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
if CUSTOM_NUCLEI_TEMPLATE in yaml_configuration[VULNERABILITY_SCAN]:
# add .yaml to the custom template extensions
_template = ','.join(
[str(element) + '.yaml' for element in yaml_configuration[VULNERABILITY_SCAN][CUSTOM_NUCLEI_TEMPLATE]])
template = _template.replace(',', ' -t ')
# Update nuclei command with templates
nuclei_command = nuclei_command + ' -t ' + template
else:
nuclei_command = nuclei_command + ' -t /root/nuclei-templates'
# check yaml settings for concurrency
if NUCLEI_CONCURRENCY in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][NUCLEI_CONCURRENCY] > 0:
concurrency = yaml_configuration[VULNERABILITY_SCAN][NUCLEI_CONCURRENCY]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -c ' + str(concurrency)
if RATE_LIMIT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RATE_LIMIT] > 0:
rate_limit = yaml_configuration[VULNERABILITY_SCAN][RATE_LIMIT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -rl ' + str(rate_limit)
if TIMEOUT in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][TIMEOUT] > 0:
timeout = yaml_configuration[VULNERABILITY_SCAN][TIMEOUT]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -timeout ' + str(timeout)
if RETRIES in yaml_configuration[VULNERABILITY_SCAN] and yaml_configuration[
VULNERABILITY_SCAN][RETRIES] > 0:
retries = yaml_configuration[VULNERABILITY_SCAN][RETRIES]
# Update nuclei command with concurrent
nuclei_command = nuclei_command + ' -retries ' + str(retries)
# for severity
if NUCLEI_SEVERITY in yaml_configuration[VULNERABILITY_SCAN] and ALL not in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]:
_severity = ','.join(
[str(element) for element in yaml_configuration[VULNERABILITY_SCAN][NUCLEI_SEVERITY]])
severity = _severity.replace(" ", "")
else:
severity = "critical, high, medium, low, info"
# update nuclei templates before running scan
os.system('nuclei -update-templates')
for _severity in severity.split(","):
# delete any existing vulnerability.json file
if os.path.isfile(vulnerability_result_path):
os.system('rm {}'.format(vulnerability_result_path))
# run nuclei
final_nuclei_command = nuclei_command + ' -severity ' + _severity
proxy = get_random_proxy()
if proxy:
final_nuclei_command += " --proxy-url '{}'".format(proxy)
logger.info(final_nuclei_command)
os.system(final_nuclei_command)
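        # Illustrative nuclei JSON line (field names differ across releases):
        # {"host": "https://app.example.com", "matched-at": "https://app.example.com/path",
        #  "templateID": "tech-detect", "info": {"name": "...", "severity": "medium"}}
        # -- older nuclei builds emitted 'matched' instead of 'matched-at', which is why
        # both keys are checked below.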
try:
if os.path.isfile(vulnerability_result_path):
urls_json_result = open(vulnerability_result_path, 'r')
lines = urls_json_result.readlines()
for line in lines:
json_st = json.loads(line.strip())
host = json_st['host']
_subdomain = get_subdomain_from_url(host)
try:
subdomain = Subdomain.objects.get(
name=_subdomain, scan_history=task)
vulnerability = Vulnerability()
vulnerability.subdomain = subdomain
vulnerability.scan_history = task
vulnerability.target_domain = domain
try:
endpoint = EndPoint.objects.get(
scan_history=task, target_domain=domain, http_url=host)
vulnerability.endpoint = endpoint
except Exception as exception:
logger.error(exception)
if 'name' in json_st['info']:
vulnerability.name = json_st['info']['name']
if 'severity' in json_st['info']:
if json_st['info']['severity'] == 'info':
severity = 0
elif json_st['info']['severity'] == 'low':
severity = 1
elif json_st['info']['severity'] == 'medium':
severity = 2
elif json_st['info']['severity'] == 'high':
severity = 3
elif json_st['info']['severity'] == 'critical':
severity = 4
else:
severity = 0
else:
severity = 0
vulnerability.severity = severity
if 'tags' in json_st['info']:
vulnerability.tags = json_st['info']['tags']
if 'description' in json_st['info']:
vulnerability.description = json_st['info']['description']
if 'reference' in json_st['info']:
vulnerability.reference = json_st['info']['reference']
if 'matched' in json_st: # TODO remove in rengine 1.1. 'matched' isn't used in nuclei 2.5.3
vulnerability.http_url = json_st['matched']
if 'matched-at' in json_st:
vulnerability.http_url = json_st['matched-at']
if 'templateID' in json_st:
vulnerability.template_used = json_st['templateID']
if 'description' in json_st:
vulnerability.description = json_st['description']
if 'matcher_name' in json_st:
vulnerability.matcher_name = json_st['matcher_name']
if 'extracted_results' in json_st:
vulnerability.extracted_results = json_st['extracted_results']
vulnerability.discovered_date = timezone.now()
vulnerability.open_status = True
vulnerability.save()
# send notification for all vulnerabilities except info
if json_st['info']['severity'] != "info" and notification and notification[0].send_vuln_notif:
message = "*Alert: Vulnerability Identified*"
message += "\n\n"
message += "A *{}* severity vulnerability has been identified.".format(json_st['info']['severity'])
message += "\nVulnerability Name: {}".format(json_st['info']['name'])
message += "\nVulnerable URL: {}".format(json_st['host'])
send_notification(message)
# send report to hackerone
if Hackerone.objects.all().exists() and json_st['info']['severity'] != 'info' and json_st['info']['severity'] \
!= 'low' and vulnerability.target_domain.h1_team_handle:
hackerone = Hackerone.objects.all()[0]
if hackerone.send_critical and json_st['info']['severity'] == 'critical':
send_hackerone_report(vulnerability.id)
elif hackerone.send_high and json_st['info']['severity'] == 'high':
send_hackerone_report(vulnerability.id)
elif hackerone.send_medium and json_st['info']['severity'] == 'medium':
send_hackerone_report(vulnerability.id)
except ObjectDoesNotExist:
logger.error('Object not found')
continue
except Exception as exception:
logging.error(exception)
update_last_activity(activity_id, 0)
if notification and notification[0].send_scan_status_notif:
info_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=0).count()
low_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=1).count()
medium_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=2).count()
high_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=3).count()
critical_count = Vulnerability.objects.filter(
scan_history__id=task.id, severity=4).count()
vulnerability_count = info_count + low_count + medium_count + high_count + critical_count
message = 'Vulnerability scan has been completed for {} and discovered {} vulnerabilities.'.format(
domain.name,
vulnerability_count
)
message += '\n\n*Vulnerability Stats:*'
message += '\nCritical: {}'.format(critical_count)
message += '\nHigh: {}'.format(high_count)
message += '\nMedium: {}'.format(medium_count)
message += '\nLow: {}'.format(low_count)
message += '\nInfo: {}'.format(info_count)
send_notification(message)
def scan_failed(task):
task.scan_status = 0
task.stop_scan_date = timezone.now()
task.save()
def create_scan_activity(task, message, status):
scan_activity = ScanActivity()
scan_activity.scan_of = task
scan_activity.title = message
scan_activity.time = timezone.now()
scan_activity.status = status
scan_activity.save()
return scan_activity.id
def update_last_activity(id, activity_status):
ScanActivity.objects.filter(
id=id).update(
status=activity_status,
time=timezone.now())
def delete_scan_data(results_dir):
# remove all txt,html,json files
os.system('find {} -name "*.txt" -type f -delete'.format(results_dir))
os.system('find {} -name "*.html" -type f -delete'.format(results_dir))
os.system('find {} -name "*.json" -type f -delete'.format(results_dir))
def save_subdomain(subdomain_dict):
subdomain = Subdomain()
subdomain.discovered_date = timezone.now()
subdomain.target_domain = subdomain_dict.get('target_domain')
subdomain.scan_history = subdomain_dict.get('scan_history')
subdomain.name = subdomain_dict.get('name')
subdomain.http_url = subdomain_dict.get('http_url')
subdomain.screenshot_path = subdomain_dict.get('screenshot_path')
subdomain.http_header_path = subdomain_dict.get('http_header_path')
subdomain.cname = subdomain_dict.get('cname')
subdomain.is_cdn = subdomain_dict.get('is_cdn')
subdomain.content_type = subdomain_dict.get('content_type')
subdomain.webserver = subdomain_dict.get('webserver')
subdomain.page_title = subdomain_dict.get('page_title')
subdomain.is_imported_subdomain = subdomain_dict.get(
'is_imported_subdomain') if 'is_imported_subdomain' in subdomain_dict else False
if 'http_status' in subdomain_dict:
subdomain.http_status = subdomain_dict.get('http_status')
if 'response_time' in subdomain_dict:
subdomain.response_time = subdomain_dict.get('response_time')
if 'content_length' in subdomain_dict:
subdomain.content_length = subdomain_dict.get('content_length')
subdomain.save()
return subdomain
def save_endpoint(endpoint_dict):
endpoint = EndPoint()
endpoint.discovered_date = timezone.now()
endpoint.scan_history = endpoint_dict.get('scan_history')
endpoint.target_domain = endpoint_dict.get('target_domain') if 'target_domain' in endpoint_dict else None
endpoint.subdomain = endpoint_dict.get('subdomain') if 'target_domain' in endpoint_dict else None
endpoint.http_url = endpoint_dict.get('http_url')
endpoint.page_title = endpoint_dict.get('page_title') if 'page_title' in endpoint_dict else None
endpoint.content_type = endpoint_dict.get('content_type') if 'content_type' in endpoint_dict else None
endpoint.webserver = endpoint_dict.get('webserver') if 'webserver' in endpoint_dict else None
endpoint.response_time = endpoint_dict.get('response_time') if 'response_time' in endpoint_dict else 0
endpoint.http_status = endpoint_dict.get('http_status') if 'http_status' in endpoint_dict else 0
endpoint.content_length = endpoint_dict.get('content_length') if 'content_length' in endpoint_dict else 0
endpoint.is_default = endpoint_dict.get('is_default') if 'is_default' in endpoint_dict else False
endpoint.save()
return endpoint
def perform_osint(task, domain, yaml_configuration, results_dir):
notification = Notification.objects.all()
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has initiated OSINT on target {}'.format(domain.name))
if 'discover' in yaml_configuration[OSINT]:
osint_discovery(task, domain, yaml_configuration, results_dir)
if 'dork' in yaml_configuration[OSINT]:
dorking(task, yaml_configuration)
if notification and notification[0].send_scan_status_notif:
send_notification('reNgine has completed performing OSINT on target {}'.format(domain.name))
def osint_discovery(task, domain, yaml_configuration, results_dir):
if ALL in yaml_configuration[OSINT][OSINT_DISCOVER]:
osint_lookup = 'emails metainfo employees'
else:
osint_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DISCOVER])
if 'metainfo' in osint_lookup:
if INTENSITY in yaml_configuration[OSINT]:
osint_intensity = yaml_configuration[OSINT][INTENSITY]
else:
osint_intensity = 'normal'
if OSINT_DOCUMENTS_LIMIT in yaml_configuration[OSINT]:
documents_limit = yaml_configuration[OSINT][OSINT_DOCUMENTS_LIMIT]
else:
documents_limit = 50
if osint_intensity == 'normal':
meta_dict = DottedDict({
'osint_target': domain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
elif osint_intensity == 'deep':
# get all subdomains in scan_id
subdomains = Subdomain.objects.filter(scan_history=task)
for subdomain in subdomains:
meta_dict = DottedDict({
'osint_target': subdomain.name,
'domain': domain,
'scan_id': task,
'documents_limit': documents_limit
})
get_and_save_meta_info(meta_dict)
if 'emails' in osint_lookup:
get_and_save_emails(task, results_dir)
get_and_save_leaked_credentials(task, results_dir)
if 'employees' in osint_lookup:
get_and_save_employees(task, results_dir)
def dorking(scan_history, yaml_configuration):
# Some dork sources: https://github.com/six2dez/degoogle_hunter/blob/master/degoogle_hunter.sh
# look in stackoverflow
if ALL in yaml_configuration[OSINT][OSINT_DORK]:
dork_lookup = 'stackoverflow, 3rdparty, social_media, project_management, code_sharing, config_files, jenkins, cloud_buckets, php_error, exposed_documents, struts_rce, db_files, traefik, git_exposed'
else:
dork_lookup = ' '.join(
str(lookup) for lookup in yaml_configuration[OSINT][OSINT_DORK])
if 'stackoverflow' in dork_lookup:
dork = 'site:stackoverflow.com'
dork_type = 'stackoverflow'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=False
)
if '3rdparty' in dork_lookup:
        # look in 3rd party sites
dork_type = '3rdparty'
lookup_websites = [
'gitter.im',
'papaly.com',
'productforums.google.com',
'coggle.it',
'replt.it',
'ycombinator.com',
'libraries.io',
'npm.runkit.com',
'npmjs.com',
'scribd.com',
'gitter.im'
]
dork = ''
for website in lookup_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'social_media' in dork_lookup:
dork_type = 'Social Media'
social_websites = [
'tiktok.com',
'facebook.com',
'twitter.com',
'youtube.com',
'pinterest.com',
'tumblr.com',
'reddit.com'
]
dork = ''
for website in social_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'project_management' in dork_lookup:
dork_type = 'Project Management'
project_websites = [
'trello.com',
'*.atlassian.net'
]
dork = ''
for website in project_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'code_sharing' in dork_lookup:
dork_type = 'Code Sharing Sites'
code_websites = [
'github.com',
'gitlab.com',
'bitbucket.org'
]
dork = ''
for website in code_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'config_files' in dork_lookup:
dork_type = 'Config Files'
config_file_ext = [
'env',
'xml',
'conf',
'cnf',
'inf',
'rdp',
'ora',
'txt',
'cfg',
'ini'
]
dork = ''
for extension in config_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'jenkins' in dork_lookup:
dork_type = 'Jenkins'
dork = 'intitle:\"Dashboard [Jenkins]\"'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'wordpress_files' in dork_lookup:
dork_type = 'Wordpress Files'
inurl_lookup = [
'wp-content',
'wp-includes'
]
dork = ''
for lookup in inurl_lookup:
dork = dork + ' | ' + 'inurl:' + lookup
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'cloud_buckets' in dork_lookup:
dork_type = 'Cloud Buckets'
cloud_websites = [
'.s3.amazonaws.com',
'storage.googleapis.com',
'amazonaws.com'
]
dork = ''
for website in cloud_websites:
dork = dork + ' | ' + 'site:' + website
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=False
)
if 'php_error' in dork_lookup:
dork_type = 'PHP Error'
error_words = [
'\"PHP Parse error\"',
'\"PHP Warning\"',
'\"PHP Error\"'
]
dork = ''
for word in error_words:
dork = dork + ' | ' + word
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'exposed_documents' in dork_lookup:
dork_type = 'Exposed Documents'
docs_file_ext = [
'doc',
'docx',
'odt',
'pdf',
'rtf',
'sxw',
'psw',
'ppt',
'pptx',
'pps',
'csv'
]
dork = ''
for extension in docs_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'struts_rce' in dork_lookup:
dork_type = 'Apache Struts RCE'
struts_file_ext = [
'action',
'struts',
'do'
]
dork = ''
for extension in struts_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'db_files' in dork_lookup:
dork_type = 'Database Files'
db_file_ext = [
'sql',
'db',
'dbf',
'mdb'
]
dork = ''
for extension in db_file_ext:
dork = dork + ' | ' + 'ext:' + extension
get_and_save_dork_results(
dork[3:],
dork_type,
scan_history,
in_target=True
)
if 'traefik' in dork_lookup:
dork = 'intitle:traefik inurl:8080/dashboard'
dork_type = 'Traefik'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
if 'git_exposed' in dork_lookup:
dork = 'inurl:\"/.git\"'
dork_type = '.git Exposed'
get_and_save_dork_results(
dork,
dork_type,
scan_history,
in_target=True
)
def get_and_save_dork_results(dork, type, scan_history, in_target=False):
degoogle_obj = degoogle.dg()
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
if in_target:
query = dork + " site:" + scan_history.domain.name
else:
query = dork + " \"{}\"".format(scan_history.domain.name)
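    # Illustrative: with dork 'intitle:"Dashboard [Jenkins]"' and a target of example.com,
    # in_target=True yields  intitle:"Dashboard [Jenkins]" site:example.com
    # while in_target=False yields  intitle:"Dashboard [Jenkins]" "example.com"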
logger.info(query)
degoogle_obj.query = query
results = degoogle_obj.run()
logger.info(results)
for result in results:
dork, _ = Dork.objects.get_or_create(
type=type,
description=result['desc'],
url=result['url']
)
scan_history.dorks.add(dork)
def get_and_save_employees(scan_history, results_dir):
theHarvester_location = '/usr/src/github/theHarvester'
# update proxies.yaml
if Proxy.objects.all().exists():
proxy = Proxy.objects.all()[0]
if proxy.use_proxy:
proxy_list = proxy.proxies.splitlines()
yaml_data = {'http' : proxy_list}
with open(theHarvester_location + '/proxies.yaml', 'w') as file:
documents = yaml.dump(yaml_data, file)
os.system('cd {} && python3 theHarvester.py -d {} -b all -f {}/theHarvester.html'.format(
theHarvester_location,
scan_history.domain.name,
results_dir
))
file_location = results_dir + '/theHarvester.html'
print(file_location)
# delete proxy environ var
if os.environ.get(('https_proxy')):
del os.environ['https_proxy']
if os.environ.get(('HTTPS_PROXY')):
del os.environ['HTTPS_PROXY']
if os.path.isfile(file_location):
logger.info('Parsing theHarvester results')
options = FirefoxOptions()
options.add_argument("--headless")
driver = webdriver.Firefox(options=options)
driver.get('file://'+file_location)
tabledata = driver.execute_script('return tabledata')
# save email addresses and linkedin employees
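        # Assumed shape of theHarvester's tabledata entries (illustrative):
        # {'record': 'email', 'result': 'jane@example.com'} or
        # {'record': 'people', 'result': 'Jane Doe - Engineer'} -- the name/designation
        # split below relies on that 'name - designation' convention.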
for data in tabledata:
if data['record'] == 'email':
_email = data['result']
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
elif data['record'] == 'people':
_employee = data['result']
split_val = _employee.split('-')
name = split_val[0]
if len(split_val) == 2:
designation = split_val[1]
else:
designation = ""
employee, _ = Employee.objects.get_or_create(name=name, designation=designation)
scan_history.employees.add(employee)
driver.quit()
print(tabledata)
def get_and_save_emails(scan_history, results_dir):
leak_target_path = '{}/creds_target.txt'.format(results_dir)
# get email address
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
emails = []
try:
logger.info('OSINT: Getting emails from Google')
email_from_google = get_emails_from_google(scan_history.domain.name)
logger.info('OSINT: Getting emails from Bing')
email_from_bing = get_emails_from_bing(scan_history.domain.name)
logger.info('OSINT: Getting emails from Baidu')
email_from_baidu = get_emails_from_baidu(scan_history.domain.name)
emails = list(set(email_from_google + email_from_bing + email_from_baidu))
logger.info(emails)
except Exception as e:
logger.error(e)
leak_target_file = open(leak_target_path, 'w')
for _email in emails:
email, _ = Email.objects.get_or_create(address=_email)
scan_history.emails.add(email)
leak_target_file.write('{}\n'.format(_email))
# fill leak_target_file with possible email address
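    # The %-patterns below are pwndb-style wildcards (% matches any string), so the file
    # covers guesses like anything@example.com, anything@sub.example.com and
    # first.last@example.com in addition to the emails found above.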
leak_target_file.write('%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%.%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@{}\n'.format(scan_history.domain.name))
leak_target_file.write('%_%@%.{}\n'.format(scan_history.domain.name))
leak_target_file.close()
def get_and_save_leaked_credentials(scan_history, results_dir):
logger.info('OSINT: Getting leaked credentials...')
leak_target_file = '{}/creds_target.txt'.format(results_dir)
leak_output_file = '{}/pwndb.json'.format(results_dir)
pwndb_command = 'python3 /usr/src/github/pwndb/pwndb.py --proxy tor:9150 --output json --list {}'.format(
leak_target_file
)
try:
pwndb_output = subprocess.getoutput(pwndb_command)
creds = json.loads(pwndb_output)
for cred in creds:
if cred['username'] != 'donate':
email_id = "{}@{}".format(cred['username'], cred['domain'])
email_obj, _ = Email.objects.get_or_create(
address=email_id,
)
email_obj.password = cred['password']
email_obj.save()
scan_history.emails.add(email_obj)
except Exception as e:
logger.error(e)
pass
def get_and_save_meta_info(meta_dict):
logger.info('Getting METADATA for {}'.format(meta_dict.osint_target))
proxy = get_random_proxy()
if proxy:
os.environ['https_proxy'] = proxy
os.environ['HTTPS_PROXY'] = proxy
result = metadata_extractor.extract_metadata_from_google_search(meta_dict.osint_target, meta_dict.documents_limit)
if result:
results = result.get_metadata()
for meta in results:
meta_finder_document = MetaFinderDocument()
subdomain = Subdomain.objects.get(scan_history=meta_dict.scan_id, name=meta_dict.osint_target)
meta_finder_document.subdomain = subdomain
meta_finder_document.target_domain = meta_dict.domain
meta_finder_document.scan_history = meta_dict.scan_id
item = DottedDict(results[meta])
meta_finder_document.url = item.url
meta_finder_document.doc_name = meta
meta_finder_document.http_status = item.status_code
metadata = results[meta]['metadata']
for data in metadata:
if 'Producer' in metadata and metadata['Producer']:
meta_finder_document.producer = metadata['Producer'].rstrip('\x00')
if 'Creator' in metadata and metadata['Creator']:
meta_finder_document.creator = metadata['Creator'].rstrip('\x00')
if 'CreationDate' in metadata and metadata['CreationDate']:
meta_finder_document.creation_date = metadata['CreationDate'].rstrip('\x00')
if 'ModDate' in metadata and metadata['ModDate']:
meta_finder_document.modified_date = metadata['ModDate'].rstrip('\x00')
if 'Author' in metadata and metadata['Author']:
meta_finder_document.author = metadata['Author'].rstrip('\x00')
if 'Title' in metadata and metadata['Title']:
meta_finder_document.title = metadata['Title'].rstrip('\x00')
if 'OSInfo' in metadata and metadata['OSInfo']:
meta_finder_document.os = metadata['OSInfo'].rstrip('\x00')
meta_finder_document.save()
@app.task(bind=True)
def test_task(self):
print('*' * 40)
print('test task run')
print('*' * 40)
| radaram | 43af3a6aecdece4923ee74b108853f7b9c51ed12 | 27d6ec5827a51fd74e3ab97a5cef38fc7f5d9168 | Can I delete the old condition `if 'matched' in json_st`? | radaram | 35 |
yogeshojha/rengine | 527 | web/Dockerfile: Update Go to v1.17 and add command to update Nuclei & Nuclei Templates | - Starting in Go 1.17, installing executables with go get is deprecated. go install may be used instead. [Deprecation of 'go get' for installing executables](https://golang.org/doc/go-get-install-deprecation).
- Install and update Go package with `go install -v example.com/cmd@latest` or `GO111MODULE=on go install -v example.com/cmd@latest`
- Add command to update Nuclei and Nuclei Templates | null | 2021-10-25 09:33:01+00:00 | 2021-12-14 03:04:58+00:00 | web/Dockerfile | # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb
# Download and install go 1.14
RUN wget https://dl.google.com/go/go1.16.5.linux-amd64.tar.gz
RUN tar -xvf go1.16.5.linux-amd64.tar.gz
RUN rm go1.16.5.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler
RUN GO111MODULE=on go get -v github.com/projectdiscovery/httpx/cmd/httpx
RUN GO111MODULE=on go get -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder
RUN GO111MODULE=on go get -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei
RUN GO111MODULE=on go get -v github.com/projectdiscovery/naabu/v2/cmd/naabu
RUN GO111MODULE=on go get -u github.com/tomnomnom/unfurl
RUN GO111MODULE=on go get -u -v github.com/bp0lr/gauplus
RUN GO111MODULE=on go get github.com/tomnomnom/waybackurls
RUN GO111MODULE=on go get -u github.com/jaeles-project/gospider
RUN GO111MODULE=on go get -u github.com/tomnomnom/gf
RUN go get -v github.com/OWASP/Amass/v3/...
RUN go get -u github.com/tomnomnom/gf
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
RUN chmod +x /usr/src/app/tools/get_urls.sh
| # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb
# Download and install go 1.17
RUN wget https://golang.org/dl/go1.17.2.linux-amd64.tar.gz
RUN tar -xvf go1.17.2.linux-amd64.tar.gz
RUN rm go1.17.2.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go install -v github.com/tomnomnom/assetfinder@latest
RUN go install -v github.com/hakluke/hakrawler@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/unfurl@latest
RUN GO111MODULE=on go install -v -v github.com/bp0lr/gauplus@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/waybackurls@latest
RUN GO111MODULE=on go install -v github.com/jaeles-project/gospider@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/gf@latest
RUN go install -v github.com/OWASP/Amass/v3/...@latest
# Update Nuclei and Nuclei-Templates
RUN nuclei -update
RUN nuclei -update-templates
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
RUN chmod +x /usr/src/app/tools/get_urls.sh
| 0x71rex | c46402b6381c17252e27baf0b1961849c51439a0 | cf30e98e0440424019cb2cad600892ce405f850e | Sorry, my bad. What a silly mistake, there was extra '-v' on line 74 :grin: | 0x71rex | 36 |
yogeshojha/rengine | 527 | web/Dockerfile: Update Go to v1.17 and add command to update Nuclei & Nuclei Templates | - Starting in Go 1.17, installing executables with go get is deprecated. go install may be used instead. [Deprecation of 'go get' for installing executables](https://golang.org/doc/go-get-install-deprecation).
- Install and update Go package with `go install -v example.com/cmd@latest` or `GO111MODULE=on go install -v example.com/cmd@latest`
- Add command to update Nuclei and Nuclei Templates | null | 2021-10-25 09:33:01+00:00 | 2021-12-14 03:04:58+00:00 | web/Dockerfile | # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb
# Download and install go 1.14
RUN wget https://dl.google.com/go/go1.16.5.linux-amd64.tar.gz
RUN tar -xvf go1.16.5.linux-amd64.tar.gz
RUN rm go1.16.5.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler
RUN GO111MODULE=on go get -v github.com/projectdiscovery/httpx/cmd/httpx
RUN GO111MODULE=on go get -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder
RUN GO111MODULE=on go get -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei
RUN GO111MODULE=on go get -v github.com/projectdiscovery/naabu/v2/cmd/naabu
RUN GO111MODULE=on go get -u github.com/tomnomnom/unfurl
RUN GO111MODULE=on go get -u -v github.com/bp0lr/gauplus
RUN GO111MODULE=on go get github.com/tomnomnom/waybackurls
RUN GO111MODULE=on go get -u github.com/jaeles-project/gospider
RUN GO111MODULE=on go get -u github.com/tomnomnom/gf
RUN go get -v github.com/OWASP/Amass/v3/...
RUN go get -u github.com/tomnomnom/gf
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
RUN chmod +x /usr/src/app/tools/get_urls.sh
| # Base image
FROM ubuntu:20.04
# Labels and Credits
LABEL \
name="reNgine" \
author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
description="reNgine is a automated pipeline of recon process, useful for information gathering during web application penetration testing."
# Environment Variables
ENV DEBIAN_FRONTEND="noninteractive" \
DATABASE="postgres"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Install essentials
RUN apt update -y && apt install -y --no-install-recommends \
build-essential \
cmake \
firefox \
gcc \
git \
libpq-dev \
libpq-dev \
libpcap-dev \
netcat \
postgresql \
python3 \
python3-dev \
python3-pip \
python3-netaddr \
wget \
x11-utils \
xvfb
# Download and install go 1.17
RUN wget https://golang.org/dl/go1.17.2.linux-amd64.tar.gz
RUN tar -xvf go1.17.2.linux-amd64.tar.gz
RUN rm go1.17.2.linux-amd64.tar.gz
RUN mv go /usr/local
# Download geckodriver
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.26.0/geckodriver-v0.26.0-linux64.tar.gz
RUN tar -xvf geckodriver-v0.26.0-linux64.tar.gz
RUN rm geckodriver-v0.26.0-linux64.tar.gz
RUN mv geckodriver /usr/bin
# ENV for Go
ENV GOROOT="/usr/local/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
ENV GOPATH=$HOME/go
ENV PATH="${PATH}:${GOROOT}/bin:${GOPATH}/bin"
# Make directory for app
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Download Go packages
RUN go install -v github.com/tomnomnom/assetfinder@latest
RUN go install -v github.com/hakluke/hakrawler@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest
RUN GO111MODULE=on go install -v github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/unfurl@latest
RUN GO111MODULE=on go install -v -v github.com/bp0lr/gauplus@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/waybackurls@latest
RUN GO111MODULE=on go install -v github.com/jaeles-project/gospider@latest
RUN GO111MODULE=on go install -v github.com/tomnomnom/gf@latest
RUN go install -v github.com/OWASP/Amass/v3/...@latest
# Update Nuclei and Nuclei-Templates
RUN nuclei -update
RUN nuclei -update-templates
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install --upgrade setuptools pip && \
pip3 install -r /tmp/requirements.txt
# install eyewitness
RUN python3 -m pip install fuzzywuzzy \
selenium \
python-Levenshtein \
pyvirtualdisplay \
netaddr
# Copy source code
COPY . /usr/src/app/
RUN chmod +x /usr/src/app/tools/get_urls.sh
 | 0x71rex | c46402b6381c17252e27baf0b1961849c51439a0 | cf30e98e0440424019cb2cad600892ce405f850e | Haven't removed the second "-v" on line 74, btw. 😁 | 0x71rex | 37
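The "second -v" the reviewer points to is presumably the gauplus install line in the new Dockerfile, which still passes the verbose flag twice. A cleaned-up version of that one command (shown here as the shell command the RUN instruction executes; an illustrative sketch only, not part of this diff) would be:

```bash
# gauplus install with the duplicated -v flag reduced to a single one (illustrative sketch)
GO111MODULE=on go install -v github.com/bp0lr/gauplus@latest
```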
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | It won't run on macOS or Windows. | sbimochan | 38
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | This quick install script is only for Ubuntu/Debian-based OSes. https://rengine.wiki/install/quick-install/
Other OSes will continue to use https://rengine.wiki/install/install/ | yogeshojha | 39
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | But docker info would run on all OSes, right? Sorry, why did it fail the docker test before? | sbimochan | 40
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Correct me if I am wrong, `docker info` will return True if docker is installed, and it has nothing to do with whether docker is running or not.
You can give it a try.
Stop the docker service:
`sudo systemctl stop docker`
Now try docker info:
`sudo docker info`
On the other hand, try:
`sudo systemctl is-active docker`
You'll get active/inactive based on whether the docker service is running. | yogeshojha | 41
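To make the behaviour being debated here concrete, the following is a small illustrative sketch (assuming a systemd-based Linux host; it is not part of the PR) that runs both checks side by side:

```bash
#!/bin/bash
# Illustrative comparison of the two checks discussed above (assumes systemd is present).

# systemctl asks systemd for the service state; it works regardless of the docker CLI,
# but it only exists on systemd-based systems.
if systemctl is-active docker >/dev/null 2>&1; then
    echo "systemctl: docker service is active"
else
    echo "systemctl: docker service is not active"
fi

# docker info talks to the daemon itself, so it fails once the daemon is stopped,
# even though the docker binary is still installed.
if docker info >/dev/null 2>&1; then
    echo "docker info: daemon is reachable"
else
    echo "docker info: daemon is not reachable"
fi
```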
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Let me test on my friend's Linux. | sbimochan | 42
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Confirmed. Your way is right for Linux; my way was right for macOS. | sbimochan | 43
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
 | sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Sure, take your time.
Mac and Windows users will continue to go through the detailed installation steps.
We also need to install make, and as I said, this script is only intended for Ubuntu/Debian; that's why I've used
`apt install make`
which would fail on macOS anyway. | yogeshojha | 44
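On that point, a variant that guards the make installation against non-apt systems could look roughly like the following (a sketch only; the merged script intentionally stays Ubuntu/Debian-only, so this is not part of the change):

```bash
# Hypothetical guard around the make installation; not part of this PR.
if ! command -v make >/dev/null 2>&1; then
    if command -v apt >/dev/null 2>&1; then
        apt install -y make
    else
        echo "apt is not available on this system; please install make manually."
        exit 1
    fi
fi
```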
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, the script executed all `make build` commands even if Docker was not running.
- Shows an early error to the user
- Changed the terminal text colors
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors
- Made "Changing the postgres username & password from .env is highly recommended." green, since it is a suggestion
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Yep, so what we can do is, I will accept and merge this PR, and maybe if you want to work on a install script that works across different OS, that would be awesome. | yogeshojha | 45 |
yogeshojha/rengine | 468 | Installation: Check docker running status before installing reNgine. | ### Changes
- Checks Docker's running status. Previously, all the `make build` commands were executed even if Docker was not running.
- Shows an early error to the user.
- Changed the terminal text colors.
- Made the sentence "Before running this script, please make sure Docker is running and you have made changes to .env file." red, because this step can throw errors.
- Made "Changing the postgres username & password from .env is highly recommended." green, since it's a suggestion.
| null | 2021-08-24 16:52:53+00:00 | 2021-08-27 02:54:16+00:00 | install.sh | #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 3; echo "Before running this script, please make sure you have made changes to .env file."
tput setaf 1; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| #!/bin/bash
tput setaf 2;
cat web/art/1.0.txt
tput setaf 1; echo "Before running this script, please make sure Docker is running and you have made changes to .env file."
tput setaf 2; echo "Changing the postgres username & password from .env is highly recommended."
tput setaf 4;
read -p "Are you sure, you made changes to .env file (y/n)? " answer
case ${answer:0:1} in
y|Y|yes|YES|Yes )
echo "Continiuing Installation!"
;;
* )
nano .env
;;
esac
echo " "
tput setaf 3;
echo "#########################################################################"
echo "Please note that, this installation script is only intended for Linux"
echo "For Mac and Windows, refer to the official guide https://rengine.wiki"
echo "#########################################################################"
echo " "
tput setaf 4;
echo "Installing reNgine and it's dependencies"
echo " "
if [ "$EUID" -ne 0 ]
then
tput setaf 1; echo "Error installing reNgine, Please run this script as root!"
tput setaf 1; echo "Example: sudo ./install.sh"
exit
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing Docker..."
echo "#########################################################################"
if [ -x "$(command -v docker)" ]; then
tput setaf 2; echo "Docker already installed, skipping."
else
curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
tput setaf 2; echo "Docker installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing docker-compose"
echo "#########################################################################"
if [ -x "$(command -v docker-compose)" ]; then
tput setaf 2; echo "docker-compose already installed, skipping."
else
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
tput setaf 2; echo "docker-compose installed!!!"
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing make"
echo "#########################################################################"
if [ -x "$(command -v make)" ]; then
tput setaf 2; echo "make already installed, skipping."
else
apt install make
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Checking Docker status"
echo "#########################################################################"
if systemctl is-active docker >/dev/null 2>&1; then
tput setaf 4;
echo "Docker is running."
else
tput setaf 1;
echo "Docker is not running. Please run docker and try again."
echo "You can run docker service using sudo systemctl start docker"
exit 1
fi
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Installing reNgine"
echo "#########################################################################"
make certs && make build && make up
tput setaf 2; echo "reNgine is installed!!!"
sleep 3
echo " "
tput setaf 4;
echo "#########################################################################"
echo "Creating an account"
echo "#########################################################################"
make username
tput setaf 2; echo "Thank you for installing reNgine, happy recon!!"
| sbimochan | 2bd2219659fcf0f0541fc4879bd69bfa79a500c7 | e98433517e4a6198e6e2208fdf1b324f41be5bcb | Sounds perfect. | sbimochan | 46 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change is in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
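For reference, a minimal timing sketch along the same lines (this is not the linked notebook; the dataframe shape is made up for illustration and absolute numbers will vary with the machine):

```python
import time
import numpy as np
import pandas as pd
from category_encoders.hashing import HashingEncoder

# Random high-cardinality string columns as a stand-in dataset.
rng = np.random.default_rng(0)
X = pd.DataFrame({f"f{i}": rng.integers(0, 1000, size=10_000).astype(str)
                  for i in range(10)})

encoder = HashingEncoder(n_components=10, max_process=1).fit(X)
start = time.perf_counter()
encoder.transform(X)
print(f"transform: {time.perf_counter() - start:.3f}s")
```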
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
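Something in this spirit (illustrative only, not the exact test added in the commit; the test name and toy data are made up):

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

def test_hashing_encoder_output_is_stable():
    X = pd.DataFrame({"col": ["a", "b", "a", "c"]})
    params = dict(cols=["col"], n_components=4, max_process=1)
    first = HashingEncoder(**params).fit(X).transform(X)
    second = HashingEncoder(**params).fit(X).transform(X)
    # md5 is deterministic, so two identically-configured encoders must agree
    pd.testing.assert_frame_equal(first, second)
    assert first.shape == (4, 4)
```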
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
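Roughly, the change goes from a row-wise `apply` to iterating over the underlying numpy array (a simplified sketch using Python's builtin `hash` as a stand-in for the real md5-based hashing):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": ["x", "y"], "b": ["y", "z"]})
N = 4

# Row-wise apply goes through pandas machinery for every row.
before = df.apply(
    lambda row: [sum(hash(val) % N == j for val in row) for j in range(N)],
    axis=1, result_type="expand",
)

# Iterating the underlying numpy array computes the same counts and exposes a
# buffer that can later be shared with worker processes.
np_df = df.to_numpy()
after = np.zeros((np_df.shape[0], N), dtype=int)
for i, row in enumerate(np_df):
    for val in row:
        after[i, hash(val) % N] += 1
```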
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
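The pattern looks roughly like this (a stripped-down sketch of the idea, not the actual encoder code; the shapes are arbitrary and each worker writes only to its own slice of the shared buffer):

```python
import numpy as np
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, start, stop):
    shm = shared_memory.SharedMemory(name=shm_name)
    out = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    out[start:stop] = 1  # stand-in for "hash my chunk of rows"
    shm.close()

if __name__ == "__main__":
    shape = (8, 4)
    # 8 bytes per int64 element
    shm = shared_memory.SharedMemory(create=True, size=int(np.prod(shape)) * 8)
    workers = [Process(target=worker, args=(shm.name, shape, i * 4, (i + 1) * 4))
               for i in range(2)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    result = np.ndarray(shape, dtype=np.int64, buffer=shm.buf).copy()
    shm.close()
    shm.unlink()
    print(result)
```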
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of the shared memory. It is a lot faster, as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. An option to use spawn instead of fork is added to potentially save some memory.
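Selecting the start method per call looks roughly like this (illustrative only; the `work` function is a placeholder, and "fork" is unavailable on Windows, so the platform default is used there):

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return x * x

if __name__ == "__main__":
    method = "fork" if "fork" in mp.get_all_start_methods() else None
    ctx = mp.get_context(method)  # None falls back to the platform default
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        print(list(executor.map(work, range(4))))
```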
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
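The two ways of building the column index are equivalent; the `digest()` + `int.from_bytes` route just skips formatting a hex string, and `hashlib.md5()` skips the name lookup done by `hashlib.new('md5')`:

```python
import hashlib

val, N = "some category", 8

# Old way: hex digest parsed back into an int.
h = hashlib.new("md5")
h.update(bytes(str(val), "utf-8"))
idx_old = int(h.hexdigest(), 16) % N

# New way: raw digest interpreted as a big-endian integer.
idx_new = int.from_bytes(hashlib.md5(bytes(str(val), "utf-8")).digest(),
                         byteorder="big") % N

assert idx_old == idx_new  # hexdigest is just the big-endian hex form of digest
```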
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | this now hard codes md5 hash. In the current version you can choose the hash function | PaulWestenthanner | 0 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change is in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of the shared memory. It is a lot faster, as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. An option to use spawn instead of fork is added to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" because the code used to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
    the dataframe containing the columns to encode
hashing_method: string, optional
    name of the hashlib hash algorithm to use (default 'md5')
N: int, optional
    number of output columns to hash into
cols: list, optional
    columns to hash; defaults to all columns of X_in
make_copy: bool, optional
    whether to work on a deep copy of X_in instead of the original frame
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | do you know how much benefit the change from int(hexdigest) to int.from_bytes alone brings? I saw you mentioned 40%-60% for three changes combined. I think this is less readable and could use a comment.
Also the byteorder might depend on the machine (c.f. https://docs.python.org/3/library/stdtypes.html#int.from_bytes and https://stackoverflow.com/questions/50509017/how-is-int-from-bytes-calculated) | PaulWestenthanner | 1 |
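For reference, the two index computations give the same number on any platform, since the hex digest is just the big-endian rendering of the digest bytes; a small self-contained check (added here as an illustration, not part of the PR):

```python
import hashlib

h = hashlib.md5(b"CentralAir")
# int(hexdigest, 16) and int.from_bytes(digest, "big") are the same number on
# any machine: passing byteorder="big" pins the behaviour explicitly instead of
# leaving it implicit in the hex string.
assert int(h.hexdigest(), 16) == int.from_bytes(h.digest(), byteorder="big")
print(int.from_bytes(h.digest(), byteorder="big") % 8)  # same bucket either way
```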
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change is in its own commit. Here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
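For a quick, rough reproduction of such timings without the notebook, something along these lines works (the sizes and names below are my own, not the benchmark's):

```python
# Rough timing sketch: encode a synthetic categorical frame and time transform().
import timeit
import numpy as np
import pandas as pd
from category_encoders.hashing import HashingEncoder

X = pd.DataFrame(np.random.choice(list("abcdef"), size=(10_000, 10)),
                 columns=[f"f{i}" for i in range(10)])
enc = HashingEncoder(n_components=10, max_process=4).fit(X)
print(timeit.timeit(lambda: enc.transform(X), number=3) / 3, "s per transform")
```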
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
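A hypothetical sketch of what such a non-regression test can look like (the actual test lives in the repo's test suite; the names and assertions below are my own):

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

def test_hashing_encoder_buckets_every_value_once():
    X = pd.DataFrame({"cat": ["a", "b", "a", "c"]})
    out = HashingEncoder(n_components=4, max_process=1).fit_transform(X)
    assert out.shape == (4, 4)
    # each row contains a single categorical value, so it lands in exactly one bucket
    assert (out.sum(axis=1) == 1).all()
```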
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
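A minimal illustration of the idea (my own sketch, not the PR code):

```python
# Hash values by iterating over the dataframe's underlying numpy array rather
# than going through a row-wise DataFrame.apply.
import hashlib
import numpy as np
import pandas as pd

df = pd.DataFrame({"Heating": ["GasA", "Wall"], "CentralAir": ["Y", "N"]})
N = 4
out = np.zeros((len(df), N), dtype=int)
for i, row in enumerate(df.to_numpy()):      # plain ndarray iteration
    for val in row:
        digest = hashlib.md5(str(val).encode("utf-8")).digest()
        out[i, int.from_bytes(digest, "big") % N] += 1
print(pd.DataFrame(out, index=df.index, columns=[f"col_{d}" for d in range(N)]))
```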
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
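A simplified sketch of that mechanism (my own names; requires Python 3.8+ for `multiprocessing.shared_memory`): each worker writes its slice of rows straight into one shared buffer, which also backs the numpy array used for the output, so no result data crosses a queue.

```python
import numpy as np
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, start, stop):
    shm = shared_memory.SharedMemory(name=shm_name)
    rows = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    rows[start:stop] = 1  # a real worker would write its hashed rows here
    shm.close()

if __name__ == "__main__":
    shape = (8, 4)
    shm = shared_memory.SharedMemory(create=True, size=int(np.prod(shape)) * 8)
    result = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    result[:] = 0
    workers = [Process(target=worker, args=(shm.name, shape, i * 4, (i + 1) * 4))
               for i in range(2)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    print(result.copy())   # copy out before releasing the shared buffer
    shm.close()
    shm.unlink()
```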
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (pandas and category_encoders.hashing only are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, etc., but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of a shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20 ms?).
It might take up more memory as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The PR adds the option to use spawn instead of fork to potentially save some memory.
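A sketch of selecting the start method explicitly, which is what the new `process_creation_method` parameter exposes (note that "fork" is not available on Windows):

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")  # or "spawn" / "forkserver"
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        print(list(executor.map(square, range(4))))
```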
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
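Both micro-optimisations are easy to check in isolation (my own sketch):

```python
import hashlib

val = b"GasA"
fast = hashlib.md5(val)           # direct constructor
slow = hashlib.new("md5", val)    # generic lookup by algorithm name
assert fast.digest() == slow.digest()

# Going through the bytes digest skips building the intermediate hex string.
assert int.from_bytes(fast.digest(), "big") == int(fast.hexdigest(), 16)
```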
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | you don't need the auto sample attribute anymore | PaulWestenthanner | 2 |
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | `n_process` sounds like an integer. Better call it `process_list` or something more telling. You only copied the name but probably it's time to change it now | PaulWestenthanner | 3 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there, each individual change should be in it's own commit, and here's a summary of the performance gain on my machine (macOS Monteray, i7 2.3ghz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
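For orientation, a minimal, hypothetical benchmarking helper in the same spirit is sketched below (synthetic string data, wall-clock timing via `timeit`); the linked notebook remains the authoritative source and the function and parameter names here are purely illustrative.

```python
# Hypothetical benchmark helper (not the linked notebook): times a full
# fit_transform of HashingEncoder on synthetic categorical data.
import timeit
import numpy as np
import pandas as pd
from category_encoders.hashing import HashingEncoder

def benchmark(n_rows, n_features, n_components, max_process, repeat=3):
    rng = np.random.default_rng(0)
    # Synthetic categorical columns: integers rendered as strings.
    data = pd.DataFrame(
        rng.integers(0, 1_000, size=(n_rows, n_features)).astype(str),
        columns=[f"feature_{i}" for i in range(n_features)],
    )
    encoder = HashingEncoder(n_components=n_components, max_process=max_process)
    # Best of `repeat` runs, one fit_transform call per run.
    return min(timeit.repeat(lambda: encoder.fit_transform(data), number=1, repeat=repeat))

if __name__ == "__main__":
    print(benchmark(n_rows=10_000, n_features=10, n_components=10, max_process=4))
```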
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
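Something along these lines would pin the behaviour; this is an illustrative sketch, not necessarily the exact test added in the commit.

```python
# Illustrative non-regression style check: the hashing trick is deterministic,
# so the single-process and multi-process code paths must produce the same encoding.
import pandas as pd
from pandas.testing import assert_frame_equal
from category_encoders.hashing import HashingEncoder

def test_hashing_encoder_single_and_multi_process_agree():
    X = pd.DataFrame({
        "color": ["red", "green", "blue", "green"],
        "size": ["S", "M", "L", "M"],
    })
    single = HashingEncoder(n_components=8, max_process=1).fit_transform(X)
    multi = HashingEncoder(n_components=8, max_process=2).fit_transform(X)
    assert_frame_equal(single, multi)
```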
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn makes it possible to use shared memory to communicate between processes instead of a data queue, which does improve performance.
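Roughly, the row-wise `apply` is replaced by a plain loop over `df.to_numpy()`; a simplified sketch of the idea (assuming md5 and ignoring the `cols`/copy handling):

```python
# Simplified sketch: hash each value while iterating over the dataframe's
# underlying numpy array instead of going through DataFrame.apply.
import hashlib
import numpy as np
import pandas as pd

def hash_rows_via_numpy(df: pd.DataFrame, n_components: int) -> np.ndarray:
    np_df = df.to_numpy()  # direct access to the values, row by row
    out = np.zeros((np_df.shape[0], n_components), dtype=int)
    for i, row in enumerate(np_df):
        for val in row:
            if val is not None:
                digest = hashlib.md5(bytes(str(val), "utf-8")).hexdigest()
                out[i, int(digest, 16) % n_components] += 1
    return out
```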
### [In HashEncoder use shared memory instead of queue for multiproccessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly into memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of a numpy array used to build the output dataframe.
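A stripped-down sketch of that mechanism is below; the worker body is a placeholder for the actual hashing, and the names and chunking are illustrative only.

```python
# Each worker attaches to the same SharedMemory block and fills in its own slice
# of the result array; no dataframe parts travel through a queue.
import numpy as np
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, start, stop):
    shm = shared_memory.SharedMemory(name=shm_name)
    result = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    result[start:stop] = 1  # placeholder for "hash this chunk and write the counts"
    shm.close()

def encode_in_shared_memory(n_rows, n_components, n_process=2):
    shm = shared_memory.SharedMemory(create=True, size=n_rows * n_components * 8)
    result = np.ndarray((n_rows, n_components), dtype=np.int64, buffer=shm.buf)
    bounds = np.linspace(0, n_rows, n_process + 1, dtype=int)
    processes = [
        Process(target=worker, args=(shm.name, result.shape, lo, hi))
        for lo, hi in zip(bounds[:-1], bounds[1:])
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    out = result.copy()  # copy out before releasing the shared block
    shm.close()
    shm.unlink()
    return out
```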
### [Allow forking processes instead of spwaning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to its own segment of shared memory. It is a lot faster, as pandas doesn't have to be re-imported (around 20 ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. Add the option to use spawn instead of fork to potentially save some memory.
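The start-up cost difference is easy to see in isolation with a small sketch like the one below (note that "fork" is unavailable on Windows); the timing harness here is illustrative, not part of the library.

```python
# Rough comparison of process start-up cost for the two start methods.
# "fork" is not available on Windows and will raise a ValueError there.
import multiprocessing as mp
import time

def noop():
    pass

def time_start_method(method: str) -> float:
    ctx = mp.get_context(method)
    start = time.perf_counter()
    p = ctx.Process(target=noop)
    p.start()
    p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    for method in ("fork", "spawn"):
        print(method, round(time_start_method(method), 3))
```

On the encoder itself the choice is exposed through the new argument, e.g. `HashingEncoder(process_creation_method="spawn")`.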
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
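For illustration, both ways of deriving the column index agree; the bytes-digest route just skips the hex round-trip:

```python
# int.from_bytes(digest, "big") and int(hexdigest, 16) produce the same integer,
# so the column index is unchanged; the bytes route avoids building the hex string.
import hashlib

value = "some category"
N = 8

hasher = hashlib.md5(bytes(value, "utf-8"))  # direct constructor instead of hashlib.new("md5")
via_hexdigest = int(hasher.hexdigest(), 16) % N
via_digest = int.from_bytes(hasher.digest(), byteorder="big") % N
assert via_hexdigest == via_digest
```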
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | this ignores the `max_samples` parameter and might lead to the process crashing in case of too much data / too few CPUs | PaulWestenthanner | 4 |
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | please add the `process_creation_method` parameter in the documentation and explain the options | PaulWestenthanner | 5 |
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | how is this the same as the old code? wasn't the old code just doing `shm_result[column_index] += 1`? | PaulWestenthanner | 6 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance by itself; however, it allows accessing the memory layout of the dataframe directly. That makes it possible to use shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing scheme is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it the default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It can be unsafe to use with threads, locks, file descriptors, etc., but here the only thing the forked process does is hash some data and write it to *its own* segment of shared memory. It is a lot faster because pandas doesn't have to be re-imported (roughly 20ms).
It might use more memory, since more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. An option to use spawn instead of fork is therefore added, to potentially save some memory.
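In the final version of the code this is wired into a `ProcessPoolExecutor` by passing the chosen start-method context; a condensed sketch (with a placeholder worker instead of the real `hash_chunk`) looks like this:

```python
import multiprocessing
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def count_values(chunk):
    # Placeholder worker: the real one hashes every value in the chunk into buckets.
    return np.array([[len(row)] for row in chunk])

if __name__ == "__main__":
    np_df = np.array([["a", "b"], ["c", "d"], ["e", "f"], ["g", "h"]], dtype=object)
    ctx = multiprocessing.get_context("fork")  # or "spawn" / "forkserver"
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        parts = executor.map(count_values, np.array_split(np_df, 2))  # one chunk per worker
        result = np.concatenate(list(parts))
```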
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of `hashlib.new('md5')`, which is also faster.
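Taken together, the new behaviour can be selected when constructing the encoder, for example:

```python
from category_encoders.hashing import HashingEncoder

# Fork-based workers are the new default on linux/macOS; pass "spawn" to trade some
# speed for lower memory usage, or max_process=1 to skip multiprocessing entirely.
enc = HashingEncoder(n_components=8, max_process=4, process_creation_method="fork")
```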
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | why is this 2x2? | PaulWestenthanner | 7 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance by itself; however, it allows accessing the memory layout of the dataframe directly. That makes it possible to use shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing scheme is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it the default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It can be unsafe to use with threads, locks, file descriptors, etc., but here the only thing the forked process does is hash some data and write it to *its own* segment of shared memory. It is a lot faster because pandas doesn't have to be re-imported (roughly 20ms).
It might use more memory, since more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. An option to use spawn instead of fork is therefore added, to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of `hashlib.new('md5')`, which is also faster.
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | splitting this way is not very elegant if the last chunk has to be treated separately.
Using numpy's array_split could be helpful: https://stackoverflow.com/a/75981560.
Wouldn't it be easier if the `hash_chunk` function hashed a chunk and returned an array? Then it wouldn't need the `shm_result` and `shm_offset` parameters (what does shm stand for, by the way?). You'd then just concatenate all the chunks at the end, as in the sketch below. | PaulWestenthanner | 8 |
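For illustration, the split-then-concatenate approach suggested above could look roughly like the following sketch (editorial, not the PR's code; `hash_one_chunk` and the toy data are made up):

```python
import hashlib
import numpy as np

def hash_one_chunk(chunk, n_components, method="md5"):
    # Each call owns its result array, so no shared buffers or offsets are needed.
    out = np.zeros((chunk.shape[0], n_components), dtype=int)
    for i, row in enumerate(chunk):
        for val in row:
            if val is not None:
                h = hashlib.new(method)
                h.update(str(val).encode("utf-8"))
                out[i, int(h.hexdigest(), 16) % n_components] += 1
    return out

data = np.array([["a", "b"], ["c", "a"], ["b", "b"], ["d", "c"], ["a", "d"]], dtype=object)
chunks = np.array_split(data, 3)   # an uneven last chunk is handled for free
result = np.concatenate([hash_one_chunk(c, n_components=4) for c in chunks])
print(result.shape)                # (5, 4)
```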
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
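As an editorial illustration only, a non-regression test of this kind could look like the sketch below; the actual test added by the commit may use different fixture data and assertions:

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

def test_hashing_encoder_is_stable():
    X = pd.DataFrame({"cat": ["a", "b", "a", "c"], "num": [1.0, 2.0, 3.0, 4.0]})
    out = HashingEncoder(cols=["cat"], n_components=4, max_process=1).fit_transform(X)
    # 4 hash columns plus the untouched numeric column, one hashed value per row.
    assert out.shape == (4, 5)
    assert (out[[f"col_{i}" for i in range(4)]].sum(axis=1) == 1).all()
```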
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
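The pattern described here can be sketched independently of the encoder; the following editorial example (not the PR's code) shows workers writing straight into a numpy array backed by `multiprocessing.shared_memory`:

```python
import numpy as np
from multiprocessing import get_context, shared_memory

N_ROWS, N_COMPONENTS = 8, 4

def fill_rows(shm_name, start, stop):
    shm = shared_memory.SharedMemory(name=shm_name)
    out = np.ndarray((N_ROWS, N_COMPONENTS), dtype=np.int64, buffer=shm.buf)
    out[start:stop] = 1   # stand-in for the real per-row hashing work
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=N_ROWS * N_COMPONENTS * 8)
    result = np.ndarray((N_ROWS, N_COMPONENTS), dtype=np.int64, buffer=shm.buf)
    result[:] = 0
    ctx = get_context("fork")   # POSIX only; see the next section on start methods
    workers = [ctx.Process(target=fill_rows, args=(shm.name, lo, lo + 4)) for lo in (0, 4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(int(result.sum()))    # 32: every row was written by exactly one worker
    shm.close()
    shm.unlink()
```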
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of shared memory. It is a lot faster, as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. Add the option to use spawn instead of fork to potentially save some memory.
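The start-method switch itself mirrors the `get_context` / `ProcessPoolExecutor` combination used in the final code; a minimal sketch with an illustrative worker function:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")    # "spawn" is the only option on Windows
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
        print(list(pool.map(square, range(4))))  # [0, 1, 4, 9]
```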
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
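The two index constructions are interchangeable because the hex digest is just the big-endian integer rendering of the digest bytes; a quick editorial check:

```python
import hashlib

val, N = "some category", 8

h_old = hashlib.new("md5")                 # old style: generic constructor + hexdigest
h_old.update(bytes(str(val), "utf-8"))
old_index = int(h_old.hexdigest(), 16) % N

h_new = hashlib.md5()                      # new style: direct constructor + raw digest
h_new.update(bytes(str(val), "utf-8"))
new_index = int.from_bytes(h_new.digest(), byteorder="big") % N

assert old_index == new_index              # same bucket, less string parsing
```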
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | do you need to re-assign this to `np_result` or will this be updated in place? | PaulWestenthanner | 9 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of shared memory. It is a lot faster, as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. Add the option to use spawn instead of fork to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
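For completeness, a usage sketch of the encoder after this change, exercising the new `process_creation_method` argument (editorial example; the data is made up, and the `__main__` guard matters when the start method is "spawn"):

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

if __name__ == "__main__":
    X = pd.DataFrame({"Heating": ["GasA", "GasW", "GasA"], "CentralAir": ["Y", "N", "Y"]})
    enc = HashingEncoder(cols=["Heating", "CentralAir"], n_components=8,
                         max_process=2, process_creation_method="spawn")
    print(enc.fit_transform(X).shape)    # (3, 8)
```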
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | Removing it! | bkhant1 | 10 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it odd that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there (each individual change should be in its own commit), and here's a summary of the performance gains on my machine (macOS Monterey, 2.3 GHz i7).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
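For reference, a rough sketch of how such a timing can be reproduced (the linked gist is the authoritative benchmark; the frame size, column names and values below are made up for illustration):

```python
import timeit
import numpy as np
import pandas as pd
from category_encoders.hashing import HashingEncoder

rng = np.random.default_rng(0)
# 10,000 rows of random string categories, roughly matching one of the rows in the table above
X = pd.DataFrame(rng.integers(0, 100, size=(10_000, 10)).astype(str),
                 columns=[f"c{i}" for i in range(10)])

encoder = HashingEncoder(n_components=10, max_process=4).fit(X)
print(timeit.timeit(lambda: encoder.transform(X), number=3))
```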
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
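A minimal sketch of the difference (toy frame and a stand-in length function, not the PR's hashing code):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": ["x", "yy"], "b": ["zzz", "w"]})

# Row-wise apply: each row passes through a Python-level callable and the
# result is reassembled into pandas objects.
via_apply = df.apply(lambda row: [len(str(v)) for v in row.array], axis=1, result_type="expand")

# Numpy route: one plain 2-D object array whose memory layout is directly
# accessible, which is what later makes a shared-memory backend possible.
np_df = df.to_numpy()
via_numpy = np.zeros(np_df.shape, dtype="int")
for i, row in enumerate(np_df):
    for j, val in enumerate(row):
        via_numpy[i, j] = len(str(val))

assert (via_apply.to_numpy() == via_numpy).all()
```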
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
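A hedged sketch of that mechanism with `multiprocessing.shared_memory` (simplified shapes and a dummy fill instead of the real hashing; not the exact PR code):

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

N_ROWS, N_COMPONENTS = 8, 4  # toy output shape

def fill_chunk(shm_name, start, stop):
    # Attach to the existing block and view it as the output array;
    # each worker writes only to its own slice of rows.
    shm = SharedMemory(name=shm_name)
    out = np.ndarray((N_ROWS, N_COMPONENTS), dtype="int64", buffer=shm.buf)
    out[start:stop] = 1  # the real code would write hashed counts here
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=N_ROWS * N_COMPONENTS * 8)
    workers = [Process(target=fill_chunk, args=(shm.name, 0, 4)),
               Process(target=fill_chunk, args=(shm.name, 4, 8))]
    for w in workers: w.start()
    for w in workers: w.join()
    result = np.ndarray((N_ROWS, N_COMPONENTS), dtype="int64", buffer=shm.buf).copy()
    shm.close()
    shm.unlink()
    print(result.sum())  # 32: every cell written exactly once
```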
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of the shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The commit also adds the option to use spawn instead of fork to potentially save some memory.
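A small sketch of how a start method is selected (the `mp_context` argument is standard library API; the worker is a placeholder for the per-chunk hashing):

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return x * x  # placeholder for the per-chunk hashing

if __name__ == "__main__":
    # "fork" copies the parent, so pandas is not re-imported in the children
    # (linux/macOS only); "spawn" starts a fresh interpreter and is the only
    # start method available on Windows.
    ctx = mp.get_context("fork")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as pool:
        print(list(pool.map(work, range(4))))
```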
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
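A quick check of the equivalence this relies on (any byte string works; `N` is an arbitrary bucket count):

```python
import hashlib

N = 8  # arbitrary number of output components
hasher = hashlib.md5()           # direct constructor instead of hashlib.new('md5')
hasher.update(b"some value")

# Same bucket either way; the bytes route just skips the hex round-trip.
assert int(hasher.hexdigest(), 16) % N == int.from_bytes(hasher.digest(), byteorder="big") % N
```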
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | I will add a comment! The `digest` version is about 30% faster than the `hexdigest` version. On my machine:
```python
> import hashlib
> hasher = hashlib.md5()
> hasher.update(b"abdcde1234")
> %timeit int(hasher.hexdigest(), 16)
659 ns ± 29.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
> %timeit int.from_bytes(hasher.digest(), byteorder='big')
518 ns ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
The byte order is machine dependent only if the data comes directly from the machine. In this case, the `digest` should not be machine dependent. I tried it on
- my system (macOS Monterey, little endian)
```python
> import sys
> import hashlib
> sys.byteorder
'little'
> hasher = hashlib.md5()
> hasher.update(b"abc")
> int.from_bytes(hasher.digest(), byteorder='big')
191415658344158766168031473277922803570
> int(hasher.hexdigest(), 16)
191415658344158766168031473277922803570
```
- a docker system (`s390x/ubuntu` image, big endian):
```python
> import sys
> import hashlib
> sys.byteorder
'big'
> hasher = hashlib.md5()
> hasher.update(b"abc")
> int.from_bytes(hasher.digest(), byteorder='big')
191415658344158766168031473277922803570
> int(hasher.hexdigest(), 16)
191415658344158766168031473277922803570
```
The result is the same. | bkhant1 | 11 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it odd that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there (each individual change should be in its own commit), and here's a summary of the performance gains on my machine (macOS Monterey, 2.3 GHz i7).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of the numpy array used to build the output dataframe.
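A small sketch of the chunking side of this, assuming `np.array_split` (a toy array stands in for `df.to_numpy()`):

```python
import numpy as np

np_df = np.arange(10 * 3).reshape(10, 3)  # stand-in for df.to_numpy()
n_process = 4

# array_split keeps the original row order, so each process knows exactly
# which rows of the shared output it owns.
chunks = np.array_split(np_df, n_process)
starts = np.cumsum([0] + [len(c) for c in chunks])[:-1]
print([(int(s), int(s) + len(c)) for s, c in zip(starts, chunks)])
# [(0, 3), (3, 6), (6, 8), (8, 10)]
```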
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of the shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The commit also adds the option to use spawn instead of fork to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
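Putting it together, a hedged usage sketch of the options this PR exposes (`max_process` and `process_creation_method` as defined in the updated encoder; the tiny frame and target are made up):

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

X = pd.DataFrame({"color": ["red", "green", "blue", "red"]})
y = pd.Series([0, 1, 0, 1])

# Single process: no pool is spun up, which keeps tiny frames fast.
small = HashingEncoder(cols=["color"], n_components=4, max_process=1).fit(X, y).transform(X)

# Parallel transform with an explicit start method (fork is the default off Windows).
he = HashingEncoder(cols=["color"], n_components=4, max_process=2,
                    process_creation_method="spawn")
parallel = he.fit(X, y).transform(X)
```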
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | That is actually another advantage of using shared memory instead of queues.
### In the previous implementation, there were 3 main things that could take up memory:
1. the input dataframe, stored in `self.X` - it is copied across all processes whether we fork or spawn, because the process target is `self.require_data`.
2. the output dataframe, which is the concatenation of all `hashing_parts`. There is only one instance of it in the main process. It takes up exactly the space of the output dataframe once all processes are done computing their chunk of the data.
3. the `data_part` that is local to each subprocess and is created to be put on the queue once the part is calculated.
In the worst case scenario of the queue implementation, the total memory usage is: `input_size * number_of_processes + output_size + (output_size/max_sample)*number_of_processes`
### In the new implementation, only two things, because we write directly to the output shared memory instead of having the data transit through the queue:
1. the input dataframe, stored in `self.X` - it is copied across all processes whether we fork or spawn, because the process has `np_df` in its args.
2. the output dataframe, which is the shared memory and is allocated only once, shared across all processes.
### We don't need max samples anymore
The memory usage is always smaller with the shared memory implementation, and it does not depend on `max_sample`.
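To make the comparison concrete, here is a tiny sketch that simply plugs hypothetical sizes into the two expressions above (illustrative numbers only, not measurements):

```python
# Illustrative figures only, plugged straight into the two expressions above.
input_size, output_size = 100.0, 80.0      # e.g. MB
number_of_processes = 4
max_sample = 10                            # samples per queued data_part (hypothetical)

queue_worst_case = (input_size * number_of_processes + output_size
                    + (output_size / max_sample) * number_of_processes)
shared_memory_usage = input_size * number_of_processes + output_size

print(queue_worst_case)     # 512.0
print(shared_memory_usage)  # 480.0
```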
| bkhant1 | 12 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there, each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
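A small helper along these lines can reproduce one cell of that grid (a sketch with arbitrary parameter values, not the notebook's exact code):

```python
import timeit

import numpy as np
import pandas as pd
from category_encoders.hashing import HashingEncoder

def bench(n_rows, n_features, n_components, n_process, repeats=3):
    # Build a random categorical dataframe and time fit_transform on it.
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.choice(list("abcdef"), size=(n_rows, n_features)),
                     columns=[f"f{i}" for i in range(n_features)])
    enc = HashingEncoder(cols=list(X.columns), n_components=n_components,
                         max_process=n_process)
    return timeit.timeit(lambda: enc.fit_transform(X), number=repeats) / repeats

print(bench(n_rows=10_000, n_features=10, n_components=10, n_process=1))
```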
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance, however it allows accessing the memory layout of the dataframe directly. That allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
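Roughly, the switch is from row-wise `DataFrame.apply` to iterating over `df.to_numpy()`; a toy illustration of the idea (not the PR's code):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": ["x", "yy"], "b": ["zzz", "w"]})

# apply-based: one Python-level call per row, no direct view of the underlying values
out_apply = df.apply(lambda row: sum(len(v) for v in row.array), axis=1)

# numpy-based: pull the raw ndarray out once, then work on it (or hand chunks of it
# to worker processes, which is what enables the shared-memory step below)
np_df = df.to_numpy()
out_np = np.array([sum(len(v) for v in row) for row in np_df])

assert list(out_apply) == list(out_np)   # [4, 3]
```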
### [In HashEncoder use shared memory instead of queue for multiproccessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of a numpy array used to build the output dataframe.
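As a minimal sketch of that pattern (assumed names, not the PR's actual code; the version that was eventually merged switched to a `ProcessPoolExecutor` that concatenates per-chunk arrays, see the discussion below):

```python
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def hash_like_worker(shm_name, shape, start, stop):
    # Attach to the existing shared block and write only this worker's rows.
    shm = SharedMemory(name=shm_name)
    out = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    out[start:stop] = 1          # stand-in for the real hashing work
    shm.close()

if __name__ == "__main__":
    shape = (8, 4)
    shm = SharedMemory(create=True, size=int(np.prod(shape)) * 8)
    result = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    result[:] = 0
    workers = [Process(target=hash_like_worker, args=(shm.name, shape, i * 4, (i + 1) * 4))
               for i in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(result.sum())          # 32: every row was filled by exactly one worker
    shm.close()
    shm.unlink()
```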
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in that case the only thing the forked process will do is process some data and write it to ITS OWN segment of the shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. Add the option to use spawn instead of fork to potentially save some memory.
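The knob this adds mirrors the `process_creation_method` parameter of the new encoder: the start method is chosen through `multiprocessing.get_context`, with `fork` as the cheap default on linux/macOS and `spawn` the only option on Windows. A minimal sketch:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def square(x):
    return x * x

if __name__ == "__main__":
    # "fork" clones the parent, so already-imported modules (pandas, ...) come for free;
    # "spawn" boots a fresh interpreter and re-imports everything, adding a fixed overhead.
    ctx = multiprocessing.get_context("fork")      # use "spawn" on Windows
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        print(list(executor.map(square, range(4))))   # [0, 1, 4, 9]
```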
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
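Both constructions give the same bucket index, since the hex digest is just the big-endian rendering of the raw digest bytes; a small illustrative check:

```python
import hashlib

val, N = "some category", 8

# old style: generic constructor, index built from the hex string
h = hashlib.new('md5')
h.update(bytes(str(val), 'utf-8'))
idx_old = int(h.hexdigest(), 16) % N

# new style: direct constructor, raw digest read as a big-endian integer
h = hashlib.md5()
h.update(bytes(str(val), 'utf-8'))
idx_new = int.from_bytes(h.digest(), byteorder='big') % N

assert idx_old == idx_new
```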
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | It was, but it was in the context of writing to a subset of the output dataframe. In that case we write to the "global" output dataframe, so we need to add the offset specific to the chunk we're processing.
I was worried about introducing a regression so I added `test_simple_example`, where the expected result is hardcoded to what the old code produced. In that case at least it produces the same result as the old code. | bkhant1 | 13
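The PR's `test_simple_example` pins the hard-coded output of the old implementation; a related consistency check one could write (a sketch, not the PR's test) is that the parallel and single-process paths agree:

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

def test_parallel_matches_single_process():
    # The hashing trick is deterministic, so both code paths should give identical frames.
    X = pd.DataFrame({"cat": list("abcab"), "num": range(5)})
    single = HashingEncoder(cols=["cat"], n_components=4, max_process=1).fit_transform(X)
    multi = HashingEncoder(cols=["cat"], n_components=4, max_process=2).fit_transform(X)
    pd.testing.assert_frame_equal(single, multi)
```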
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | `shm` stands for "shared_memory" - will update the variable name 👍
> Wouldn't it be easier if the hash_chunk function would hash a chunk and return an array. Then it wouldn't need the shm_result and shm_offset parameters (what does shm stand for btw?). Then you'd just concatenate all the chunks in the end?
That's a very good point, I will try that | bkhant1 | 14 |
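For reference, the merged code above follows exactly that suggestion: `hash_chunk` returns a plain array for its chunk, and the parent concatenates the chunks in order. In isolation the pattern is just:

```python
import numpy as np

# stand-in for hash_chunk: each worker returns its own array for its chunk
chunks = np.array_split(np.arange(10).reshape(5, 2), 2)
parts = [chunk * 2 for chunk in chunks]
result = np.concatenate(parts)

assert result.shape == (5, 2)
assert (result == np.arange(10).reshape(5, 2) * 2).all()
```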
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there, each individual change should be in it's own commit, and here's a summary of the performance gain on my machine (macOS Monteray, i7 2.3ghz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance, however it allows accessing the memory layout of the dataframe directly. That allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
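As a small illustration (not code from the PR), `to_numpy()` is what exposes the frame's values directly, so each row can be iterated without going through pandas' `apply` machinery:

```python
import pandas as pd

df = pd.DataFrame({"cat_a": ["x", "y"], "cat_b": ["u", "v"]})
np_df = df.to_numpy()   # plain object ndarray over the frame's values
for row in np_df:       # simple Python iteration, no apply overhead
    print(list(row))
```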
### [In HashEncoder use shared memory instead of a queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
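For illustration only, a minimal sketch of the shared-memory idea described above (names, sizes and the worker body are made up for the example; this is not the PR's code): each worker attaches to the same buffer and fills its own slice of the output array.

```python
import numpy as np
from multiprocessing import Process, shared_memory

def fill_chunk(shm_name, shape, start, stop):
    shm = shared_memory.SharedMemory(name=shm_name)
    out = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    out[start:stop, :] = 1  # stand-in for hashing rows start..stop
    shm.close()

if __name__ == "__main__":
    shape = (8, 4)  # (n_rows, n_components)
    shm = shared_memory.SharedMemory(create=True, size=int(np.prod(shape)) * 8)
    result = np.ndarray(shape, dtype=np.int64, buffer=shm.buf)
    result[:] = 0
    workers = [Process(target=fill_chunk, args=(shm.name, shape, i * 4, (i + 1) * 4))
               for i in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(result.copy())
    shm.close()
    shm.unlink()
```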
### [Allow forking processes instead of spawning them and make it the default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in this case the only thing the forked process does is process some data and write it to ITS OWN segment of shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The option to use spawn instead of fork is added to potentially save some memory.
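A minimal sketch of how a start method can be chosen explicitly via a multiprocessing context, which is the mechanism the new option exposes (the toy function is only for illustration):

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def work(x):
    return x * x

if __name__ == "__main__":
    # "fork" clones the parent (fast, no re-import of pandas), but is POSIX-only;
    # "spawn" starts a fresh interpreter and is the only choice on Windows.
    ctx = multiprocessing.get_context("fork")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        print(list(executor.map(work, range(4))))
```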
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
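A small sketch of both points (assuming standard hashlib behaviour; not code from the PR): the hex digest is just the big-endian textual form of the raw digest, so the bucket index can be computed from the bytes directly, and the named constructor avoids the `hashlib.new('md5')` lookup.

```python
import hashlib

N = 8
val = "some category"
h = hashlib.md5(val.encode("utf-8"))  # direct constructor instead of hashlib.new('md5')

# Both expressions select the same column index.
assert int(h.hexdigest(), 16) % N == int.from_bytes(h.digest(), byteorder="big") % N
print(int.from_bytes(h.digest(), byteorder="big") % N)
```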
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | My bad thanks for catching that! I'm replacing it with `hash_ctor = getattr(hashlib, hash_key)` outside the loop and `hash_ctor()` in the loop which is even faster!
```python
> ctor = getattr(hashlib, 'md5')
> %timeit ctor()
190 ns ± 0.927 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
> %timeit hashlib.md5()
231 ns ± 4.86 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
| bkhant1 | 15
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | Ok that makes sense.
Which python/hashlib version are you using btw?
Running `hasher.update("abc")` results in a `TypeError` on my machine since it requires (as per documentation) bytes not strings. With `hasher.update("abc".encode("utf-8"))` I'm getting the same results as you | PaulWestenthanner | 16 |
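A quick illustration of that point (assumed Python 3 hashlib behaviour, not code from the PR): `update()` rejects `str` and needs an encoded value.

```python
import hashlib

h = hashlib.md5()
try:
    h.update("abc")  # raises TypeError: str must be encoded to bytes first
except TypeError as exc:
    print(exc)
h.update("abc".encode("utf-8"))  # works
print(h.hexdigest())
```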
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | I copied the wrong line from my shell! I updated my comment above. | bkhant1 | 17 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit. Here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure the changes below don't break the encoder's existing behaviour.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance on its own; however, it gives direct access to the dataframe's memory layout. That makes it possible to use shared memory to communicate between processes instead of a data queue, and that does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing approach is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method starts a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, etc., but in this case the only thing each forked process does is hash some data and write it to its own segment of shared memory. It is a lot faster because pandas doesn't have to be re-imported (around 20 ms).
It might use more memory, since more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) get copied. The option to use spawn instead of fork is therefore kept, to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create integer indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of hashlib.new('md5'), which is also faster.
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | Im not sure 😅 I am just using it to get the type of the default int array name in [that line](https://github.com/scikit-learn-contrib/category_encoders/pull/428/files/206de2d8327489c4ff4d2c7c4f566c0fc06210c1#diff-5871b042f65ccab77377b2e9a92ea2c9651cc039b020835b6d77bfcb01ffe475R187) so it could be 1 by 1! | bkhant1 | 18 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit. Here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure the changes below don't break the encoder's existing behaviour.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance on its own; however, it gives direct access to the dataframe's memory layout. That makes it possible to use shared memory to communicate between processes instead of a data queue, and that does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing approach is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, each process writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method starts a new Python interpreter from scratch and re-imports all required modules. In a minimal case (where only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, etc., but in this case the only thing each forked process does is hash some data and write it to its own segment of shared memory. It is a lot faster because pandas doesn't have to be re-imported (around 20 ms).
It might use more memory, since more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) get copied. The option to use spawn instead of fork is therefore kept, to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create integer indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of hashlib.new('md5'), which is also faster.
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | It gets updated in place! | bkhant1 | 19 |
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
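To illustrate the difference, here is a minimal sketch (toy data and a made-up `N`; not the benchmark setup) of hashing rows through `apply` versus through the raw numpy array:

```python
import hashlib

import numpy as np
import pandas as pd

N = 4  # number of output components, made up for this example
df = pd.DataFrame({"a": ["x", "y"], "b": ["u", "v"]})  # toy data, not the benchmark set

def hash_row(values):
    counts = [0] * N
    for val in values:
        digest = hashlib.md5(bytes(str(val), "utf-8")).hexdigest()
        counts[int(digest, 16) % N] += 1
    return counts

# apply-based: pandas builds a Series object for every row before calling hash_row
via_apply = df.apply(lambda row: hash_row(row.array), axis=1, result_type="expand")

# numpy-based: the same logic over the raw 2D array, no per-row Series objects
via_numpy = pd.DataFrame([hash_row(row) for row in df.to_numpy()], index=df.index)

assert via_apply.values.tolist() == via_numpy.values.tolist()
```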
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array used to build the output dataframe.
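A minimal sketch of that idea (sizes and names are illustrative only, and this is not the exact code of the commit): each worker attaches to the same `SharedMemory` block and fills only its own slice of a numpy array backed by it.

```python
import numpy as np
from multiprocessing import Process, shared_memory

N_ROWS, N_COMPONENTS = 8, 4  # made-up sizes for the sketch

def worker(shm_name, start, stop):
    # attach to the parent's shared block and write only into this worker's slice
    shm = shared_memory.SharedMemory(name=shm_name)
    out = np.ndarray((N_ROWS, N_COMPONENTS), dtype=np.int64, buffer=shm.buf)
    out[start:stop] = 1  # a real worker would write its hashed chunk here
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=N_ROWS * N_COMPONENTS * 8)
    result = np.ndarray((N_ROWS, N_COMPONENTS), dtype=np.int64, buffer=shm.buf)
    result[:] = 0
    procs = [Process(target=worker, args=(shm.name, i * 4, (i + 1) * 4)) for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(result)  # the parent sees the workers' output directly, no queue involved
    shm.close()
    shm.unlink()
```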
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in that case the only thing the forked process will do is process some data and write it to ITS OWN segment of shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20 ms?)
It might take up more memory as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The commit therefore also adds the option to use spawn instead of fork, to potentially save some memory.
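A rough sketch of how a start method can be selected (the `process_creation_method` name comes from this PR; `"fork"` is not available on Windows, and the timings mentioned are machine-dependent):

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def square(x):  # stand-in for the real per-chunk hashing work
    return x * x

def run(method):
    ctx = multiprocessing.get_context(method)  # "fork", "spawn" or "forkserver"
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        return list(executor.map(square, range(4)))

if __name__ == "__main__":
    print(run("fork"))   # children are copies of the parent: fast start-up (Linux/macOS only)
    print(run("spawn"))  # children start a fresh interpreter and re-import modules: slower start-up
```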
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
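Both index computations give the same result; a small sketch (md5 as in the encoder's default, `N` made up):

```python
import hashlib

N = 8  # number of components, as in n_components
val = "some category"

hasher = hashlib.md5(bytes(str(val), "utf-8"))
idx_hex = int(hasher.hexdigest(), 16) % N                          # previous approach
idx_bytes = int.from_bytes(hasher.digest(), byteorder="big") % N   # raw digest, same index
assert idx_hex == idx_bytes

# hashlib.md5(...) is also a bit cheaper per call than hashlib.new("md5", ...)
assert hashlib.md5(b"x").digest() == hashlib.new("md5", b"x").digest()
```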
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | nice catch! without calling the `system()` function this was always `False`, was it? | PaulWestenthanner | 20 |
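The review comment above refers to the missing call parentheses; a two-line illustration:

```python
import platform

# platform.system is the function object itself, so comparing it to a string is always False
print(platform.system == "Windows")    # False on every platform
print(platform.system() == "Windows")  # True only on Windows
```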
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only tens of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there; each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That in turn allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
### [In HashEncoder use shared memory instead of queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array used to build the output dataframe.
### [Allow forking processes instead of spawning them and make it default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in that case the only thing the forked process will do is process some data and write it to ITS OWN segment of shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20 ms?)
It might take up more memory as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. The commit therefore also adds the option to use spawn instead of fork, to potentially save some memory.
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from hashlib bytes digest instead of hex digest as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
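For reference, a small usage sketch of the updated encoder API (toy data; the parameter values are illustrative, not recommendations):

```python
from category_encoders.hashing import HashingEncoder
import pandas as pd

X = pd.DataFrame({"color": ["red", "green", "blue", "red"],
                  "size": ["S", "M", "L", "M"]})

# single process: hashing runs in the calling process, no pool start-up cost
enc = HashingEncoder(cols=["color", "size"], n_components=8, max_process=1)
print(enc.fit_transform(X).head())

# multiple forked processes: worth it for large frames (use "spawn" on Windows)
enc = HashingEncoder(cols=["color", "size"], n_components=8,
                     max_process=4, process_creation_method="fork")
print(enc.fit_transform(X).head())
```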
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | I've seen your implementation of the multiprocess pool (here https://github.com/bkhant1/category_encoders/compare/all_optis...bkhant1:category_encoders:multiproc_pool?expand=1) and like it a lot. I think it is very clean and you should add it to the PR | PaulWestenthanner | 21
scikit-learn-contrib/category_encoders | 428 | Optimise `HashingEncoder` for both large and small dataframes | I used the HashingEncoder recently and found it weird that any call to `fit` or `transform`, even for a dataframe with only 10s of rows and a couple of columns, took at least 2s...
I also had quite a large amount of data to encode, and that took a long time.
That got me started on improving the performance of HashingEncoder, and here's the result! There are quite a few changes in there, each individual change should be in its own commit, and here's a summary of the performance gain on my machine (macOS Monterey, i7 2.3 GHz).
| | Baseline | Numpy arrays instead of apply | Shared memory instead of queue | Fork instead of spawn | Faster hashlib usage |
| --- | --- | --- | --- | --- | --- |
| n_rows=30 n_features=3 n_components=10 n_process=4 | 3.55 s ± 150 ms per loop (mean ± std. dev. of ... | 3.62 s ± 140 ms per loop (mean ± std. dev. of ... | 2.2 s ± 41.6 ms per loop (mean ± std. dev. of ... | 56.6 ms ± 2.91 ms per loop (mean ± std. dev. o... | 47.3 ms ± 516 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=10 n_process=1 | 1.24 s ± 52.6 ms per loop (mean ± std. dev. of... | 1.42 s ± 170 ms per loop (mean ± std. dev. of ... | 1.74 ms ± 32.2 µs per loop (mean ± std. dev. o... | 2.08 ms ± 91.7 µs per loop (mean ± std. dev. o... | 1.86 ms ± 173 µs per loop (mean ± std. dev. of... |
| n_rows=30 n_features=3 n_components=100 n_process=1 | 1.22 s ± 51.5 ms per loop (mean ± std. dev. of... | 1.33 s ± 60.7 ms per loop (mean ± std. dev. of... | 1.73 ms ± 29.7 µs per loop (mean ± std. dev. o... | 2.01 ms ± 148 µs per loop (mean ± std. dev. of... | 2.01 ms ± 225 µs per loop (mean ± std. dev. of... |
| n_rows=10000 n_features=10 n_components=10 n_process=4 | 5.45 s ± 85.8 ms per loop (mean ± std. dev. of... | 5.36 s ± 57.5 ms per loop (mean ± std. dev. of... | 2.23 s ± 39.6 ms per loop (mean ± std. dev. of... | 120 ms ± 3.02 ms per loop (mean ± std. dev. of... | 96.4 ms ± 2.33 ms per loop (mean ± std. dev. o... |
| n_rows=10000 n_features=10 n_components=10 n_process=1 | 1.61 s ± 30.1 ms per loop (mean ± std. dev. of... | 1.45 s ± 27.2 ms per loop (mean ± std. dev. of... | 227 ms ± 6.03 ms per loop (mean ± std. dev. of... | 236 ms ± 3.06 ms per loop (mean ± std. dev. of... | 170 ms ± 1.35 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=4 | 5.99 s ± 215 ms per loop (mean ± std. dev. of ... | 5.71 s ± 148 ms per loop (mean ± std. dev. of ... | 4.8 s ± 25.4 ms per loop (mean ± std. dev. of ... | 836 ms ± 42.3 ms per loop (mean ± std. dev. of... | 622 ms ± 33.2 ms per loop (mean ± std. dev. of... |
| n_rows=100000 n_features=10 n_components=10 n_process=1 | 5.38 s ± 53 ms per loop (mean ± std. dev. of 7... | 3.73 s ± 56.5 ms per loop (mean ± std. dev. of... | 2.25 s ± 57.4 ms per loop (mean ± std. dev. of... | 3.76 s ± 1.61 s per loop (mean ± std. dev. of ... | 1.68 s ± 19.9 ms per loop (mean ± std. dev. of... |
| n_rows=1000000 n_features=50 n_components=10 n_process=4 | 50.8 s ± 1.17 s per loop (mean ± std. dev. of ... | 56.4 s ± 2.11 s per loop (mean ± std. dev. of ... | 37.1 s ± 576 ms per loop (mean ± std. dev. of ... | 36.9 s ± 2.19 s per loop (mean ± std. dev. of ... | 26.6 s ± 1.8 s per loop (mean ± std. dev. of 7... |
| n_rows=1000000 n_features=50 n_components=10 n_process=1 | 2min 22s ± 2.05 s per loop (mean ± std. dev. o... | 2min 19s ± 3.08 s per loop (mean ± std. dev. o... | 1min 47s ± 1.15 s per loop (mean ± std. dev. o... | 2min 10s ± 18.4 s per loop (mean ± std. dev. o... | 1min 21s ± 1.67 s per loop (mean ± std. dev. o... |
The notebook that produced that table can be found [here](https://gist.github.com/bkhant1/ae2b813817d53b19a81f6774234fcfe3)
## Proposed Changes
The changes are listed by commit.
### [Add a simple non-regression HashEncoder test](https://github.com/scikit-learn-contrib/category_encoders/commit/0afe06586c71388b8fd4034d196de8a7df4ad56c)
To make sure I am not breaking it.
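A minimal sketch of what such a check could look like (hypothetical, not necessarily the test added in the commit; it only relies on the encoder being deterministic):

```python
import pandas as pd
from category_encoders.hashing import HashingEncoder

def test_hashing_encoder_output_is_stable():
    # encoding the same input with two fresh encoders must give identical output
    X = pd.DataFrame({"cat": ["a", "b", "a", "c"], "num": [1, 2, 3, 4]})
    out_1 = HashingEncoder(cols=["cat"], n_components=4).fit_transform(X)
    out_2 = HashingEncoder(cols=["cat"], n_components=4).fit_transform(X)
    pd.testing.assert_frame_equal(out_1, out_2)
```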
### [In HashingEncoder process the df as a numpy array instead of using apply](https://github.com/scikit-learn-contrib/category_encoders/commit/de124410f29778487a2910c8dd7f15ed15785705)
It has no direct impact on performance; however, it allows accessing the memory layout of the dataframe directly. That allows using shared memory to communicate between processes instead of a data queue, which does improve performance.
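Roughly, the change is from a per-row `apply` callback to iterating a plain numpy array (a toy sketch, with `len(str(v))` standing in for the real hashing work):

```python
import pandas as pd

df = pd.DataFrame({"a": ["x", "yy"], "b": ["uuu", "v"]})

# before: one Python-level callback per row, routed through pandas machinery
encoded_apply = df.apply(lambda row: [len(str(v)) for v in row.array],
                         axis=1, result_type="expand")

# after: grab the raw values once; a plain ndarray also exposes a buffer
# that can later be backed by shared memory
np_df = df.to_numpy()
encoded_np = [[len(str(v)) for v in row] for row in np_df]
```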
### [In HashEncoder use shared memory instead of a queue for multiprocessing](https://github.com/scikit-learn-contrib/category_encoders/commit/5235a6b85e787b3a384c0d43f314c0e3146d3daf)
It is faster to write directly to memory than to have the data transit through a queue.
The multiprocessing method is similar to what it was with queues: the dataframe is split into chunks, and each process applies the hashing trick to its chunk of the dataframe. Instead of writing the result to a queue, it writes it directly into a shared memory segment, which is also the underlying memory of a numpy array that is used to build the output dataframe.
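The general shape of that approach looks something like this (a minimal sketch of the idea with a dummy fill, not the PR's exact code):

```python
import numpy as np
from multiprocessing import shared_memory

n_rows, n_components = 1_000, 8

# one shared block backs the whole output matrix
shm = shared_memory.SharedMemory(create=True, size=n_rows * n_components * 8)
out = np.ndarray((n_rows, n_components), dtype="int64", buffer=shm.buf)
out[:] = 0

def fill_chunk(shm_name, start, stop):
    # a worker process re-attaches to the same block by name and writes only
    # to its own slice -- nothing is pickled or pushed through a queue
    chunk_shm = shared_memory.SharedMemory(name=shm_name)
    chunk = np.ndarray((n_rows, n_components), dtype="int64", buffer=chunk_shm.buf)
    chunk[start:stop] += 1  # stand-in for the real hashing work
    del chunk               # release the view before closing this handle
    chunk_shm.close()

fill_chunk(shm.name, 0, 500)  # called in-process here just to keep the sketch runnable
result = out.copy()
del out
shm.close()
shm.unlink()
```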
### [Allow forking processes instead of spawning them and make it the default](https://github.com/scikit-learn-contrib/category_encoders/commit/12f8f242959314ed770750902c1e5ab8ca81263e)
This makes the HashEncoder transform method a lot faster on small datasets.
The spawn process creation method creates a new Python interpreter from scratch and re-imports all required modules. In a minimal case (only pandas and category_encoders.hashing are imported) this adds a ~2s overhead to any call to transform.
Fork creates a copy of the current process, and that's it. It is unsafe to use with threads, locks, file descriptors, ... but in that case the only thing the forked process will do is process some data and write it to ITS OWN segment of shared memory. It is a lot faster as pandas doesn't have to be re-imported (around 20ms?).
It might take up more memory, as more than the necessary variables (the largest one by far being the HashEncoder instance, which includes the user dataframe) will be copied. Add the option to use spawn instead of fork to potentially save some memory.
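Selecting the start method explicitly boils down to `multiprocessing.get_context`, which is what the new `process_creation_method` parameter feeds into (see `hashing_trick_with_np_parallel` in the updated file); a standalone sketch:

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

# "fork" (Linux/macOS) lets the children inherit the parent's memory, so pandas
# and the encoder are not re-imported; "spawn" (the only option on Windows)
# starts a fresh interpreter and pays that import cost on every pool start-up.
if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")
    with ProcessPoolExecutor(max_workers=4, mp_context=ctx) as executor:
        results = list(executor.map(sum, [[-1, -2], [-3, -4]]))  # [-3, -7]
```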
### [Remove python 2 check code and faster use of hashlib](https://github.com/scikit-learn-contrib/category_encoders/commit/d2d535b4b8b2c54adcb9b13a6b06b5fc8c55286c)
Python 2 is not supported on master, so the check isn't useful.
Create int indexes from the hashlib bytes digest instead of the hex digest, as it's faster.
Call the md5 hashlib constructor directly instead of new('md5'), which is also faster.
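Both routes select the same bucket; the bytes digest just skips the hex round-trip (illustrative snippet):

```python
import hashlib

val, N = "some category", 8
data = bytes(str(val), "utf-8")

# old: generic constructor, then parse the hex string back into an int
old_index = int(hashlib.new("md5", data).hexdigest(), 16) % N

# new: direct md5 constructor, interpret the raw digest as a big-endian integer
new_index = int.from_bytes(hashlib.md5(data).digest(), byteorder="big") % N

assert old_index == new_index
```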
| null | 2023-10-08 15:09:46+00:00 | 2023-11-11 14:34:26+00:00 | category_encoders/hashing.py | """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import math
import platform
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
self.auto_sample = max_sample <= 0
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def require_data(self, data_lock, new_start, done_index, hashing_parts, process_index):
is_finished = False
while not is_finished:
if data_lock.acquire():
if new_start.value:
end_index = 0
new_start.value = False
else:
end_index = done_index.value
if all([self.data_lines > 0, end_index < self.data_lines]):
start_index = end_index
if (self.data_lines - end_index) <= self.max_sample:
end_index = self.data_lines
else:
end_index += self.max_sample
done_index.value = end_index
data_lock.release()
data_part = self.X.iloc[start_index: end_index]
# Always get df and check it after merge all data parts
data_part = self.hashing_trick(X_in=data_part, hashing_method=self.hash_method,
N=self.n_components, cols=self.cols)
part_index = int(math.ceil(end_index / self.max_sample))
hashing_parts.put({part_index: data_part})
is_finished = end_index >= self.data_lines
if self.verbose == 5:
print(f"Process - {process_index} done hashing data : {start_index} ~ {end_index}")
else:
data_lock.release()
is_finished = True
else:
data_lock.release()
def _transform(self, X):
"""
Call _transform_single_cpu() if you want to use single CPU with all samples
"""
self.X = X
self.data_lines = len(self.X)
data_lock = multiprocessing.Manager().Lock()
new_start = multiprocessing.Manager().Value('d', True)
done_index = multiprocessing.Manager().Value('d', int(0))
hashing_parts = multiprocessing.Manager().Queue()
if self.auto_sample:
self.max_sample = int(self.data_lines / self.max_process)
if self.max_sample == 0:
self.max_sample = 1
if self.max_process == 1:
self.require_data(data_lock, new_start, done_index, hashing_parts, process_index=1)
else:
n_process = []
for thread_idx in range(self.max_process):
process = multiprocessing.Process(target=self.require_data,
args=(data_lock, new_start, done_index, hashing_parts, thread_idx + 1))
process.daemon = True
n_process.append(process)
for process in n_process:
process.start()
for process in n_process:
process.join()
data = self.X
if self.max_sample == 0 or self.max_sample == self.data_lines:
if hashing_parts:
data = list(hashing_parts.get().values())[0]
else:
list_data = {}
while not hashing_parts.empty():
list_data.update(hashing_parts.get())
sort_data = []
for part_index in sorted(list_data):
sort_data.append(list_data[part_index])
if sort_data:
data = pd.concat(sort_data)
return data
def _transform_single_cpu(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(X, hashing_method=self.hash_method, N=self.n_components, cols=self.cols)
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.to_numpy()
@staticmethod
def hashing_trick(X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
def hash_fn(x):
tmp = [0 for _ in range(N)]
for val in x.array:
if val is not None:
hasher = hashlib.new(hashing_method)
if sys.version_info[0] == 2:
hasher.update(str(val))
else:
hasher.update(bytes(str(val), 'utf-8'))
tmp[int(hasher.hexdigest(), 16) % N] += 1
return tmp
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
X_cat = X_cat.apply(hash_fn, axis=1, result_type='expand')
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| """The hashing module contains all methods and classes related to the hashing trick."""
import sys
import hashlib
import category_encoders.utils as util
import multiprocessing
import pandas as pd
import numpy as np
import math
import platform
from concurrent.futures import ProcessPoolExecutor
__author__ = 'willmcginnis', 'LiuShulun'
class HashingEncoder(util.BaseEncoder, util.UnsupervisedTransformerMixin):
""" A multivariate hashing implementation with configurable dimensionality/precision.
The advantage of this encoder is that it does not maintain a dictionary of observed categories.
Consequently, the encoder does not grow in size and accepts new values during data scoring
by design.
It's important to read about how max_process & max_sample work
before setting them manually, inappropriate setting slows down encoding.
Default value of 'max_process' is 1 on Windows because multiprocessing might cause issues, see in :
https://github.com/scikit-learn-contrib/categorical-encoding/issues/215
https://docs.python.org/2/library/multiprocessing.html?highlight=process#windows
Parameters
----------
verbose: int
integer indicating verbosity of the output. 0 for none.
cols: list
a list of columns to encode, if None, all string columns will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
hash_method: str
which hashing method to use. Any method from hashlib works.
max_process: int
how many processes to use in transform(). Limited in range(1, 64).
By default, it uses half of the logical CPUs.
For example, 4C4T makes max_process=2, 4C8T makes max_process=4.
Set it larger if you have a strong CPU.
It is not recommended to set it larger than is the count of the
logical CPUs as it will actually slow down the encoding.
max_sample: int
how many samples to encode by each process at a time.
This setting is useful on low memory machines.
By default, max_sample=(all samples num)/(max_process).
For example, 4C8T CPU with 100,000 samples makes max_sample=25,000,
6C12T CPU with 100,000 samples makes max_sample=16,666.
It is not recommended to set it larger than the default value.
n_components: int
how many bits to use to represent the feature. By default, we use 8 bits.
For high-cardinality features, consider using up-to 32 bits.
process_creation_method: string
either "fork", "spawn" or "forkserver" (availability depends on your
platform). See https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
for more details and tradeoffs. Defaults to "fork" on linux/macos as it
is the fastest option and to "spawn" on windows as it is the only one
available
Example
-------
>>> from category_encoders.hashing import HashingEncoder
>>> import pandas as pd
>>> from sklearn.datasets import fetch_openml
>>> bunch = fetch_openml(name="house_prices", as_frame=True)
>>> display_cols = ["Id", "MSSubClass", "MSZoning", "LotFrontage", "YearBuilt", "Heating", "CentralAir"]
>>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)[display_cols]
>>> y = bunch.target
>>> he = HashingEncoder(cols=['CentralAir', 'Heating']).fit(X, y)
>>> numeric_dataset = he.transform(X)
>>> print(numeric_dataset.info())
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1460 entries, 0 to 1459
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 col_0 1460 non-null int64
1 col_1 1460 non-null int64
2 col_2 1460 non-null int64
3 col_3 1460 non-null int64
4 col_4 1460 non-null int64
5 col_5 1460 non-null int64
6 col_6 1460 non-null int64
7 col_7 1460 non-null int64
8 Id 1460 non-null float64
9 MSSubClass 1460 non-null float64
10 MSZoning 1460 non-null object
11 LotFrontage 1201 non-null float64
12 YearBuilt 1460 non-null float64
dtypes: float64(4), int64(8), object(1)
memory usage: 148.4+ KB
None
References
----------
.. [1] Feature Hashing for Large Scale Multitask Learning, from
https://alex.smola.org/papers/2009/Weinbergeretal09.pdf
.. [2] Don't be tricked by the Hashing Trick, from
https://booking.ai/dont-be-tricked-by-the-hashing-trick-192a6aae3087
"""
prefit_ordinal = False
encoding_relation = util.EncodingRelation.ONE_TO_M
def __init__(self, max_process=0, max_sample=0, verbose=0, n_components=8, cols=None, drop_invariant=False,
return_df=True, hash_method='md5', process_creation_method='fork'):
super().__init__(verbose=verbose, cols=cols, drop_invariant=drop_invariant, return_df=return_df,
handle_unknown="does not apply", handle_missing="does not apply")
if max_process not in range(1, 128):
if platform.system() == 'Windows':
self.max_process = 1
else:
self.max_process = int(math.ceil(multiprocessing.cpu_count() / 2))
if self.max_process < 1:
self.max_process = 1
elif self.max_process > 128:
self.max_process = 128
else:
self.max_process = max_process
self.max_sample = int(max_sample)
if platform.system() == 'Windows':
self.process_creation_method = "spawn"
else:
self.process_creation_method = process_creation_method
self.data_lines = 0
self.X = None
self.n_components = n_components
self.hash_method = hash_method
def _fit(self, X, y=None, **kwargs):
pass
def _transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
p : array, shape = [n_samples, n_numeric + N]
Transformed values with encoding applied.
"""
if self._dim is None:
raise ValueError('Must train encoder before it can be used to transform data.')
# first check the type
X = util.convert_input(X)
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
if not list(self.cols):
return X
X = self.hashing_trick(
X,
hashing_method=self.hash_method,
N=self.n_components,
cols=self.cols,
)
return X
@staticmethod
def hash_chunk(args):
hash_method, np_df, N = args
# Calling getattr outside the loop saves some time in the loop
hasher_constructor = getattr(hashlib, hash_method)
# Same when the call to getattr is implicit
int_from_bytes = int.from_bytes
result = np.zeros((np_df.shape[0], N), dtype='int')
for i, row in enumerate(np_df):
for val in row:
if val is not None:
hasher = hasher_constructor()
# Computes an integer index from the hasher digest. The endian is
# "big" as the code use to read:
# column_index = int(hasher.hexdigest(), 16) % N
# which is implicitly considering the hexdigest to be big endian,
# even if the system is little endian.
# Building the index that way is about 30% faster than using the
# hexdigest.
hasher.update(bytes(str(val), 'utf-8'))
column_index = int_from_bytes(hasher.digest(), byteorder='big') % N
result[i, column_index] += 1
return result
def hashing_trick_with_np_parallel(self, df, N: int):
np_df = df.to_numpy()
ctx = multiprocessing.get_context(self.process_creation_method)
with ProcessPoolExecutor(max_workers=self.max_process, mp_context=ctx) as executor:
result = np.concatenate(list(
executor.map(
self.hash_chunk,
zip(
[self.hash_method]*self.max_process,
np.array_split(np_df, self.max_process),
[N]*self.max_process
)
)
))
return pd.DataFrame(result, index=df.index)
def hashing_trick_with_np_no_parallel(self, df, N):
np_df = df.to_numpy()
result = HashingEncoder.hash_chunk((self.hash_method, np_df, N))
return pd.DataFrame(result, index=df.index)
def hashing_trick(self, X_in, hashing_method='md5', N=2, cols=None, make_copy=False):
"""A basic hashing implementation with configurable dimensionality/precision
Performs the hashing trick on a pandas dataframe, `X`, using the hashing method from hashlib
identified by `hashing_method`. The number of output dimensions (`N`), and columns to hash (`cols`) are
also configurable.
Parameters
----------
X_in: pandas dataframe
description text
hashing_method: string, optional
description text
N: int, optional
description text
cols: list, optional
description text
make_copy: bool, optional
description text
Returns
-------
out : dataframe
A hashing encoded dataframe.
References
----------
Cite the relevant literature, e.g. [1]_. You may also cite these
references in the notes section above.
.. [1] Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing
for Large Scale Multitask Learning. Proc. ICML.
"""
if hashing_method not in hashlib.algorithms_available:
raise ValueError(f"Hashing Method: {hashing_method} not Available. "
f"Please use one from: [{', '.join([str(x) for x in hashlib.algorithms_available])}]")
if make_copy:
X = X_in.copy(deep=True)
else:
X = X_in
if cols is None:
cols = X.columns
new_cols = [f'col_{d}' for d in range(N)]
X_cat = X.loc[:, cols]
X_num = X.loc[:, [x for x in X.columns if x not in cols]]
if self.max_process == 1:
X_cat = self.hashing_trick_with_np_no_parallel(X_cat, N)
else:
X_cat = self.hashing_trick_with_np_parallel(X_cat, N)
X_cat.columns = new_cols
X = pd.concat([X_cat, X_num], axis=1)
return X
| bkhant1 | 26ef26106fcbadb281c162b76258955f66f2c741 | 5c94e27436a3cf837d7c84a71c566e8320ce512f | maybe it would make sense to change it to 1 by 1 as it would be sort of a minimal example. Also probably add a comment that you only need it for the datatype so people won't wonder why it is there | PaulWestenthanner | 22
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/__init__.py | """
.. module:: category_encoders
:synopsis:
:platform:
"""
from category_encoders.backward_difference import BackwardDifferenceEncoder
from category_encoders.binary import BinaryEncoder
from category_encoders.gray import GrayEncoder
from category_encoders.count import CountEncoder
from category_encoders.hashing import HashingEncoder
from category_encoders.helmert import HelmertEncoder
from category_encoders.one_hot import OneHotEncoder
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.sum_coding import SumEncoder
from category_encoders.polynomial import PolynomialEncoder
from category_encoders.basen import BaseNEncoder
from category_encoders.leave_one_out import LeaveOneOutEncoder
from category_encoders.target_encoder import TargetEncoder
from category_encoders.woe import WOEEncoder
from category_encoders.m_estimate import MEstimateEncoder
from category_encoders.james_stein import JamesSteinEncoder
from category_encoders.cat_boost import CatBoostEncoder
from category_encoders.rankhot import RankHotEncoder
from category_encoders.glmm import GLMMEncoder
from category_encoders.quantile_encoder import QuantileEncoder, SummaryEncoder
__version__ = '2.6.0'
__author__ = "willmcginnis", "cmougan", "paulwestenthanner"
__all__ = [
"BackwardDifferenceEncoder",
"BinaryEncoder",
"GrayEncoder",
"CountEncoder",
"HashingEncoder",
"HelmertEncoder",
"OneHotEncoder",
"OrdinalEncoder",
"SumEncoder",
"PolynomialEncoder",
"BaseNEncoder",
"LeaveOneOutEncoder",
"TargetEncoder",
"WOEEncoder",
"MEstimateEncoder",
"JamesSteinEncoder",
"CatBoostEncoder",
"GLMMEncoder",
"QuantileEncoder",
"SummaryEncoder",
'RankHotEncoder',
]
| """
.. module:: category_encoders
:synopsis:
:platform:
"""
from category_encoders.backward_difference import BackwardDifferenceEncoder
from category_encoders.binary import BinaryEncoder
from category_encoders.gray import GrayEncoder
from category_encoders.count import CountEncoder
from category_encoders.hashing import HashingEncoder
from category_encoders.helmert import HelmertEncoder
from category_encoders.one_hot import OneHotEncoder
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.sum_coding import SumEncoder
from category_encoders.polynomial import PolynomialEncoder
from category_encoders.basen import BaseNEncoder
from category_encoders.leave_one_out import LeaveOneOutEncoder
from category_encoders.target_encoder import TargetEncoder
from category_encoders.woe import WOEEncoder
from category_encoders.m_estimate import MEstimateEncoder
from category_encoders.james_stein import JamesSteinEncoder
from category_encoders.cat_boost import CatBoostEncoder
from category_encoders.rankhot import RankHotEncoder
from category_encoders.glmm import GLMMEncoder
from category_encoders.quantile_encoder import QuantileEncoder, SummaryEncoder
import sklearn
import warnings
from textwrap import dedent
__version__ = '2.6.0'
__author__ = "willmcginnis", "cmougan", "paulwestenthanner"
__all__ = [
"BackwardDifferenceEncoder",
"BinaryEncoder",
"GrayEncoder",
"CountEncoder",
"HashingEncoder",
"HelmertEncoder",
"OneHotEncoder",
"OrdinalEncoder",
"SumEncoder",
"PolynomialEncoder",
"BaseNEncoder",
"LeaveOneOutEncoder",
"TargetEncoder",
"WOEEncoder",
"MEstimateEncoder",
"JamesSteinEncoder",
"CatBoostEncoder",
"GLMMEncoder",
"QuantileEncoder",
"SummaryEncoder",
'RankHotEncoder',
]
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | I don't really like the input warning here. I know of some users who use this library in their project and they try to suppress the warning for their end-users.
Also, at the moment it does not work anyway.
I'd be in favor of not issuing a warning here but having something in the `index.rst` (cf. the comment below) | PaulWestenthanner | 23
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/__init__.py | """
.. module:: category_encoders
:synopsis:
:platform:
"""
from category_encoders.backward_difference import BackwardDifferenceEncoder
from category_encoders.binary import BinaryEncoder
from category_encoders.gray import GrayEncoder
from category_encoders.count import CountEncoder
from category_encoders.hashing import HashingEncoder
from category_encoders.helmert import HelmertEncoder
from category_encoders.one_hot import OneHotEncoder
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.sum_coding import SumEncoder
from category_encoders.polynomial import PolynomialEncoder
from category_encoders.basen import BaseNEncoder
from category_encoders.leave_one_out import LeaveOneOutEncoder
from category_encoders.target_encoder import TargetEncoder
from category_encoders.woe import WOEEncoder
from category_encoders.m_estimate import MEstimateEncoder
from category_encoders.james_stein import JamesSteinEncoder
from category_encoders.cat_boost import CatBoostEncoder
from category_encoders.rankhot import RankHotEncoder
from category_encoders.glmm import GLMMEncoder
from category_encoders.quantile_encoder import QuantileEncoder, SummaryEncoder
__version__ = '2.6.0'
__author__ = "willmcginnis", "cmougan", "paulwestenthanner"
__all__ = [
"BackwardDifferenceEncoder",
"BinaryEncoder",
"GrayEncoder",
"CountEncoder",
"HashingEncoder",
"HelmertEncoder",
"OneHotEncoder",
"OrdinalEncoder",
"SumEncoder",
"PolynomialEncoder",
"BaseNEncoder",
"LeaveOneOutEncoder",
"TargetEncoder",
"WOEEncoder",
"MEstimateEncoder",
"JamesSteinEncoder",
"CatBoostEncoder",
"GLMMEncoder",
"QuantileEncoder",
"SummaryEncoder",
'RankHotEncoder',
]
| """
.. module:: category_encoders
:synopsis:
:platform:
"""
from category_encoders.backward_difference import BackwardDifferenceEncoder
from category_encoders.binary import BinaryEncoder
from category_encoders.gray import GrayEncoder
from category_encoders.count import CountEncoder
from category_encoders.hashing import HashingEncoder
from category_encoders.helmert import HelmertEncoder
from category_encoders.one_hot import OneHotEncoder
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.sum_coding import SumEncoder
from category_encoders.polynomial import PolynomialEncoder
from category_encoders.basen import BaseNEncoder
from category_encoders.leave_one_out import LeaveOneOutEncoder
from category_encoders.target_encoder import TargetEncoder
from category_encoders.woe import WOEEncoder
from category_encoders.m_estimate import MEstimateEncoder
from category_encoders.james_stein import JamesSteinEncoder
from category_encoders.cat_boost import CatBoostEncoder
from category_encoders.rankhot import RankHotEncoder
from category_encoders.glmm import GLMMEncoder
from category_encoders.quantile_encoder import QuantileEncoder, SummaryEncoder
import sklearn
import warnings
from textwrap import dedent
__version__ = '2.6.0'
__author__ = "willmcginnis", "cmougan", "paulwestenthanner"
__all__ = [
"BackwardDifferenceEncoder",
"BinaryEncoder",
"GrayEncoder",
"CountEncoder",
"HashingEncoder",
"HelmertEncoder",
"OneHotEncoder",
"OrdinalEncoder",
"SumEncoder",
"PolynomialEncoder",
"BaseNEncoder",
"LeaveOneOutEncoder",
"TargetEncoder",
"WOEEncoder",
"MEstimateEncoder",
"JamesSteinEncoder",
"CatBoostEncoder",
"GLMMEncoder",
"QuantileEncoder",
"SummaryEncoder",
'RankHotEncoder',
]
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | I fully agree, in fact warnings are annoying. Let's remove this. Do you want me to add a new commit, or do you prefer to do it in the merge process? | JaimeArboleda | 24
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | so we're not actually doing anything with the `input_features` parameter? I thought the point of including it would be to explicitly tell every encoder how to calculate output features from the given input features (this is what the 1-1 mixin does, right?)
For some encoders it gives a not-fitted error, but others don't need fitting.
I feel this is taking a shortcut that sklearn does not take.
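For reference, a minimal sketch of the sklearn-style one-to-one behaviour being discussed. The class name, the fit/fallback logic and the error message below are illustrative assumptions, not the library's actual code:

```python
# Minimal illustration of the sklearn contract for a one-to-one transformer:
# output feature names can be derived directly from the given input features.
import numpy as np
from sklearn.exceptions import NotFittedError


class OneToOneFeatureNamesSketch:
    """Toy transformer showing how a one-to-one encoder could name its outputs."""

    def fit(self, X, y=None):
        # Remember the incoming column names, mirroring sklearn's feature_names_in_ convention.
        self.feature_names_in_ = list(X.columns)
        return self

    def get_feature_names_out(self, input_features=None):
        # Prefer the names the caller passes; otherwise fall back to those captured at fit time.
        if input_features is None:
            input_features = getattr(self, "feature_names_in_", None)
        if input_features is None:
            raise NotFittedError("Pass input_features or fit the transformer first.")
        # One-to-one relation: each input column yields exactly one output column of the same name.
        return np.asarray(input_features, dtype=object)
```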
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please let me know if you like the work in progress, and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | Well, let me explain (for simplicity, every time I say a `Pipeline` it could be that or a `ColumnTransformer` or a `FeatureUnion`...):
* We need the `input_features` parameter because otherwise, when the encoder is part of a `Pipeline`, the method gets called with `input_features` and raises `Exception: BaseEncoder.get_feature_names_out() takes 1 positional argument but 2 were given`.
* Now, what can we do with this parameter? There were two options: modify all encoders to compute the output features from the given `input_features`, or just use `feature_names_out_`, which is already available and computed. Given that this library internally works with dataframes, I thought the second option was a much better idea. There is only one caveat, which again happens when the encoder is part of a `Pipeline`: if the previous transformer outputs a numpy array, the column names won't be good (they will be like col_0_A, col_0_B...). That's why I think we should document that, to ensure full compatibility and 100% correctness of feature names, `sklearn` should be used with the `set_output="pandas"` option (see the sketch after this list). In this case everything works smoothly, as the tests prove.
* Another way to put it: in `sklearn` it makes sense to have a function with this signature, because an estimator can be fitted with a numpy array and should then be able to "compute" the output feature names from a list of `input_features`. But for `category_encoders` this use case makes little sense, as the input is expected to be a pandas dataframe. And this assumption is spread everywhere: even other parameters work with column names, like `mapping` or `cols`; you won't see parameters expecting column names in `sklearn` because it expects that the input can be a numpy array.
* So, yes, I agree the behavior is different from `sklearn`'s, but I think it makes sense as long as the library is based on this "pandas assumption". I think a big redesign would be a good idea, but as we discussed in the issue, it will take a lot of effort.
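For illustration, here is a rough sketch of the intended usage. It assumes scikit-learn >= 1.2 (for `set_output`) and the signature change from this PR; the toy data, `OrdinalEncoder` and `LogisticRegression` steps are only placeholders, not part of the actual change:

```python
# Rough usage sketch (assumes scikit-learn >= 1.2 and the get_feature_names_out
# signature change from this PR); the data and estimators are placeholders.
import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = pd.DataFrame({"colour": ["red", "blue", "red", "green"],
                  "size": ["S", "M", "L", "M"]})
y = [0, 1, 0, 1]

pipe = Pipeline([
    ("encode", ce.OrdinalEncoder()),
    ("model", LogisticRegression()),
])
# Ask every transformer to return pandas DataFrames so the real column names
# (not col_0, col_1, ...) flow through the whole pipeline.
pipe.set_output(transform="pandas")
pipe.fit(X, y)

# The fitted encoder reports the output column names it actually produced.
print(pipe.named_steps["encode"].get_feature_names_out())
```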
To be honest, if you have already set `sklearn` to output `pandas`, there is not much reason to use this method anyway... But at least this modification ensures that you don't get an `Exception`, and it does not break anything. It's a very minor change, much smaller than I would have liked, but the alternative would be a big redesign that only makes sense if you fully drop the "pandas assumption" of the library. | JaimeArboleda | 26 |
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please let me know if you like the work in progress, and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
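# --- Illustrative usage sketch (not part of the original module) ---
# convert_input_vector accepts several target layouts and always yields a Series named
# 'target'. Both calls below are invented examples.
# >>> convert_input_vector([3, 1, 2], index=[0, 1, 2]).name
# 'target'
# >>> convert_input_vector(np.array([[1], [2], [3]]), index=[0, 1, 2]).tolist()
# [1, 2, 3]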
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
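# --- Illustrative usage sketch (not part of the original module) ---
# get_generated_cols recovers the columns an encoder produced for the transformed inputs;
# the expansion of 'city' into two dummy columns below is an invented example.
# >>> orig = pd.DataFrame({'city': ['a'], 'income': [1.0]})
# >>> trans = pd.DataFrame({'city_1': [1], 'city_2': [0], 'income': [1.0]})
# >>> get_generated_cols(orig, trans, ['city'])
# ['city_1', 'city_2']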
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
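# --- Illustrative usage sketch (not part of the original module) ---
# flatten_reverse_dict maps each leaf value of a nested dict back to the tuple of keys
# leading to it (leaf values must be hashable and unique for the reversal to be lossless).
# >>> flatten_reverse_dict({'a': {'b': 1, 'c': 2}})
# {1: ('a', 'b'), 2: ('a', 'c')}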
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary; if it is, a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
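# --- Illustrative usage sketch (not part of the original module) ---
# The calling pattern the docstring above describes, using a supervised encoder such as
# category_encoders.TargetEncoder; X_train, y_train and X_test are placeholders.
# >>> import category_encoders as ce
# >>> enc = ce.TargetEncoder(cols=['city'])
# >>> X_train_enc = enc.fit_transform(X_train, y_train)  # same as enc.fit(X_train, y_train).transform(X_train, y_train)
# >>> X_test_enc = enc.transform(X_test)                  # no target at inference time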
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' and pandas categorical columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not have an index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary; if it is, a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
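# --- Illustrative usage sketch (not part of the original module) ---
# With this change the fitted output names come back as a numpy object array instead of a
# plain list, matching sklearn's get_feature_names_out contract; `df` is a placeholder
# DataFrame containing a categorical 'city' column.
# >>> enc = category_encoders.OrdinalEncoder(cols=['city']).fit(df)
# >>> isinstance(enc.get_feature_names_out(), np.ndarray)
# True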
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | I agree that the proposed change is better than what we have at the moment since it fixes a bug.
I've got one more question: If we change all encoders to properly work with the input features, will we get compatibility even without setting `set_output=pandas`? Even if we still do all the internal stuff in pandas? If so, I think it would be worth it because we'd get compatibility even for sklearn versions < 1.2. This is in fact what I intended when I first raised the issue. In one of the comments I outlined what the changes for each encoder / family of encoders would be. This is far from a complete redesign of the library and also won't get rid of all dataframes. Does this make sense?
I somehow really don't like introducing dependency versions in the code since this will make it a maintenance hell in the long run | PaulWestenthanner | 27 |
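A minimal sketch of the integration under discussion (an illustration, not taken from the PR itself): because `get_feature_names_out` now accepts an `input_features` argument and returns a numpy array, an encoder can participate in `Pipeline.get_feature_names_out` on a recent scikit-learn. The DataFrame, column names and model choice below are invented for demonstration.

import pandas as pd
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

X = pd.DataFrame({'city': ['a', 'b', 'a', 'b'], 'income': [1.0, 2.0, 3.0, 4.0]})
y = pd.Series([0, 1, 0, 1])

pipe = make_pipeline(ce.TargetEncoder(cols=['city']), LogisticRegression()).fit(X, y)
# The encoder step can now answer sklearn's positional get_feature_names_out(input_features) call.
print(pipe[:-1].get_feature_names_out())  # e.g. array(['city', 'income'], dtype=object)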
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' and pandas categorical columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not have an index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary; if it is, a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' and pandas categorical columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not have an index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
        and determines the output feature names from that input, so a prior fit is usually not necessary;
        if a fit is required and the estimator is not fitted, a NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
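    # Illustrative usage sketch (not part of the original module): once the encoder is
    # fitted, the output names come back as an object ndarray and the sklearn-style
    # ``input_features`` argument is accepted but ignored, e.g.
    #
    #   >>> import category_encoders as ce
    #   >>> enc = ce.OrdinalEncoder(cols=['colour'])
    #   >>> enc = enc.fit(pd.DataFrame({'colour': ['red', 'blue', 'red']}))
    #   >>> enc.get_feature_names_out()
    #   array(['colour'], dtype=object)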
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | Well, my gut feeling is that it's not worth it: it will still be a very big effort and it will only fix one subtle bug (which doesn't make anything crash) and there will still be some inconsistencies. Let me try to explain...
* **Current Situation**: If the `category_encoder` is inside a composed `sklearn` transformer, which delegates `get_feature_names_out` to every inner estimator, an exception is raised when calling `get_feature_names_out`, because the signature is not valid. The error is `TypeError: get_feature_names_out() takes 1 positional argument but 2 were given`.
* **Super simple Fix**: By changing the signature but not doing anything with the input parameter, we remove this `Exception` and, making use of the attribute `feature_names_out_`, we get the right answer in most situations. That is, it will be right whenever the `category_encoder` is fitted and transformed alone, or whenever it is included in a composed transformer but in the first layer. However, if the `category_encoder` is not in the first layer of a composed transformer, and if `set_output` is left at its default, then the answer will be wrong, because at fit time the preceding transformer in the pipeline will output a numpy array, and the `category_encoder` will take ["0", "1", ...] as the original column names. This is the only *issue* that will remain with this super simple fix (and the solution is as easy as setting `set_output=pandas`).
* **More thorough fix**: If we modify every encoder by adding handling of the `input_features` in order to generate the column names (which requires some work for some of the encoders), the only added value will be removing the wrong answer in the case mentioned above (when the encoder is part of a composed transformer and has another `sklearn` transformer doing something before it). My point is that, even then, there will be some inconsistencies related to the assumption that the input is a pandas dataframe. One of them is that some parameters of the encoders depend on column names. For example, you can specify the mapping for an `OrdinalEncoder` like this:
```python
OrdinalEncoder(mapping=[{'col': 'col_a', 'mapping': {'a': 2, 'b': 1, 'c': 3}}])
```
This code will crash in the same situation as before: if the `OrdinalEncoder` is part of a pipeline and there is another `sklearn` transformer before it, the columns received during `fit` won't correspond to the columns specified in the `mapping` parameter (unless `set_output=pandas`). You can take a look at this issue [here](https://colab.research.google.com/drive/1lJJvXyAOoG9MBtNiH2qSTrqbzZtEPTeB#scrollTo=nCJtM_1keNTD&line=2&uniqifier=1). That's why I was saying that if we wish to be fully aligned with `sklearn`, there will be more things to do (in particular, for example, no column names should be used in any parameter of any encoder). So I feel more inclined to say "all or nothing": if we want full compatibility with `sklearn`, that's fine, but it will be a breaking change and a lot of work; otherwise, I think the *super simple fix* is good enough. Well, that's my opinion... | JaimeArboleda | 28 |
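A minimal sketch of the failure mode described in the comment above (illustrative only; the column name and pipeline are made up, and `set_output` assumes scikit-learn >= 1.2):

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from category_encoders import OrdinalEncoder

X = pd.DataFrame({"col_a": ["a", "b", "c", "a"]})
mapping = [{"col": "col_a", "mapping": {"a": 2, "b": 1, "c": 3}}]

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),  # emits a numpy array by default
    ("encode", OrdinalEncoder(mapping=mapping)),          # then sees column 0, not "col_a"
])

# Forcing pandas output keeps the original column names end to end, so the explicit
# mapping (and get_feature_names_out) line up with "col_a" again.
pipe.set_output(transform="pandas")
pipe.fit(X)
print(pipe[-1].get_feature_names_out())  # -> ['col_a']
```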
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
    Returns names of 'object' and categorical columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
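# Illustrative sketch (not part of the original module): a plain-list target borrows the
# frame's index, while two pandas objects with mismatched indexes raise, e.g.
#
#   >>> frame = pd.DataFrame({'a': [1, 2]}, index=[10, 20])
#   >>> _, target = convert_inputs(frame, [0, 1])
#   >>> list(target.index)
#   [10, 20]
#   >>> convert_inputs(frame, pd.Series([0, 1], index=[1, 2]))  # doctest: +SKIP
#   ValueError: `X` and `y` both have indexes, but they do not match. ...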
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
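# Illustrative sketch (not part of the original module): nested keys are flattened with
# the separator and then reversed, so every leaf value maps to its key path, e.g.
#
#   >>> flatten_reverse_dict({'a': {'b': 1, 'c': 2}})
#   {1: ('a', 'b'), 2: ('a', 'c')}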
class EncodingRelation(Enum):
    # one input feature gets encoded into one output feature
    ONE_TO_ONE = auto()
    # one input feature gets encoded into as many output features as it has distinct values
    ONE_TO_N_UNIQUE = auto()
    # one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
            and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
        and determines the output feature names from that input, so a prior fit is usually not necessary;
        if a fit is required and the estimator is not fitted, a NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
    Returns names of 'object' and categorical columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
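# Illustrative sketch (not part of the original module): the many accepted target shapes
# all collapse into a Series named 'target' carrying the supplied index, e.g.
#
#   >>> s = convert_input_vector([[1, 0]], index=[5, 6])  # a single row in a matrix
#   >>> (s.name, list(s.index), list(s))
#   ('target', [5, 6], [1, 0])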
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
    # one input feature gets encoded into one output feature
    ONE_TO_ONE = auto()
    # one input feature gets encoded into as many output features as it has distinct values
    ONE_TO_N_UNIQUE = auto()
    # one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
            and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
        and determines the output feature names from that input, so a prior fit is usually not necessary;
        if a fit is required and the estimator is not fitted, a NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
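    # Illustrative sketch (not part of the original module): encoders built on this mixin
    # refuse to fit_transform without a target (TargetEncoder is one such encoder), e.g.
    #
    #   >>> from category_encoders import TargetEncoder
    #   >>> frame = pd.DataFrame({'c': ['a', 'b', 'a']})
    #   >>> TargetEncoder(cols=['c']).fit_transform(frame)  # doctest: +SKIP
    #   TypeError: fit_transform() missing argument: y
    #   >>> TargetEncoder(cols=['c']).fit_transform(frame, [1, 0, 1]).shape
    #   (3, 1)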
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
 | JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | Sorry that I did not answer your question properly: regarding the versions, I would not specify the dependency on 1.2 in the requirements or the setup, because with the simple fix in place the remaining bugs are not, in my opinion, a big issue.
If you want, I could refactor the tests so that I don't require the version for running all tests, but only for some of the asserts (the ones that will fail because the column names won't match what `get_feature_names_out` outputs). That way it will be clearer what does not work when the output is not set to pandas. | JaimeArboleda | 29 |
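For reference, one possible shape for such a version guard in the test suite (purely illustrative; the test class, data and version check are made up):

```python
import unittest

import pandas as pd
import sklearn
from sklearn.pipeline import Pipeline

from category_encoders import OrdinalEncoder

SKLEARN_GE_1_2 = tuple(int(p) for p in sklearn.__version__.split(".")[:2]) >= (1, 2)


class TestFeatureNamesOutInPipeline(unittest.TestCase):
    def test_names_survive_pipeline(self):
        X = pd.DataFrame({"col_a": ["a", "b", "a"]})
        pipe = Pipeline([("encode", OrdinalEncoder(cols=["col_a"]))])
        pipe.fit(X)
        # shape checks run on every supported scikit-learn version
        self.assertEqual(pipe.transform(X).shape, (3, 1))
        if not SKLEARN_GE_1_2:
            self.skipTest("set_output(transform='pandas') needs scikit-learn >= 1.2")
        # column-name assertions only run where pandas output is available
        pipe.set_output(transform="pandas")
        pipe.fit(X)
        self.assertListEqual(list(pipe.get_feature_names_out()), ["col_a"])
```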
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
    # one input feature gets encoded into one output feature
    ONE_TO_ONE = auto()
    # one input feature gets encoded into as many output features as it has distinct values
    ONE_TO_N_UNIQUE = auto()
    # one input feature gets encoded into m output features, where m is not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
            and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | > I've got one more question: If we change all encoders to properly work with the input features, will we get compatibility even without setting `set_output=pandas`?
OK, I see: this is not true, since the columns to encode are always referenced by name in the `cols` parameter.
Then I agree with you that changing `get_feature_names_out` to actually do something with the `input_features` is pretty much useless since it won't fix the issue. | PaulWestenthanner | 30 |
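To make the conclusion above concrete, here is a minimal sketch (the example data, the choice of `TargetEncoder`, and the printed names are illustrative assumptions, not taken from the PR): the columns to encode are fixed by name in `cols` when `fit` runs, so the output feature names come from the fitted `feature_names_out_` state rather than from any `input_features` a caller might pass later.

```python
# Minimal sketch: output names come from fitted state, not from input_features.
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1, 2, 3]})
y = pd.Series([0, 1, 0])

enc = ce.TargetEncoder(cols=["color"]).fit(X, y)  # columns to encode are stored by name
print(enc.get_feature_names_out())  # e.g. ['color', 'size'], read from feature_names_out_
```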
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please let me know if you like the work in progress, and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treat NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error'
'return_nan', 'value' and int. Defaults to None which uses NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input. A fit is usually not necessary and if so a
NotFittedError is raised.
We just require a fit all the time and return the fitted output columns.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | I pretty much agree with you that we should go for the super simple fix. Let me think a little about the version stuff; maybe there is another solution, but I think we're very close to getting this merged. | PaulWestenthanner | 31
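For context on the "super simple fix" agreed above: in the after version of `utils.py`, `get_feature_names_out` accepts an `input_features` argument purely for sklearn API compatibility and still returns the names recorded at fit time, now as an object-dtype numpy array. The sketch below (the data and the `OrdinalEncoder`/`ColumnTransformer` setup are assumptions for illustration, not taken from the PR) shows why accepting the argument matters: composite sklearn estimators pass `input_features` positionally, which a zero-argument signature would reject.

```python
# Sketch: ColumnTransformer calls get_feature_names_out(input_features) on each
# transformer, so the encoder must accept the argument even if it only returns
# the column names it recorded during fit.
import pandas as pd
from sklearn.compose import ColumnTransformer
import category_encoders as ce

X = pd.DataFrame({"color": ["red", "blue", "red"], "size": [1, 2, 3]})

ct = ColumnTransformer(
    [("enc", ce.OrdinalEncoder(), ["color"])],
    remainder="passthrough",
)
ct.fit(X)
print(ct.get_feature_names_out())  # e.g. ['enc__color', 'remainder__size']
```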
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please let me know if you like the work in progress, and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature get encoded into one output feature
ONE_TO_ONE = auto()
# one input feature get encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature get encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input, so a fit is usually not necessary there.
Here we always require a fit and return the fitted output columns;
a NotFittedError is raised if the encoder has not been fitted yet.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
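# Illustrative sketch (hypothetical, not part of the original module): how the fit() flow above
# plays out for one concrete BaseEncoder subclass. OrdinalEncoder is taken from the project
# documentation; the toy frame is invented for illustration only.
def _example_base_encoder_fit():
    import category_encoders as ce
    X = pd.DataFrame({'colour': ['red', 'green', 'red'], 'size': [1, 2, 3]})
    enc = ce.OrdinalEncoder().fit(X)
    assert enc.cols == ['colour']                        # only the non-numeric column is selected
    assert enc.feature_names_in_ == ['colour', 'size']   # but all input columns are remembered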
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input, so a fit is usually not necessary there.
Here we always require a fit and return the fitted output columns; a NotFittedError is raised otherwise.
The input_features argument is accepted only for scikit-learn API compatibility and is ignored.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
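# Illustrative sketch (hypothetical, not part of the original module): the train/test asymmetry
# described in the docstrings above, with TargetEncoder as one supervised encoder taken from the
# project documentation and invented toy data.
def _example_supervised_transform():
    import category_encoders as ce
    X_train = pd.DataFrame({'colour': ['red', 'green', 'red', 'green']})
    y_train = pd.Series([1, 0, 1, 0])
    X_test = pd.DataFrame({'colour': ['green', 'red']})
    enc = ce.TargetEncoder()
    X_train_enc = enc.fit_transform(X_train, y_train)  # training data: pass the target
    X_test_enc = enc.transform(X_test)                 # test data: no target available
    return X_train_enc, X_test_enc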
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | Hi, I've added some thoughts on the version. I'll resolve the discussion here. Please have a look at the other comment I've made and if you're fine with it we're ready to merge! | PaulWestenthanner | 32 |
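A minimal sketch of the behaviour change recorded in the diff above (the encoder instance `enc` below is hypothetical and assumed to be already fitted): get_feature_names_out used to return a plain Python list, while the revised version returns a numpy object array and accepts an ignored input_features argument, matching the scikit-learn convention.
names = enc.get_feature_names_out()
assert isinstance(names, np.ndarray)   # was a plain list before the change
assert names.dtype == object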
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | category_encoders/utils.py | """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
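# Illustrative sketch (not part of the original module): get_obj_cols selects the columns that
# encoders treat as categorical by default, i.e. 'object' and pandas 'category' dtypes.
def _example_get_obj_cols():
    df = pd.DataFrame({'a': ['x', 'y'],
                       'b': pd.Categorical(['u', 'v']),
                       'c': [1.0, 2.0]})
    assert get_obj_cols(df) == ['a', 'b']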
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
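# Illustrative sketch (not part of the original module): when only one of X and y is a pandas
# object, convert_inputs aligns the other one to its index.
def _example_convert_inputs():
    X = pd.DataFrame({'a': [1, 2]}, index=[10, 20])
    y = [0, 1]                              # plain list without an index of its own
    X_out, y_out = convert_inputs(X, y)
    assert list(y_out.index) == [10, 20]    # y inherited X's index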
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
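# Illustrative sketch (not part of the original module): flatten_reverse_dict maps every leaf
# value of a nested dict back to the tuple of keys leading to it.
def _example_flatten_reverse_dict():
    nested = {'col': {'cat_a': 1, 'cat_b': 2}}
    assert flatten_reverse_dict(nested) == {1: ('col', 'cat_a'), 2: ('col', 'cat_b')}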
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self) -> List[str]:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input, so a fit is usually not necessary there.
Here we always require a fit and return the fitted output columns;
a NotFittedError is raised if the encoder has not been fitted yet.
Returns
-------
feature_names: list
A list with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return out_feats
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| """A collection of shared utilities for all encoders, not intended for external use."""
from abc import abstractmethod
from enum import Enum, auto
import warnings
import pandas as pd
import numpy as np
import sklearn.base
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.exceptions import NotFittedError
from typing import Dict, List, Optional, Union
from scipy.sparse import csr_matrix
__author__ = 'willmcginnis'
def convert_cols_to_list(cols):
if isinstance(cols, pd.Series):
return cols.tolist()
elif isinstance(cols, np.ndarray):
return cols.tolist()
elif np.isscalar(cols):
return [cols]
elif isinstance(cols, set):
return list(cols)
elif isinstance(cols, tuple):
return list(cols)
elif pd.api.types.is_categorical_dtype(cols):
return cols.astype(object).tolist()
return cols
def get_obj_cols(df):
"""
Returns names of 'object' columns in the DataFrame.
"""
obj_cols = []
for idx, dt in enumerate(df.dtypes):
if dt == 'object' or is_category(dt):
obj_cols.append(df.columns.values[idx])
if not obj_cols:
print("Warning: No categorical columns found. Calling 'transform' will only return input data.")
return obj_cols
def is_category(dtype):
return pd.api.types.is_categorical_dtype(dtype)
def convert_inputs(X, y, columns=None, index=None, deep=False):
"""
Unite arraylike `X` and vectorlike `y` into a DataFrame and Series.
If both are pandas types already, raises an error if their indexes do not match.
If one is pandas, the returns will share that index.
If neither is pandas, a default index will be used, unless `index` is passed.
Parameters
----------
X: arraylike
y: listlike
columns: listlike
Specifies column names to use for `X`.
Ignored if `X` is already a dataframe.
If `None`, use the default pandas column names.
index: listlike
The index to use, if neither `X` nor `y` is a pandas type.
(If one has an index, then this has no effect.)
If `None`, use the default pandas index.
deep: bool
Whether to deep-copy `X`.
"""
X_alt_index = y.index if isinstance(y, pd.Series) else index
X = convert_input(X, columns=columns, deep=deep, index=X_alt_index)
if y is not None:
y = convert_input_vector(y, index=X.index)
# N.B.: If either was already pandas, it keeps its index.
if any(X.index != y.index):
msg = "`X` and `y` both have indexes, but they do not match. If you are shuffling your input data on " \
"purpose (e.g. via permutation_test_score) use np arrays instead of data frames / series"
raise ValueError(msg)
if X.shape[0] != y.shape[0]:
raise ValueError("The length of X is " + str(X.shape[0]) + " but length of y is " + str(y.shape[0]) + ".")
return X, y
def convert_input(X, columns=None, deep=False, index=None):
"""
Unite data into a DataFrame.
Objects that do not contain column names take the names from the argument.
Optionally perform deep copy of the data.
"""
if not isinstance(X, pd.DataFrame):
if isinstance(X, pd.Series):
X = pd.DataFrame(X, copy=deep)
else:
if columns is not None and np.size(X,1) != len(columns):
raise ValueError('The count of the column names does not correspond to the count of the columns')
if isinstance(X, list):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index) # lists are always copied, but for consistency, we still pass the argument
elif isinstance(X, (np.generic, np.ndarray)):
X = pd.DataFrame(X, columns=columns, copy=deep, index=index)
elif isinstance(X, csr_matrix):
X = pd.DataFrame(X.todense(), columns=columns, copy=deep, index=index)
else:
raise ValueError(f'Unexpected input type: {type(X)}')
elif deep:
X = X.copy(deep=True)
return X
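# Illustrative sketch (not part of the original module): convert_input turns list, ndarray or
# sparse input into a DataFrame, taking column names from the argument when the data has none.
def _example_convert_input():
    arr = np.array([[1, 2], [3, 4]])
    df = convert_input(arr, columns=['a', 'b'])
    assert isinstance(df, pd.DataFrame)
    assert list(df.columns) == ['a', 'b']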
def convert_input_vector(y, index):
"""
Unite target data type into a Series.
If the target is a Series or a DataFrame, we preserve its index.
But if the target does not contain index attribute, we use the index from the argument.
"""
if y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
if isinstance(y, pd.Series):
return y
elif isinstance(y, np.ndarray):
if len(np.shape(y))==1: # vector
return pd.Series(y, name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[0]==1: # single row in a matrix
return pd.Series(y[0, :], name='target', index=index)
elif len(np.shape(y))==2 and np.shape(y)[1]==1: # single column in a matrix
return pd.Series(y[:, 0], name='target', index=index)
else:
raise ValueError(f'Unexpected input shape: {np.shape(y)}')
elif np.isscalar(y):
return pd.Series([y], name='target', index=index)
elif isinstance(y, list):
if len(y)==0: # empty list
return pd.Series(y, name='target', index=index, dtype=float)
elif len(y)>0 and not isinstance(y[0], list): # vector
return pd.Series(y, name='target', index=index)
elif len(y)>0 and isinstance(y[0], list) and len(y[0])==1: # single row in a matrix
flatten = lambda y: [item for sublist in y for item in sublist]
return pd.Series(flatten(y), name='target', index=index)
elif len(y)==1 and len(y[0])==0 and isinstance(y[0], list): # single empty column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=float)
elif len(y)==1 and isinstance(y[0], list): # single column in a matrix
return pd.Series(y[0], name='target', index=index, dtype=type(y[0][0]))
else:
raise ValueError('Unexpected input shape')
elif isinstance(y, pd.DataFrame):
if len(list(y))==0: # empty DataFrame
return pd.Series(name='target', index=index, dtype=float)
if len(list(y))==1: # a single column
return y.iloc[:, 0]
else:
raise ValueError(f'Unexpected input shape: {y.shape}')
else:
return pd.Series(y, name='target', index=index) # this covers tuples and other directly convertible types
def get_generated_cols(X_original, X_transformed, to_transform):
"""
Returns a list of the generated/transformed columns.
Arguments:
X_original: df
the original (input) DataFrame.
X_transformed: df
the transformed (current) DataFrame.
to_transform: [str]
a list of columns that were transformed (as in the original DataFrame), commonly self.feature_names_in.
Output:
a list of columns that were transformed (as in the current DataFrame).
"""
original_cols = list(X_original.columns)
if len(to_transform) > 0:
[original_cols.remove(c) for c in to_transform]
current_cols = list(X_transformed.columns)
if len(original_cols) > 0:
[current_cols.remove(c) for c in original_cols]
return current_cols
def flatten_reverse_dict(d):
sep = "___"
[flat_dict] = pd.json_normalize(d, sep=sep).to_dict(orient='records')
reversed_flat_dict = {v: tuple(k.split(sep)) for k, v in flat_dict.items()}
return reversed_flat_dict
class EncodingRelation(Enum):
# one input feature gets encoded into one output feature
ONE_TO_ONE = auto()
# one input feature gets encoded into as many output features as it has distinct values
ONE_TO_N_UNIQUE = auto()
# one input feature gets encoded into m output features that are not the number of distinct values
ONE_TO_M = auto()
# all N input features are encoded into M output features.
# The encoding is done globally on all the input, not on a per-feature basis
N_TO_M = auto()
def get_docstring_output_shape(in_out_relation: EncodingRelation):
if in_out_relation == EncodingRelation.ONE_TO_ONE:
return "n_features"
elif in_out_relation == EncodingRelation.ONE_TO_N_UNIQUE:
return "n_features * respective cardinality"
elif in_out_relation == EncodingRelation.ONE_TO_M:
return "M features (n_features < M)"
elif in_out_relation == EncodingRelation.N_TO_M:
return "M features (M can be anything)"
class BaseEncoder(BaseEstimator):
_dim: Optional[int]
cols: List[str]
use_default_cols: bool
handle_missing: str
handle_unknown: str
verbose: int
drop_invariant: bool
invariant_cols: List[str] = []
return_df: bool
supervised: bool
encoding_relation: EncodingRelation
INVARIANCE_THRESHOLD = 10e-5 # columns with variance less than this will be considered constant / invariant
def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True,
handle_unknown='value', handle_missing='value', **kwargs):
"""
Parameters
----------
verbose: int
integer indicating verbosity of output. 0 for none.
cols: list
a list of columns to encode, if None, all string and categorical columns
will be encoded.
drop_invariant: bool
boolean for whether or not to drop columns with 0 variance.
return_df: bool
boolean for whether to return a pandas DataFrame from transform and inverse transform
(otherwise it will be a numpy array).
handle_missing: str
how to handle missing values at fit time. Options are 'error', 'return_nan',
and 'value'. Default 'value', which treats NaNs as a countable category at
fit time.
handle_unknown: str, int or dict of {column : option, ...}.
how to handle unknown labels at transform time. Options are 'error',
'return_nan', 'value' and int. Defaults to None, which uses the NaN behaviour
specified at fit time. Passing an int will fill with this int value.
kwargs: dict.
additional encoder specific parameters like regularisation.
"""
self.return_df = return_df
self.drop_invariant = drop_invariant
self.invariant_cols = []
self.verbose = verbose
self.use_default_cols = cols is None # if True, even a repeated call of fit() will select string columns from X
self.cols = cols # note that cols are only the columns to be encoded, feature_names_in_ are all columns
self.mapping = None
self.handle_unknown = handle_unknown
self.handle_missing = handle_missing
self._dim = None
def fit(self, X, y=None, **kwargs):
"""Fits the encoder according to X and y.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples
and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
self._check_fit_inputs(X, y)
X, y = convert_inputs(X, y)
self.feature_names_in_ = X.columns.tolist()
self.n_features_in_ = len(self.feature_names_in_)
self._dim = X.shape[1]
self._determine_fit_columns(X)
if not set(self.cols).issubset(X.columns):
raise ValueError('X does not contain the columns listed in cols')
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
self._fit(X, y, **kwargs)
# for finding invariant columns transform without y (as is done on the test set)
X_transformed = self.transform(X, override_return_df=True)
self.feature_names_out_ = X_transformed.columns.tolist()
# drop all output columns with 0 variance.
if self.drop_invariant:
generated_cols = get_generated_cols(X, X_transformed, self.cols)
self.invariant_cols = [x for x in generated_cols if X_transformed[x].var() <= self.INVARIANCE_THRESHOLD]
self.feature_names_out_ = [x for x in self.feature_names_out_ if x not in self.invariant_cols]
return self
def _check_fit_inputs(self, X, y):
if self._get_tags().get('supervised_encoder') and y is None:
raise ValueError('Supervised encoders need a target for the fitting. The target cannot be None')
def _check_transform_inputs(self, X):
if self.handle_missing == 'error':
if X[self.cols].isnull().any().any():
raise ValueError('Columns to be encoded can not contain null')
if self._dim is None:
raise NotFittedError('Must train encoder before it can be used to transform data.')
# then make sure that it is the right size
if X.shape[1] != self._dim:
raise ValueError(f'Unexpected input dimension {X.shape[1]}, expected {self._dim}')
def _drop_invariants(self, X: pd.DataFrame, override_return_df: bool) -> Union[np.ndarray, pd.DataFrame]:
if self.drop_invariant:
X = X.drop(columns=self.invariant_cols)
if self.return_df or override_return_df:
return X
else:
return X.values
def _determine_fit_columns(self, X: pd.DataFrame) -> None:
""" Determine columns used by encoder.
Note that the implementation also deals with re-fitting the same encoder object with different columns.
:param X: input data frame
:return: none, sets self.cols as a side effect
"""
# if columns aren't passed, just use every string column
if self.use_default_cols:
self.cols = get_obj_cols(X)
else:
self.cols = convert_cols_to_list(self.cols)
def get_feature_names(self) -> List[str]:
warnings.warn("`get_feature_names` is deprecated in all of sklearn. Use `get_feature_names_out` instead.",
category=FutureWarning)
return self.get_feature_names_out()
def get_feature_names_out(self, input_features=None) -> np.ndarray:
"""
Returns the names of all transformed / added columns.
Note that in sklearn the get_feature_names_out function takes the feature_names_in as an argument
and determines the output feature names using the input, so a fit is usually not necessary there.
Here we always require a fit and return the fitted output columns; a NotFittedError is raised otherwise.
The input_features argument is accepted only for scikit-learn API compatibility and is ignored.
Returns
-------
feature_names: np.ndarray
A numpy array with all feature names transformed or added.
Note: potentially dropped features (because the feature is constant/invariant) are not included!
"""
out_feats = getattr(self, "feature_names_out_", None)
if not isinstance(out_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return np.array(out_feats, dtype=object)
def get_feature_names_in(self) -> List[str]:
"""
Returns the names of all input columns present when fitting.
These columns are necessary for the transform step.
"""
in_feats = getattr(self, "feature_names_in_", None)
if not isinstance(in_feats, list):
raise NotFittedError("Estimator has to be fitted to return feature names.")
else:
return in_feats
@abstractmethod
def _fit(self, X: pd.DataFrame, y: Optional[pd.Series], **kwargs):
...
class SupervisedTransformerMixin(sklearn.base.TransformerMixin):
def _more_tags(self):
return {'supervised_encoder': True}
def transform(self, X, y=None, override_return_df=False):
"""Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given or not. This is mainly due to regularisation
in order to avoid overfitting.
On training data transform should be called with y, on test data without.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
y : array-like, shape = [n_samples] or None
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X, y = convert_inputs(X, y, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X, y)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
...
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
class UnsupervisedTransformerMixin(sklearn.base.TransformerMixin):
def transform(self, X, override_return_df=False):
"""Perform the transformation to new categorical data.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
override_return_df : bool
override self.return_df to force to return a data frame
Returns
-------
p : array or DataFrame, shape = [n_samples, n_features_out]
Transformed values with encoding applied.
"""
# first check the type
X = convert_input(X, deep=True)
self._check_transform_inputs(X)
if not list(self.cols):
return X
X = self._transform(X)
return self._drop_invariants(X, override_return_df)
@abstractmethod
def _transform(self, X) -> pd.DataFrame:
...
class TransformerWithTargetMixin:
def _more_tags(self):
return {'supervised_encoder': True}
def fit_transform(self, X, y=None, **fit_params):
"""
Encoders that utilize the target must make sure that the training data are transformed with:
transform(X, y)
and not with:
transform(X)
"""
if y is None:
raise TypeError('fit_transform() missing argument: ''y''')
return self.fit(X, y, **fit_params).transform(X, y)
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | Hi! Ok, I have added both suggestions! | JaimeArboleda | 33 |
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | docs/source/index.rst | .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can drop any columns with very low variance based on training set optionally
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.GrayEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.QuantileEncoder(cols=[...])
encoder = ce.RankHotEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
Contents:
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
gray
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
quantile
rankhot
sum
summary
targetencoder
woe
wrapper
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can drop any columns with very low variance based on training set optionally
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer (\*)
(\*) For full compatibility with Pipelines and ColumnTransformers, and consistent behaviour of `get_feature_names_out`, it's recommended to upgrade `sklearn` to at least version 1.2.0 and to set the output to pandas:
.. code-block:: python
import sklearn
sklearn.set_config(transform_output="pandas")
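As a purely illustrative sketch (assuming scikit-learn >= 1.2 and a category_encoders version that implements `get_feature_names_out`; the column names below are made up), the feature names can then be queried from an encoder inside a pipeline:

.. code-block:: python

    import pandas as pd
    import sklearn
    import category_encoders as ce
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    sklearn.set_config(transform_output="pandas")

    X = pd.DataFrame({"fruit": ["apple", "pear", "apple", "pear"],
                      "size": ["S", "M", "L", "M"]})
    y = pd.Series([0, 1, 1, 0])

    pipe = Pipeline([("encoder", ce.OrdinalEncoder()), ("clf", LogisticRegression())])
    pipe.fit(X, y)

    # feature names produced by the fitted encoder step
    print(pipe.named_steps["encoder"].get_feature_names_out())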
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.GrayEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.QuantileEncoder(cols=[...])
encoder = ce.RankHotEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
Known issues:
-------------
`CategoryEncoders` internally works with `pandas DataFrames` as opposed to `sklearn`, which works with `numpy arrays`. This can cause problems in `sklearn` versions prior to 1.2.0. In order to ensure full compatibility with `sklearn`, set `sklearn` to also output `DataFrames`. This can be done by
.. code-block:: python

    sklearn.set_config(transform_output="pandas")
for a whole project or just for a single pipeline using
.. code-block:: python

    Pipeline(
        steps=[
            ("preprocessor", SomePreprocessor().set_output(transform="pandas")),
            ("encoder", SomeEncoder()),
        ]
    )
If you experience another bug, feel free to report it on `github <https://github.com/scikit-learn-contrib/category_encoders/issues>`_
Contents:
---------
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
gray
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
quantile
rankhot
sum
summary
targetencoder
woe
wrapper
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | could you please introduce a section `known issues` here after usage and before contents, that states something like:
"""
CategoryEncoders internally works with pandas `DataFrames` as opposed to sklearn, which works with numpy arrays. This can cause problems in sklearn versions prior to `1.2.0`. In order to ensure full compatibility with sklearn, set sklearn to also output `DataFrames`. This can be done by
```python
sklearn.set_config(transform_output="pandas")
```
for a whole project or just for a single pipeline using
```python
Pipeline(
steps=[
("preprocessor", SomePreprocessor().set_output(transform="pandas")),
("encoder", SomeEncoder()),
]
)
```
If you experience another bug, feel free to report it on github https://github.com/scikit-learn-contrib/category_encoders/issues
"""
I think this makes it sufficiently clear and is better than the status quo. Do you agree?
| PaulWestenthanner | 34 |
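A hedged sketch of the work-around discussed above, assuming scikit-learn >= 1.2; `SimpleImputer` stands in here for the generic `SomePreprocessor`, and the column name and data are invented:
```python
import numpy as np
import pandas as pd
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline

X = pd.DataFrame({"city": ["Paris", np.nan, "Berlin", "Paris"]})
y = pd.Series([1, 0, 1, 0])

pipe = Pipeline(
    steps=[
        # Without pandas output the imputer would hand a numpy array to the next
        # step, and the encoder could no longer address the "city" column by name.
        ("imputer", SimpleImputer(strategy="most_frequent").set_output(transform="pandas")),
        ("encoder", ce.TargetEncoder(cols=["city"])),
    ]
)
X_encoded = pipe.fit_transform(X, y)  # DataFrame with the encoded "city" column
```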
scikit-learn-contrib/category_encoders | 398 | (WIP) Partial fix for getting feature names out | I think this is a partial fix for this open issue:
https://github.com/scikit-learn-contrib/category_encoders/issues/395
It remains to check the behaviour of other estimators that are not ONE_TO_ONE.
Please, let me know if you like the work in progress and I will try to continue. | null | 2023-02-23 13:33:41+00:00 | 2023-03-13 11:48:24+00:00 | docs/source/index.rst | .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can drop any columns with very low variance based on training set optionally
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.GrayEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.QuantileEncoder(cols=[...])
encoder = ce.RankHotEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
Contents:
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
gray
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
quantile
rankhot
sum
summary
targetencoder
woe
wrapper
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can drop any columns with very low variance based on training set optionally
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer (\*)
(\*) For full compatibility with Pipelines and ColumnTransformers, and consistent behaviour of `get_feature_names_out`, it's recommended to upgrade `sklearn` to at least version 1.2.0 and to set the output to pandas:
.. code-block:: python
import sklearn
sklearn.set_config(transform_output="pandas")
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.GrayEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.QuantileEncoder(cols=[...])
encoder = ce.RankHotEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
Known issues:
-------------
`CategoryEncoders` internally works with `pandas DataFrames` as opposed to `sklearn`, which works with `numpy arrays`. This can cause problems in `sklearn` versions prior to 1.2.0. In order to ensure full compatibility with `sklearn`, set `sklearn` to also output `DataFrames`. This can be done by
.. code-block:: python

    sklearn.set_config(transform_output="pandas")
for a whole project or just for a single pipeline using
.. code-block:: python

    Pipeline(
        steps=[
            ("preprocessor", SomePreprocessor().set_output(transform="pandas")),
            ("encoder", SomeEncoder()),
        ]
    )
If you experience another bug, feel free to report it on `github <https://github.com/scikit-learn-contrib/category_encoders/issues>`_
Contents:
---------
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
gray
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
quantile
rankhot
sum
summary
targetencoder
woe
wrapper
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| JaimeArboleda | 5eb7a2d6359d680bdadd0534bdb983e712a47f9c | 570827e6b48737d0c9aece8aca31edd6da02c1b2 | I think it's very good, yes. Thanks! I don't see any need for more. Let me add this piece and remove the other one so that you can merge :) | JaimeArboleda | 35 |