The project leverages docker-compose with a custom Python script, so you need to have the following packages installed on your machine:
On some systems you may find older versions pre-installed. Please check this and install a supported version before attempting the installation; otherwise it will fail.
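As a quick sketch (assuming the usual tool names `docker`, `docker-compose` and `python3`), you can check which versions are installed like this:

```shell
# print the versions of the tools IntelOwl relies on;
# warn about any that are missing
for tool in docker docker-compose python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version
  else
    echo "missing: $tool" >&2
  fi
done
```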
We strongly suggest reading through the whole page to configure IntelOwl in the most appropriate way. However, if you feel lazy, you can just install and test IntelOwl with the following steps. Be sure to run the python commands with sudo if permissions/roles have not been set.
```
# clone the IntelOwl project repository
git clone https://github.com/intelowlproject/IntelOwl
cd IntelOwl/

# construct environment files from templates
cd docker/
cp env_file_app_template env_file_app
cp env_file_postgres_template env_file_postgres
cp env_file_integrations_template env_file_integrations

# start the app
cd ..
python3 start.py prod up

# create a super user
docker exec -ti intelowl_uwsgi python3 manage.py createsuperuser

# now the app is running on http://localhost:80
```
Hint: there is a YouTube video that may help with the installation process. (Many steps have changed since v2.0.0.)
IntelOwl is composed of various different services, namely:
Angular: Frontend (IntelOwl-ng)
Rabbit-MQ: Message Broker
Celery: Task Queue
Nginx: Reverse proxy for the Django API and web assets.
Uwsgi: Application Server
Elastic Search (optional): Auto-sync indexing of analysis results.
Kibana (optional): GUI for Elastic Search. We provide a saved configuration with dashboards and visualizations.
Flower (optional): Celery Management Web Interface
All these components are managed via docker-compose.
Open a terminal and execute the commands below to construct new environment files from the provided templates.

```
cd docker/
cp env_file_app_template env_file_app
cp env_file_postgres_template env_file_postgres
cp env_file_integrations_template env_file_integrations
```
Environment configuration (required)¶
In the env_file_app file, configure the different variables as explained below.
REQUIRED variables to run the image:
DB_PASSWORD: PostgreSQL configuration (the DB credentials should match the ones configured in env_file_postgres).
Strongly recommended variable to set:
DJANGO_SECRET: random 50-character key, must be unique. If you do not provide one, IntelOwl will automatically set a new secret on every run.
mywebsite.com): the web domain of your instance; this is used for generating links to analysis results.
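One possible way to generate a suitable random 50-character value for DJANGO_SECRET (a sketch; any source of strong randomness works):

```shell
# generate a random 50-character alphanumeric secret
DJANGO_SECRET="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 50)"
echo "$DJANGO_SECRET"
```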
Optional variables needed to enable specific analyzers:
ABUSEIPDB_KEY: AbuseIPDB API key
AUTH0_KEY: Auth0 API Key
SECURITYTRAILS_KEY: Securitytrails API Key
SHODAN_KEY: Shodan API key
HUNTER_API_KEY: Hunter.io API key
GSF_KEY: Google Safe Browsing API key
OTX_KEY: Alienvault OTX API key
CIRCL_CREDENTIALS: CIRCL PDNS credentials in the format:
VT_KEY: VirusTotal API key
HA_KEY: HybridAnalysis API key
INTEZER_KEY: Intezer API key
INQUEST_API_KEY: InQuest API key
FIRST_MISP_API: FIRST MISP API key
FIRST_MISP_URL: FIRST MISP URL
MISP_KEY: your own MISP instance key
MISP_URL: your own MISP instance URL
DNSDB_KEY: DNSDB API key
CUCKOO_URL: your cuckoo instance URL
HONEYDB_API_KEY: HoneyDB API credentials
CENSYS_API_SECRET: Censys credentials
ONYPHE_KEY: Onyphe.io’s API Key
GREYNOISE_API_KEY: GreyNoise API (docs)
INTELX_API_KEY: IntelligenceX API (docs)
UNPAC_ME_API_KEY: UnpacMe API (docs)
IPINFO_KEY: ipinfo API key
ZOOMEYE_KEY: ZoomEye API key (docs)
TRIAGE_KEY: tria.ge API key (docs)
WIGLE_KEY: WiGLE API key (docs)
XFORCE_PASSWORD: IBM X-Force Exchange API (docs)
MWDB_KEY: API key for MWDB
SSAPINET_KEY: screenshotapi.net (docs)
MALPEDIA_KEY: Malpedia API key (docs)
OPENCTI_KEY: your own OpenCTI instance key
OPENCTI_URL: your own OpenCTI instance URL
YETI_KEY: your own YETI instance key
YETI_URL: your own YETI instance URL
SPYSE_API_KEY: Spyse API key. Register here: https://spyse.com/user/registration
Optional variables needed to work with specific connectors:
CONNECTOR_MISP_KEY: your own MISP instance key, to use with the MISP connector
CONNECTOR_MISP_URL: your own MISP instance URL, to use with the MISP connector
CONNECTOR_OPENCTI_KEY: your own OpenCTI instance key, to use with the OpenCTI connector
CONNECTOR_OPENCTI_URL: your own OpenCTI instance URL, to use with the OpenCTI connector
CONNECTOR_YETI_KEY: your own YETI instance key, to use with the YETI connector
CONNECTOR_YETI_URL: your own YETI instance API URL, to use with the YETI connector
Advanced additional configuration:
OLD_JOBS_RETENTION_DAYS: database retention for analysis results (default: 3 days). Increase this if you want to keep old analyses in the database for longer.
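Putting a few of these together, a minimal env_file_app fragment might look like the following. All values here are placeholders, and the DB_* names other than DB_PASSWORD are illustrative assumptions; check the template file for the authoritative variable list.

```shell
# minimal illustrative env_file_app fragment -- placeholder values only
DB_HOST=postgres
DB_PORT=5432
DB_USER=intelowl_user
DB_PASSWORD=changeme
DJANGO_SECRET=replace-me-with-a-unique-random-50-char-string
OLD_JOBS_RETENTION_DAYS=3
VT_KEY=your-virustotal-api-key
```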
Database configuration (required)¶
In the env_file_postgres file, configure the different variables as explained below.
If you prefer to use an external PostgreSQL instance, just remove the related service from the docker/default.yml file and provide the configuration to connect to your own instance.
Web server configuration (optional)¶
Intel Owl provides basic configuration for:
In case you enable HTTPS, remember to set the environment variable HTTPS_ENABLED to "enabled" to increase the security of the application.
There are 3 options to execute the web server:
HTTP only (default)
The project will use the default deployment configuration and HTTP only.
HTTPS with your own certificate
The project provides a template file to configure Nginx to serve HTTPS: you should change server_name in that file. Then modify the nginx service configuration: under volumes, add an entry mounting the directory that hosts your certificate and your certificate key. Additionally, if you use Flower, you should apply the corresponding change in its configuration as well.
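As an illustration, the volumes entry in the nginx service might look like this. The certificate paths and file names below are placeholders, not values from the project; adapt them to where your certificate actually lives.

```yaml
nginx:
  volumes:
    # mount the host directory containing your certificate and key (paths are placeholders)
    - ./certs/my_cert.pem:/etc/nginx/certs/my_cert.pem:ro
    - ./certs/my_cert.key:/etc/nginx/certs/my_cert.key:ro
```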
HTTPS with Let’s Encrypt
We provide a specific docker-compose file that leverages Traefik to allow fast deployments of public-facing, HTTPS-enabled applications.
Before using it, you should edit the configuration file docker/traefik.override.yml, changing the email address and the hostname where the application is served. For a detailed explanation, follow the official documentation: Traefik docs.
After the configuration is done, you can add the --traefik option when executing start.py.

Important Info: IntelOwl depends heavily on docker and docker-compose. To hide this complexity from the end user, the project leverages a custom script (start.py) to interface with docker-compose.

You may invoke $ python3 start.py --help to get help and usage info.
The CLI provides the primitives to correctly build, run, or stop the containers for IntelOwl.
Now that you have completed the different configurations, starting the containers is as simple as invoking:

```
python3 start.py prod up
```
You can add the -d parameter to run the application in the background.

To stop the application you have to:

if executed without -d: press CTRL+C in the terminal where it is running;
if executed with -d: run

```
python3 start.py prod down
```
Cleanup of database and application¶
This is a destructive operation, but it can be useful to start the project again from scratch:

```
python3 start.py prod down -v
```
After the first run, you may want to create a superuser:

```
docker exec -ti intelowl_uwsgi python3 manage.py createsuperuser
```
Then you can add other users directly from the Django Admin Interface after logging in with the superuser account.
Deploy on Remnux¶
Remnux is a Linux Toolkit for Malware Analysis.
IntelOwl and Remnux have the same goal: save the time of people who need to perform malware analysis or info gathering.
Therefore we suggest that Remnux users install IntelOwl to leverage all the tools provided by both projects in a single environment.
To do that, you can follow the same steps detailed above for the installation of IntelOwl.
Update to the most recent version¶
To update the project with the most recent available code, follow these steps:

```
cd <your_intel_owl_directory>     # go into the project directory
git pull                          # pull new changes
python3 start.py prod stop        # kill the currently running IntelOwl containers
python3 start.py prod up --build  # restart the IntelOwl application
```
Rebuilding the project / Creating a custom docker build¶
If you make some code changes and you like to rebuild the project, follow these steps:
Run python3 start.py test build --tag=<your_tag> . to build the new docker image.
Add this new image tag in the docker-compose configuration.
Start the containers with python3 start.py test up --build.
Updating to >=2.0.0 from a 1.x.x version¶
Users upgrading from previous versions need to manually move the env_file_app, env_file_postgres and env_file_integrations files under the new docker/ directory.
Updating to >v1.3.x from any prior version¶
If you are updating to >v1.3.0 from any prior version, you need to execute a helper script so that the old data present in the database doesn’t break.
Follow the update steps above. Once the docker containers are up and running, execute the following in a new terminal:

```
docker exec -ti intelowl_uwsgi bash
```

to get a shell session inside IntelOwl's container. Now copy and paste the command below into this new session:

```
curl https://gist.githubusercontent.com/Eshaan7/b111f887fa8b860c762aa38d99ec5482/raw/758517acf87f9b45bd22f06aee57251b1f3b1bbf/update_to_v1.3.0.py | python -
```
If you see “Update successful!”, everything went fine and now you can enjoy the new features!