Installation¶
Requirements¶
The project leverages docker-compose with a custom Python script, so you need to have the following packages installed on your machine:
docker - v1.13.0+
docker-compose - v1.23.2+
python - v3.6+
There are additional Python dependencies (listed in the requirements/pre-requirements.txt file) that can be installed using the initialize.sh script.
Some systems ship with older pre-installed versions of these packages. Please check this and install a supported version before attempting the installation; otherwise it will fail.
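You can quickly check the installed versions with commands like the following:
docker --version            # should report 1.13.0 or later
docker-compose --version    # should report 1.23.2 or later
python3 --version           # should report 3.6 or later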
Note
- The project uses public docker images that are available on Docker Hub
- IntelOwl is tested and supported to work on a Linux-based OS. It may also run on Windows, but that is not officially supported yet.
- Before installing, remember that you must comply with the LICENSE and the Legal Terms.
TL;DR¶
We strongly suggest reading through the whole page to configure IntelOwl in the most appropriate way.
However, if you feel lazy, you can just install and test IntelOwl with the following steps. Be sure to run docker and python commands with sudo if permissions/roles have not been set.
Note: We’ve added a new script, initialize.sh, that checks compatibility with your system and attempts to install the required dependencies.
# clone the IntelOwl project repository
git clone https://github.com/intelowlproject/IntelOwl
cd IntelOwl/
# construct environment files from templates
cd docker/
cp env_file_app_template env_file_app
cp env_file_postgres_template env_file_postgres
cp env_file_integrations_template env_file_integrations
# start the app
cd ..
./initialize.sh
python3 start.py prod up
# create a super user
docker exec -ti intelowl_uwsgi python3 manage.py createsuperuser
# now the app is running on http://localhost:80
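If you want a quick sanity check that the web server is responding, a request like the one below should return an HTTP response once the containers are up (the exact status code may depend on your configuration):
curl -I http://localhost:80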
Hint
There is a YouTube video that may help with the installation process. (Many steps have changed since v2.0.0.)
Deployment Components¶
IntelOwl is composed of various services, namely:
Angular: Frontend (IntelOwl-ng)
Django: Backend
PostgreSQL: Database
RabbitMQ: Message Broker
Celery: Task Queue
Nginx: Reverse proxy for the Django API and web assets
uWSGI: Application Server
Elasticsearch (optional): Auto-sync indexing of analysis results
Kibana (optional): GUI for Elasticsearch. We provide a saved configuration with dashboards and visualizations.
Flower (optional): Celery Management Web Interface
All these components are managed via docker-compose.
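Once the stack is up, you can list the IntelOwl containers to see these components running. A possible check, assuming the default intelowl_ container name prefix (as in the intelowl_uwsgi container used elsewhere in this guide):
docker ps --filter "name=intelowl" --format "table {{.Names}}\t{{.Status}}"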
Deployment Preparation¶
Open a terminal and execute the commands below to construct new environment files from the provided templates.
cd docker/
cp env_file_app_template env_file_app
cp env_file_postgres_template env_file_postgres
cp env_file_integrations_template env_file_integrations
cd ..
./initialize.sh
Environment configuration (required)¶
In env_file_app, configure the different variables as explained below.
REQUIRED variables to run the image:
DB_HOST, DB_PORT, DB_USER, DB_PASSWORD: PostgreSQL configuration (the DB credentials should match the ones in env_file_postgres)
Strongly recommended variables to set:
DJANGO_SECRET: random 50-character key, must be unique. If you do not provide one, Intel Owl will automatically set a new secret on every run.
INTELOWL_WEB_CLIENT_DOMAIN (example: localhost / mywebsite.com): the web domain of your instance, used for generating links to analysis results
Optional variables needed to enable specific analyzers:
ABUSEIPDB_KEY: AbuseIPDB API key
AUTH0_KEY: Auth0 API key
SECURITYTRAILS_KEY: SecurityTrails API key
SHODAN_KEY: Shodan API key
HUNTER_API_KEY: Hunter.io API key
GSF_KEY: Google Safe Browsing API key
OTX_KEY: AlienVault OTX API key
CIRCL_CREDENTIALS: CIRCL PDNS credentials in the format: user|pass
VT_KEY: VirusTotal API key
HA_KEY: HybridAnalysis API key
INTEZER_KEY: Intezer API key
INQUEST_API_KEY: InQuest API key
FIRST_MISP_API: FIRST MISP API key
FIRST_MISP_URL: FIRST MISP URL
MISP_KEY: your own MISP instance key
MISP_URL: your own MISP instance URL
DNSDB_KEY: DNSDB API key
CUCKOO_URL: your Cuckoo instance URL
HONEYDB_API_ID & HONEYDB_API_KEY: HoneyDB API credentials
CENSYS_API_ID & CENSYS_API_SECRET: Censys credentials
ONYPHE_KEY: Onyphe.io API key
GREYNOISE_API_KEY: GreyNoise API key (docs)
INTELX_API_KEY: IntelligenceX API key (docs)
UNPAC_ME_API_KEY: UnpacMe API key (docs)
IPINFO_KEY: ipinfo API key
ZOOMEYE_KEY: ZoomEye API key (docs)
TRIAGE_KEY: tria.ge API key (docs)
WIGLE_KEY: WiGLE API key (docs)
XFORCE_KEY & XFORCE_PASSWORD: IBM X-Force Exchange API credentials (docs)
MWDB_KEY: MWDB API key
SSAPINET_KEY: screenshotapi.net API key (docs)
MALPEDIA_KEY: Malpedia API key (docs)
OPENCTI_KEY: your own OpenCTI instance key
OPENCTI_URL: your own OpenCTI instance URL
YETI_KEY: your own YETI instance key
YETI_URL: your own YETI instance URL
SPYSE_API_KEY: Spyse API key. Register here: https://spyse.com/user/registration
DRAGONFLY_API_KEY: Dragonfly API key. Register here.
VIRUSHEE_API_KEY: Virushee API key (docs)
STALKPHISH_KEY: Stalkphish.io API key. Register here.
Optional variables needed to work with specific connectors:
CONNECTOR_MISP_KEY: your own MISP instance key to use with the MISP connector
CONNECTOR_MISP_URL: your own MISP instance URL to use with the MISP connector
CONNECTOR_OPENCTI_KEY: your own OpenCTI instance key to use with the OpenCTI connector
CONNECTOR_OPENCTI_URL: your own OpenCTI instance URL to use with the OpenCTI connector
CONNECTOR_YETI_KEY: your own YETI instance key to use with the YETI connector
CONNECTOR_YETI_URL: your own YETI instance API URL to use with the YETI connector
Advanced additional configuration:
OLD_JOBS_RETENTION_DAYS: database retention for analysis results (default: 3 days). Change this if you want to keep your old analyses in the database for longer.
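As a reference, a minimal env_file_app could look like the excerpt below. All values are illustrative placeholders that must be replaced with your own, and DB_HOST is assumed to be the name of the PostgreSQL service defined in the docker-compose files:
# PostgreSQL connection (must match the values in env_file_postgres)
DB_HOST=postgres
DB_PORT=5432
DB_USER=intel_owl_user
DB_PASSWORD=MyStrongPassword
# Strongly recommended
DJANGO_SECRET=ReplaceWithARandom50CharacterString
INTELOWL_WEB_CLIENT_DOMAIN=localhost
# Optional analyzer keys: set only the ones you need, e.g.
# VT_KEY=<your VirusTotal API key>
# SHODAN_KEY=<your Shodan API key>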
Database configuration (required)¶
In env_file_postgres, configure the different variables as explained below.
Required variables:
POSTGRES_PASSWORD (same as DB_PASSWORD)
POSTGRES_USER (same as DB_USER)
POSTGRES_DB (default: intel_owl_db)
If you prefer to use an external PostgreSQL instance, just remove the related image from the docker/default.yml file and provide the configuration needed to connect to the instance you control.
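As an illustrative sketch, with placeholder values that must mirror the ones chosen in env_file_app:
POSTGRES_USER=intel_owl_user
POSTGRES_PASSWORD=MyStrongPassword
POSTGRES_DB=intel_owl_db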
Web server configuration (optional)¶
Intel Owl provides basic configuration for:
Nginx (configuration/nginx/http.conf)
uWSGI (configuration/intel_owl.ini)
If you enable HTTPS, remember to set the environment variable HTTPS_ENABLED to “enabled” to increase the security of the application.
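For example, assuming the variable is set in env_file_app alongside the others described above:
HTTPS_ENABLED=enabled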
There are 3 options to execute the web server:
HTTP only (default)
The project will use the default deployment configuration and HTTP only.
HTTPS with your own certificate
The project provides a template file to configure Nginx to serve HTTPS: configuration/nginx/https.conf.
You should change ssl_certificate, ssl_certificate_key and server_name in that file.
Then you should modify the nginx service configuration in docker/default.yml:
- replace http.conf with https.conf in volumes
- add the option for mounting the directory that hosts your certificate and your certificate key
Plus, if you use Flower, you should replace flower_http.conf with flower_https.conf in docker/flower.override.yml.
HTTPS with Let’s Encrypt
We provide a specific docker-compose file that leverages Traefik to allow fast deployments of public-facing and HTTPS-enabled applications.
Before using it, you should edit the configuration file docker/traefik.override.yml by changing the email address and the hostname where the application is served. For a detailed explanation, follow the official documentation: Traefik docs.
After the configuration is done, you can add the option --traefik while executing start.py, as in the example below.
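A possible invocation, assuming the same prod up command used in the rest of this guide (the exact position of the flag may vary with your start.py version, so check python3 start.py --help):
python3 start.py prod up --traefik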
Analyzers or connectors configuration (optional)¶
Refer to Analyzers customization and Connectors customization.
Hint
You can see the full list of all available analyzers and connectors in Usage.html or in the Live Demo.
Run¶
Important Info
IntelOwl depends heavily on docker and docker-compose, so, to hide this complexity from the end user, the project leverages a custom script (start.py) to interface with docker-compose.
You may invoke $ python3 start.py --help to get help and usage info.
The CLI provides the primitives to correctly build, run or stop the containers for IntelOwl. Therefore:
- It is possible to attach every optional docker container that IntelOwl has: multi_queue with traefik enabled while every optional docker analyzer is active.
- It is possible to insert an optional docker argument that the CLI will pass to docker-compose.
Now that you have completed the different configurations, starting the containers is as simple as invoking:
$ python3 start.py prod up
You can add the parameter -d to run the application in the background.
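For example, to start the application in the background:
python3 start.py prod up -d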
Stop¶
To stop the application you have to:
- if executed without the -d parameter: press CTRL+C
- if executed with the -d parameter: run python3 start.py prod down
Cleanup of database and application¶
This is a destructive operation, but it can be useful to start the project again from scratch:
python3 start.py prod down -v
After Deployment¶
Users creation¶
You may want to run docker exec -ti intelowl_uwsgi python3 manage.py createsuperuser after the first run to create a superuser.
Then you can add other users directly from the Django Admin Interface after logging in with the superuser account.
Extras¶
Deploy on Remnux¶
Remnux is a Linux Toolkit for Malware Analysis.
IntelOwl and Remnux have the same goal: save the time of people who need to perform malware analysis or info gathering.
Therefore we suggest that Remnux users install IntelOwl to leverage all the tools provided by both projects in a single environment.
To do that, you can follow the same steps detailed above for the installation of IntelOwl.
Update to the most recent version¶
To update the project with the most recent available code you have to follow these steps:
$ cd <your_intel_owl_directory> # go into the project directory
$ git pull # pull new changes
$ python3 start.py prod stop # kill the currently running IntelOwl containers
$ python3 start.py prod up --build # restart the IntelOwl application
Rebuilding the project / Creating a custom docker build¶
If you make some code changes and would like to rebuild the project, follow these steps:
- Run python3 start.py test build --tag=<your_tag> . to build the new docker image.
- Add this new image tag in the docker/test.override.yml file.
- Start the containers with python3 start.py test up --build.
Updating to >=2.0.0 from a 1.x.x version¶
Users upgrading from previous versions need to manually move the env_file_app, env_file_postgres and env_file_integrations files under the new docker directory, for example as shown below.
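A possible way to do that, assuming you run it from the project root:
mv env_file_app env_file_postgres env_file_integrations docker/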
Updating to >v1.3.x from any prior version¶
If you are updating to >v1.3.0 from any prior version, you need to execute a helper script so that the old data present in the database doesn’t break.
Follow the update steps above; once the docker containers are up and running, execute the following in a new terminal
docker exec -ti intelowl_uwsgi bash
to get a shell session inside IntelOwl's container.
Now just copy and paste the command below into this new session:
curl https://gist.githubusercontent.com/Eshaan7/b111f887fa8b860c762aa38d99ec5482/raw/758517acf87f9b45bd22f06aee57251b1f3b1bbf/update_to_v1.3.0.py | python -
If you see “Update successful!”, everything went fine and now you can enjoy the new features!