Advanced Usage

This page includes details about some advanced features that IntelOwl provides, which can be optionally enabled.

Optional Analyzers

Some analyzers run in their own Docker containers and are disabled by default, to prevent accidentally starting too many containers and making your machine unresponsive.

Static Analyzers: PEframe_Scan, Capa_Info, Floss, Strings_Info_Classic, Strings_Info_ML, Manalyze, ClamAV
  • Capa detects capabilities in executable files
  • PEframe performs static analysis on Portable Executable malware and malicious MS Office documents
  • FLOSS automatically deobfuscates strings from malware binaries
  • Strings_Info_Classic extracts human-readable strings, while the ML version ranks them using machine learning
  • Manalyze statically analyzes PE (Portable Executable) files in depth
  • ClamAV scans files for trojans, viruses, and other malware using a multi-threaded antivirus daemon
Thug: Thug_URL_Info, Thug_HTML_Info. These analyzers perform hybrid dynamic/static analysis on a URL or HTML page.
Box-JS: BoxJS_Scan_JavaScript. A tool for studying JavaScript malware.
APK Analyzers: APKiD_Scan_APK_DEX_JAR. Identifies many compilers, packers, obfuscators, and other weird stuff in an APK or DEX file.
Qiling: Qiling_Windows, Qiling_Windows_Shellcode, Qiling_Linux, Qiling_Linux_Shellcode. A tool for emulating the execution of a binary file or a shellcode. It requires the configuration of its rootfs and, optionally, of profiles. The rootfs can be copied from the Qiling project; please remember that Windows DLLs must be added manually for licensing reasons. Qiling provides a DllCollector to retrieve DLLs from your licensed Windows installation. Profiles must be placed in the profiles subfolder.
Renderton: Renderton. Gets a screenshot of a web page using Rendertron (a headless Chrome solution based on Puppeteer). Configuration variables have to be included in `config.json`; see the config options of Rendertron. To use a proxy, include the argument --proxy-server=YOUR_PROXY_SERVER in puppeteerArgs.

To enable all the optional analyzers you can add the option --all_analyzers when starting the project. Example:

python3 start.py prod --all_analyzers up

Otherwise you can enable just one of the cited integrations by using the related option. Example:

python3 start.py prod --qiling up

Customize analyzer execution at time of request

Some analyzers and connectors let you customize the analysis they perform through parameters (the params attr in the configuration file) that differ for each analyzer.

  • You can set custom default values by changing their value attribute directly in the configuration files.

  • You can choose to provide a runtime configuration when requesting an analysis; it is merged with the defaults, overriding them only for that specific analysis.

Info

Connector parameters can only be changed from their configuration file, not at the time of the analysis request.

View and understand different parameters

To see the list of these parameters:

  • You can view the “Analyzers Table” here.

  • You can view the raw JSON configuration file here.

from the GUI

You can click the “CUSTOMIZE ANALYZERS PARAMETERS” button and add the runtime configuration in the form of a dictionary. Example:

"VirusTotal_v3_Get_File": {
    "force_active_scan_if_old": true
}

from Pyintelowl

While using the send_observable_analysis_request or send_file_analysis_request methods, you can pass the parameter runtime_configuration with the optional values. Example:

runtime_configuration = {
    "Doc_Info": {
        "additional_passwords_to_check": ["passwd", "2020"]
    }
}
pyintelowl_client.send_file_analysis_request(..., runtime_configuration=runtime_configuration)
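
A fuller sketch of the same call follows; it assumes a configured pyintelowl client, the API key, instance URL, and file name are placeholders, and constructor arguments and method signatures may differ between pyintelowl versions (check pyintelowl's documentation):

from pyintelowl import IntelOwl

# placeholders: use your own API key and IntelOwl instance URL
client = IntelOwl("<your_api_key>", "http://localhost:80")

# read the sample to analyze (file name is illustrative)
with open("document.doc", "rb") as f:
    binary = f.read()

# per-analysis override of the Doc_Info analyzer's parameters
runtime_configuration = {
    "Doc_Info": {
        "additional_passwords_to_check": ["passwd", "2020"]
    }
}

client.send_file_analysis_request(
    filename="document.doc",
    binary=binary,
    analyzers_requested=["Doc_Info"],
    runtime_configuration=runtime_configuration,
)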

Analyzers with special configuration

Some analyzers could require a special configuration:

  • GoogleWebRisk: this analyzer needs a service account key with the Google Cloud credentials to work properly. You should follow the official guide for creating the key. Then you can copy the generated JSON key file into the configuration directory of the project and rename it to service_account_keyfile.json. This is the default configuration. If you want to customize the name or location of the file, you can change the environment variable GOOGLE_APPLICATION_CREDENTIALS in the env_file_app file.

  • ClamAV: this Docker-based analyzer uses the clamd daemon as its scanner and communicates with it through the clamdscan utility to scan files. The daemon requires 2 different configuration files: clamd.conf (the daemon’s config) and freshclam.conf (the virus database updater’s config). These files are mounted as Docker volumes, so you can edit them as needed.

Django Groups & Permissions

The application makes use of Django’s built-in permissions system. It provides a way to assign permissions to specific users and groups of users.

As an administrator, here’s what you need to know:

  • Each user should belong to at least one group, and permissions should be assigned to these groups. Please refrain from assigning user-level permissions.

  • When the first normal user is created, a group named DefaultGlobal is created with all permissions granted. Every new user is automatically added to this group.

    • This is done because most admins won’t need to deal with user permissions and this way, they don’t have to.

    • If you don’t want a global group (with all permissions) but custom groups with custom permissions, just strip DefaultGlobal of all its permissions but do not delete it (see the sketch below).
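
Stripping DefaultGlobal of its permissions can be done with Django's standard auth models; a minimal sketch, run for example from a Django shell (python manage.py shell) inside the application container:

from django.contrib.auth.models import Group

# fetch the DefaultGlobal group created by IntelOwl...
group = Group.objects.get(name="DefaultGlobal")
# ...and remove all its permissions without deleting the group itself
group.permissions.clear()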

The permissions work the way one would expect:

api_app | job | Can create job: allows users to request new analyses. When a user creates a job (requests a new analysis), the object-level view permission is applied to all groups the requesting user belongs to, or to all groups (depending on the parameters passed).
api_app | job | Can view job: allows users to fetch the list of all jobs they have permission for, or a particular job by its ID.
api_app | job | Can change job: allows users to change job attributes (e.g. kill a running analysis). The object-level permission is applied to all groups the requesting user belongs to.
api_app | job | Can delete job: allows users to delete an existing job. The object-level permission is applied to all groups the requesting user belongs to.
api_app | tag | Can create tag: allows users to create new tags. When a user creates a new tag,
  • the new tag is visible (object-level `view` permission) to each and every group, but
  • the object-level `change` and `delete` permissions are given only to the groups the requesting user belongs to.
  • This is done because tag labels and colors are unique columns, and in most cases the admin will want to define tags that are usable (but not modifiable) by users of all groups.
api_app | tag | Can view tag: allows users to fetch the list of all tags, or a particular tag by its ID.
api_app | tag | Can change tag: allows users to edit a tag, provided the user has the object-level permission for that particular tag.

Authentication options

IntelOwl provides support for some of the most common authentication methods:

  • LDAP

  • GSuite (work in progress)

LDAP

IntelOwl leverages django-auth-ldap to perform authentication via LDAP.

How to configure and enable LDAP on Intel Owl?

  1. Change the values inside configuration/ldap_config.py to match your LDAP configuration. This file is mounted as a docker volume, so you won’t need to rebuild the image.

For more details on how to configure this file, check the official documentation of the django-auth-ldap library.

  2. Once you have done that, set the environment variable LDAP_ENABLED to True in the environment configuration file env_file_app. Finally, restart the application with docker-compose up.
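
As a reference, here is a minimal sketch of the kind of settings configuration/ldap_config.py holds, following django-auth-ldap's documented options; the server URI, bind credentials, and search base below are placeholders:

import ldap
from django_auth_ldap.config import LDAPSearch

# placeholders: point these at your own LDAP server
AUTH_LDAP_SERVER_URI = "ldap://ldap.example.com"
AUTH_LDAP_BIND_DN = "cn=admin,dc=example,dc=com"
AUTH_LDAP_BIND_PASSWORD = "changeme"
# how users that try to authenticate are looked up
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "ou=users,dc=example,dc=com", ldap.SCOPE_SUBTREE, "(uid=%(user)s)"
)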

Google Kubernetes Engine deployment

Refer to the following blog post for an example on how to deploy IntelOwl on Google Kubernetes Engine:

Deploying Intel-Owl on GKE by Mayank Malik.

Queues

Multi Queue

IntelOwl provides an additional multi-queue.override.yml compose file that allows users to scale according to the performance of their own architecture.

If you want to leverage it, you should add the option --multi-queue when starting the project. Example:

python3 start.py prod --multi-queue up

This functionality is not enabled by default because this deployment starts 2 more containers, so resource consumption is higher. We suggest using this option only when leveraging IntelOwl heavily.

Queue Customization

It is possible to define new celery workers: each requires the addition of a new container in the docker-compose file, as shown in the multi-queue.override.yml.

Moreover, IntelOwl requires that the names of the workers be provided in the docker-compose file. This is done through the environment variable CELERY_QUEUES inside the uwsgi container. Queue names must be separated by the , character, as shown in the example below.
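
For example, in the uwsgi service of your compose file you could declare something like the following (the queue names beyond default are illustrative):

environment:
  - CELERY_QUEUES=default,long,local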

You can customize which queue each analyzer uses by specifying it in the analyzer's entry in the analyzer_config.json configuration file, as sketched below. If no queue is provided, the default queue is selected.
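
A sketch of such an entry, assuming the queue attribute sits at the top level of the analyzer's configuration (the Classic_DNS analyzer name and the long queue are illustrative):

"Classic_DNS": {
    "queue": "long"
}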

Queue monitoring

IntelOwl provides an additional flower.override.yml compose file allowing IntelOwl users to use Flower features to monitor and manage queues and tasks.

If you want to leverage it, you should add the option --flower when starting the project. Example:

python3 start.py prod --flower up

The flower interface is available at port 5555: to set the credentials for its access, update the environment variables

FLOWER_USER
FLOWER_PWD

or change the .htpasswd file that is created in the docker directory in the intelowl_flower container.
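
For example (the values are placeholders):

FLOWER_USER=flower_admin
FLOWER_PWD=a-strong-password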

AWS support

At the moment there’s basic support for some of the AWS services. More is coming in the future.

Secrets

If you would like to run this project on AWS, I’d suggest using the “Secrets Manager” to store your credentials. This way your secrets are better protected.

This project supports this kind of configuration. Instead of adding the variables to the environment file, you can add them, with the same names, to AWS Secrets Manager, and IntelOwl will fetch them transparently.

Obviously, you should have created and configured the permissions in AWS in advance, according to your infrastructure requirements.

Also, you need to set the environment variable AWS_SECRETS to True to enable this mode.

You can customize the AWS region by changing the environment variable AWS_REGION.
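
For example, in your environment file (the region value is a placeholder):

AWS_SECRETS=True
AWS_REGION=eu-central-1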

SQS

If you like, you can use AWS SQS instead of RabbitMQ to manage your queues. In that case, you should change the parameter BROKER_URL to sqs:// and give your AWS instances the proper permissions to access it.

Also, you need to set the environment variable AWS_SQS to True to activate the additional required settings.
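
Putting the two settings together, assuming both are defined in your environment file:

BROKER_URL=sqs://
AWS_SQS=True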

S3

If you prefer to use S3 to store the samples instead of local storage, you can do so.

First, you need to set the environment variable LOCAL_STORAGE to False to enable S3 storage and set AWS_STORAGE_BUCKET_NAME to the proper AWS bucket. Then you have to provide credentials for AWS: if IntelOwl is deployed on AWS infrastructure, you can use IAM credentials by setting AWS_IAM_ACCESS to True. If that is not the case, you have to set both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
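
A sketch of the relevant environment values; the bucket name is a placeholder, and you would set either AWS_IAM_ACCESS or the two explicit keys, not both:

LOCAL_STORAGE=False
AWS_STORAGE_BUCKET_NAME=my-intelowl-samples
# on AWS infrastructure with the proper IAM permissions:
AWS_IAM_ACCESS=True
# otherwise, explicit credentials:
# AWS_ACCESS_KEY_ID=<your_access_key_id>
# AWS_SECRET_ACCESS_KEY=<your_secret_access_key>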