When you configure Cisco Umbrella (formerly OpenDNS) integration to send log data to USM Anywhere, you can use the Cisco Umbrella plugin to translate the raw log data into normalized events for analysis.
Before USM Anywhere can collect the Umbrella log data, you must set up Amazon Simple Storage Service (Amazon S3) log management in your Cisco Umbrella deployment. This requires a self-managed Amazon S3 bucket in an Amazon Web Services (AWS) account that is configured to accept uploads from the Umbrella service. For detailed information about this configuration, refer to this article: https://support.umbrella.com/hc/en-us/articles/231248448-Cisco-Umbrella-Log-Management-in-Amazon-S3#self-bucket
Note: USM Anywhere currently does not support the Cisco-managed buckets in Amazon S3.
To verify Amazon S3 log management in Cisco Umbrella
- Log in to the Cisco Umbrella (OpenDNS) dashboard.
- Go to Settings > Log Management.
- Click Amazon S3.
- In the Bucket Name field, enter the exact Amazon S3 bucket name.
A confirmation message in the dashboard indicates that the bucket has been successfully verified.
This procedure configures Cisco Umbrella (formerly OpenDNS) to send log data to a USM Anywhere AWS Sensor. By design, Cisco Umbrella exports log data only to an Amazon S3 bucket; therefore, accessing this data requires a deployed AWS Sensor.
Note: If you want to deploy a sensor to facilitate Cisco Umbrella log collection, see AWS Sensor Deployment.
To schedule an S3 bucket log collection job
- Go to Settings > Scheduler.
- In the left navigation list, click Log Collection.
Note: You can use the Sensor filter at the top of the list to review the available log collection jobs on your AWS Sensor.
- Click Create Log Collection Job.
Note: If you recently deployed a new USM Anywhere Sensor, it can take 10 to 20 minutes for USM Anywhere to discover the various log sources. After it discovers the logs, you must manually enable the AWS log collection jobs you want before the system collects the log data.
- Enter the name and description for the job.
The description is optional, but it is a best practice to provide this information so that others can easily understand what it does.
- For the Action Type option, select Amazon Web Services.
- Select the USM Anywhere Sensor for the job to run on.
- For the App Action option, select Monitor S3 bucket.
- In the Bucket Name field, enter the name of the Amazon S3 bucket that is configured in Cisco Umbrella log management.
- In the Path field, enter the path on the bucket where the logs reside (in this case, dnslogs/).
- For the Source Format option, select raw.
- For the Plugin option, select Cisco Umbrella.
- In the Schedule section, specify when USM Anywhere runs the job:
- Select the increment as Hour, Day, Week, Month, or Year.
- Set the interval options for the increment. The selected increment determines the available options.
For example, on a weekly increment you can select the days of the week to run the job.
Or, on a monthly increment you can specify a date or a day of the week that occurs within the month.
- Set the Start time.
This is the time that the job starts at the specified interval. It uses the time zone configured for your USM Anywhere instance (default is UTC).
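To make the interaction between the increment, the interval, and the UTC start time concrete, here is a minimal sketch of how an hourly-increment schedule determines the next run. This is illustrative only; the function name and logic are assumptions for this example, not USM Anywhere's actual scheduler implementation.

```python
from datetime import datetime, timedelta, timezone

def next_run(now: datetime, start: datetime, every_hours: int) -> datetime:
    """Illustrative only: compute the next run of an hourly-increment job.

    `start` is the configured Start time (UTC by default in USM Anywhere);
    the job then repeats every `every_hours` hours.
    """
    if now <= start:
        return start
    # Count how many full intervals have elapsed, then step to the next one.
    intervals = (now - start) // timedelta(hours=every_hours) + 1
    return start + intervals * timedelta(hours=every_hours)

start = datetime(2024, 1, 1, 0, 0, tzinfo=timezone.utc)
now = datetime(2024, 1, 1, 5, 30, tzinfo=timezone.utc)
print(next_run(now, start, 2))  # next 2-hourly run after 05:30 is 06:00 UTC
```

Because all times are evaluated in the instance's configured time zone (UTC by default), a job scheduled to start at 00:00 with a two-hour interval runs at 00:00, 02:00, 04:00 UTC, and so on.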
You should start seeing new Cisco Umbrella events in USM Anywhere shortly after the initial raw log data collection and normalization. After the first log collection job completes, the normalized events appear in the Events dashboard view. For a more focused view of these events, the Cisco Umbrella dashboard is available under Dashboards in the top navigation menu.
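To illustrate what normalization works from, here is a sketch that parses one raw Umbrella DNS log record into named fields. The column layout is an assumption based on Cisco's published DNS log CSV format (timestamp, most granular identity, identities, internal IP, external IP, action, query type, response code, domain, categories); verify it against the files in your own dnslogs/ path, and note this is not the Cisco Umbrella plugin's actual code.

```python
import csv
import io

# Assumed column layout per Cisco's Umbrella DNS log documentation;
# check a real exported file under dnslogs/ before relying on it.
COLUMNS = [
    "timestamp", "policy_identity", "identities", "internal_ip",
    "external_ip", "action", "query_type", "response_code",
    "domain", "categories",
]

def parse_dns_log_line(line: str) -> dict:
    """Parse one CSV record from an Umbrella DNS log into a field dict."""
    row = next(csv.reader(io.StringIO(line)))
    return dict(zip(COLUMNS, row))

# Hypothetical sample record for illustration.
sample = ('"2024-01-15 10:15:30","ActiveDirectoryUserName",'
          '"ActiveDirectoryUserName,ADSite,Network","10.10.1.100",'
          '"198.51.100.7","Allowed","1 (A)","NOERROR","example.com.",'
          '"Software,Business Services"')
rec = parse_dns_log_line(sample)
print(rec["action"], rec["domain"])  # → Allowed example.com.
```

Note that the quoted identities and categories fields can themselves contain commas, which is why a proper CSV reader is used rather than a plain string split.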
Available Plugin Fields
The Cisco Umbrella plugin provides support for multiple fields, which are important attributes extracted from the message. These fields are used in USM Anywhere reports and can be referenced when creating custom reports. In addition to reporting, the fields are also used by the USM Anywhere correlation rules.
Note: The custom fields are prefilled with the values described below; you cannot change them.
- customfield_1 → Resource Type ID*
- customfield_2 → Resource Type Description*
- customheader_1 → Resource Type ID‡
- customheader_2 → Resource Type Description‡
* customfield_1 and customfield_2 contain the actual values for Resource Type ID and Resource Type Description.
‡ customheader_1 and customheader_2 contain the words Resource Type ID and Resource Type Description.
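The header/field split described above can be sketched as follows. The example values ("dns", "DNS logs") are hypothetical placeholders, and this is an illustration of the documented mapping, not the plugin's internal representation.

```python
# Prefilled, read-only headers: they name the attribute that the
# corresponding custom field carries (per the plugin documentation).
CUSTOM_HEADERS = {
    "customheader_1": "Resource Type ID",
    "customheader_2": "Resource Type Description",
}

def custom_fields(resource_type_id: str, resource_type_description: str) -> dict:
    """Illustrate how an event's custom fields would be populated."""
    return {
        "customfield_1": resource_type_id,
        "customfield_2": resource_type_description,
    }

# Hypothetical event values, for illustration only.
print(custom_fields("dns", "DNS logs"))
```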
For troubleshooting, refer to the vendor documentation.