
Lucidum User Manual

1. Lucidum Platform

1.1 Get Started with Lucidum

Lucidum is an open-API data ingestion platform that ingests data from IT and security operations, management, protection, and detection solutions, including structured and unstructured data from data lakes, APIs, and static files, whether on-premises or in the cloud. The Lucidum platform applies its patent-pending machine learning to discover, triangulate, and identify all assets, even previously unknown unknowns, delivering the visibility essential to truly secure, manage, and transform the enterprise.

The Lucidum platform enables security, IT and other teams to:

  • Obtain complete asset visibility
  • Identify, classify and triangulate the data
  • Accelerate alert triage, incident response, investigation, and remediation
  • Enhance data security
  • Manage IT assets and vulnerabilities
  • Improve information security engineering and operations
  • Ensure consistent system/application versioning and upgrades

Lucidum discovers, identifies, and classifies every asset, all data, and each user:

  • Asset: Any entity that stores, transmits, or processes data, including laptops, workstations, servers, virtual machines, cloud instances, Docker containers, and more
  • User: Any entity that authenticates into the enterprise environment to use the assets, including Active Directory users, HR employees, cloud IAM users, and more
  • Data: Any entity identified and associated with a certain data type and classification; for example, one user may be accessing confidential product source code, or one asset may be storing restricted PCI data

The checklist below covers the fundamental steps to complete in order to start benefiting from the Lucidum solution:

| Step | Name | Description | Reference in this Manual |
|------|------|-------------|--------------------------|
| 1 | UI Login | Log in to the web UI | “Lucidum UI Login” |
| 2 | License Update | Upload and update the license | “License Management Page” |
| 3 | Connector Setup | Configure and test data connectors | “Connection Page” |
| 4 | Data Ingestion | Trigger Airflow data ingestion jobs | “Connection Page” |
| 5 | Output Summary | Examine the system’s output summaries and dashboards | “Home Page” and “Data QC Page” |
| 6 | Query Building | Build/save/schedule queries to explore the data details | “Explore Page” |
| 7 | Action Creation | Create/take actions from certain queries | “Action Center Page” and “Integration Page” |
| 8 | Report Generation | Generate CSV reports from the query results | “Explore Page” and “Lab Page” |

1.2 Lucidum UI Login

Users can enter a username and password to log in to the Lucidum web UI. The login session is kept for 30 days if the user selects the “Stay signed in” option. The default password for the system “admin” user is 12345678; make sure to change this default password upon first login on the “User Management” page.

Login with SSO

The Lucidum web UI supports login with a system username/password as well as through Okta SSO (Single Sign-On). Note that the user’s account needs to be configured by the Okta administrator beforehand, and the account settings (e.g., user email) in Okta must be consistent with the settings in the Lucidum UI.

Users can click the “Sign in with Okta” button on the Lucidum login page and will be redirected to the Okta sign-on page.

The user then enters their Okta credentials. If authentication with Okta succeeds, the user is logged into the Lucidum web UI automatically.

Login User Information

The current username is shown in the top-right corner of the web UI after the user logs in.

Users can:

  • Click “Change password” to change the current user’s password
  • Click “Logout” to log out

The bell icon next to the username shows the number of system notifications (e.g., license expiration warnings and product usage tips). Users can click the icon to view the system alerts, messages, and notifications.

1.3 Home Page

The Home page includes three parts:

Summary statistics of user, asset and data

There are three pie charts on the homepage displaying the summary statistics of users, assets, and data discovered by the Lucidum platform.

  1. User pie chart: Users with assets found vs. users without assets found
  2. Asset pie chart: Cloud assets vs. other assets
  3. Data pie chart: Different data types discovered

All three pie charts support the “drill-down” feature. Users can click certain pieces of the pie charts to search for more detailed information on the “Explore” page.

New user, asset and data found

On the right side of the homepage, newly found users/assets/data are shown in descending order of risk. Users can click each item to view its detailed information.

Data ingestion flow

The data ingestion flow chart shows how different data sources are ingested into the Lucidum platform. Users can hover the mouse over each input node to see the ingestion status for a given data source, including the ingestion start time, duration, and number of output records. A data source with ingestion errors, or one running for over 24 hours, is shown in red on the flow chart, so users can quickly identify abnormal ingestion runs.

 

1.4 Action Page (Retired)

The Action page lists all actions the user has triggered from different pages. Users can change an action’s severity, set the integration target, and modify the action. Action records are searchable through the search bar at the top of the page. This page is being retired.

1.5 Explore Page

The Explore page is the core of the Lucidum web UI, providing an ad-hoc query interface for users to run complex searches.

Query Builder

Users can select one table (for example, “Search Asset”) to build a query. By default, the Lucidum UI provides six main tables for search:

    • Search Asset: Current asset table
    • Search User: Current user table
    • Search Asset-History: Asset history table
    • Search User-History: User history table
    • Search Asset-IP History: Asset-IP mapping history table
    • Search User-IP History: User-IP mapping history table

After selecting a table at the top, users can then select fields from this table with different query operators and field values to build the query. The query is organized into groups of “AND/OR” conditions. Users can click the add icon to add a condition and the remove icon to remove one. Logically, the “AND” conditions form the first level of the query builder: each group of “AND” conditions is separated by a dotted horizontal line, and the different “OR” conditions are placed under a given “AND” condition. Users can combine different “AND/OR” conditions to create complex searches, as in the conditions shown in the figure above.
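The “Copy MQL” option described later under “Query Management” suggests the builder compiles to a MongoDB-style query string. Purely as a hedged illustration, with invented field names, two AND groups (each holding one or more OR conditions) might compile to a filter like this:

```python
# Hypothetical sketch: how two AND groups, each holding OR conditions,
# could map to a MongoDB-style filter. Field names are invented.
query = {
    "$and": [
        {"$or": [                                   # group 1 (between dotted lines)
            {"OS": {"$regex": "windows", "$options": "i"}},
            {"OS": {"$regex": "linux", "$options": "i"}},
        ]},
        {"$or": [                                   # group 2
            {"Risk_Score": {"$gte": 70}},
        ]},
    ]
}
# e.g., results = assets_collection.find(query)    # via pymongo, if applicable
```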

Users can click “Run” to submit the query or click “Clear all queries” to start a new query. The query results will be listed in the table under the query builder. Within the result table, the user can click “Record Detail” under the “Detail” menu to view the details of each record.

In the “Detail” pop-up window, Lucidum UI groups the information into different categories, including:

  • Age: Age information, such as first time seen, last time seen, new asset/user or not, and asset status
  • Applications: Applications installed on the asset, including the application name, version, and data source
  • Asset: Asset information, including asset name, asset tag, IP address, MAC address, and more
  • Cloud: Cloud information, including cloud account ID, cloud account name, cloud instance ID, and more
  • Customer Fields: Special fields from the customer. For example, one customer may provide their own asset inventory spreadsheets to Lucidum and certain fields are from this customer only. These fields can be customized through the “Field Display” page and will be put into the “Customer Fields” category
  • Data: Data information, including data category, data classification, file access history, and more
  • Risk: Risk information, including risk level, standardized risk score, and top risk factors
  • User: User information, including user name, department, job title, manager name, and more
  • DataSource: Data source information. Here the raw data from the related data sources will be organized under different vertical tabs so users can click through each of them and have a holistic view of all data sources for this record
  • Tags: Tag information, such as the EC2 instance and AMI image tags from AWS
  • Compliance: Compliance information, such as number of non-compliances and compliance findings
  • Others: Other contextual information, including hardware configuration, location, region, and more

Full Term Search

Users can select “Full Term Search” to quickly search one term in a certain table. “Full Term Search” can be selected at the top of the field drop-down list:

For example, the figure above shows a full term search on the keyword “Windows”; the web UI will search this keyword across all fields in the asset table. Note that the keyword is case insensitive and must be a complete/full term: if a field value is “Windows 10”, a full-term search using “WINDOWS” or “windows” will find this record, but a search using “Win” (a partial term) will return no results.

Query Operators

Each query can include different operators, depending on the type of field the user selects. The operators available for each field type are listed below.

Number fields support: Is equal to, Is greater than or equal to, Is greater than, Is less than or equal to, Is less than, Is not equal to, Exists, Empty, and Within (for Datetime fields).

  • Exists: Field value is not missing and is not equal to null
  • Empty: Field value is missing, is equal to null, or the field does not exist
  • Within: Datetime value is within a certain time interval (i.e., a certain number of days/weeks/months/years)

String fields support: Match, Not match, Is equal to, Is not equal to, Exists, and Empty.

  • Match: Field value includes certain characters or matches a certain pattern (case insensitive; supports regular expressions)
  • Is equal to: Field value equals a string exactly (case sensitive)

List fields support: Match, Not match, Is equal to, Is not equal to, In, Not in, Exists, and Empty.

  • Match: An element in the list includes certain characters or matches a certain pattern (case insensitive; supports regular expressions)
  • Is equal to: An element in the list equals a string exactly (case sensitive)
  • In: The comma-separated values are found in the list (case sensitive). For example, “IP_Address in 10.0.0.1, 10.0.0.2” will search for the assets with IP address 10.0.0.1 OR 10.0.0.2 literally

Search tips:

  • Special characters in the “Match”/“Not Match” operator: As the match operator supports regular expressions, some special characters used in regular expressions (such as dots, brackets, and parentheses) need to be escaped with “\” if they appear in the search value. For example, to search for “Firefox (ver 23.0)” using the match operator, the query will be Application.Version → match → Firefox \(ver 23\.0\)
  • Special characters in full term search: Use double quotes around the search value if the value contains special characters. For example, to search IP address 10.1.2.5 with full term search, the value needs to be double quoted as “10.1.2.5”; otherwise the database engine will split the term by “.” and search for 10 or 1 or 2 or 5 individually.
  • “Match” vs. “Is equal to” vs. “In”: These three operators all search for certain strings, but they suit different situations, as listed below. Generally, “Match” is the most widely used, as it is case insensitive and supports partial text search with flexible regular expressions; be cautious, though, when the search value includes special characters, as described above. “In” is very useful when searching a list-type field (for example, data sources or IP addresses), as it accepts multiple search values at one time. “Is equal to” is very precise, runs faster than “Match” when searching for an exact text, and is not influenced by special characters.
| | Match | Is equal to | In |
|---|---|---|---|
| Field Type | String or List | String or Number | List |
| Case Sensitivity | Case insensitive | Case sensitive | Case sensitive |
| Regular Expression | Yes | No | No |
| Partial Text Search | Yes | No | No |
| Exact Text Search | Yes (with regex) | Yes | Yes |
| Example | Data Sources “match” “aws” | Asset “is equal to” “Win_AD” | Data Sources “in” “aws_s3, aws_ec2” |
| Explanation | Searches for data source names containing the “aws” substring, e.g., “aws_s3” or “AWS_EC2” (a different case such as “AWS_EC2” still matches) | Searches for an asset whose name is exactly “Win_AD” (a different case such as “win_ad” does NOT match) | Searches for data source names that are either “aws_s3” or “aws_ec2” (a different case such as “AWS_EC2” does NOT match) |
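To make the three operators concrete, here is a hedged sketch of their semantics, again assuming a MongoDB-style backend; the values mirror the examples above, and Python’s re.escape performs the escaping described in the first search tip (it additionally escapes spaces, which is harmless in a regular expression):

```python
import re

# "Match": case-insensitive pattern search; regex special characters in a
# literal value must be escaped, per the search tip above.
literal = "Firefox (ver 23.0)"
match_filter = {"Application.Version": {"$regex": re.escape(literal),
                                        "$options": "i"}}

# "Is equal to": exact, case-sensitive comparison ("win_ad" would NOT match).
equal_filter = {"Asset_Name": "Win_AD"}

# "In": any of several literal, case-sensitive values in a list-type field.
in_filter = {"Data_Sources": {"$in": ["aws_s3", "aws_ec2"]}}
```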

Query Menu

Users can click the button next to “Clear all queries” to expand the query management menu.

  • Export Result: Click this to export the search results into a CSV file. Users can choose the export fields to be included in the output CSV file

  • Save Query: Click this to save current query. Users can specify the saved query name, detailed description and saved query group for future use.

  • Query Management: Click this to open query management, described in more detail below
  • Add Comments: Click this to add comments to the selected query results, described in more detail below
  • Send to Action Center: Click this to send data or a query to the new Action Center, described in more detail below
  • Edit Columns: Click this to select the fields to show in the query results. The list of selected fields is stored locally in the browser cache. The example below selects four columns to display in the query results: Asset name, User name, First time seen, and Last time seen. Users can also click the magnifier icon next to “Column Name” to quickly search for a certain column name and add it to the selected column list. Note that when a query is saved, the display columns selected for it are saved as well, so when the user loads a saved query from “Query Management”, the UI automatically displays the previously selected columns for that query.

Query Management

Query management has two tabs: Query Library and Query Run History.

Query Library lists all the queries saved by the user. Lucidum also includes some pre-built queries to help first-time users get started. Under the “Action” menu, users can click the star icon to add a saved query to the favorites (the star icon turns red), and these queries are placed at the top of the saved query list. Users can click the star icon again to remove a query from the favorites.

For each saved query, different actions are provided under the “Action” menu:

  • Use this: load the saved query into the query builder for a quick repeatable search. The UI will automatically display the selected columns for this query
  • Copy MQL: copy the original database query string to the clipboard (for advanced users)
  • Edit: edit query name, description and group
  • Delete: delete the saved query
  • Send to Action Center: send the saved query to the new Action Center, which will be described in more details later
  • Schedule setting (retiring): schedule the saved query to run at a certain time and send out email reports. Users can click “Schedule Setting” to expand the schedule setting panel, fill in the recipient emails (comma-separated if there are multiple recipients), specify the schedule, and select the output fields to include in the report. After clicking the “Confirm” button, the scheduled job starts immediately and is also sent to the “Job Manager” page, described in further detail later. Note: This feature will be incorporated into the new Action Center as well

Query Library supports importing and exporting the saved queries. For example, one user can export some saved queries into an Excel spreadsheet and share them with other users who can then import the spreadsheet into their query libraries.

  • To export queries: Select one or more queries using the boxes on the left side, and click “Export” button to save the Excel file
  • To import queries: Click “Import” button and select the Excel file to import
  • Users can also select multiple saved queries and delete them in batches by clicking the “Delete” button. Caution: deleted queries cannot be recovered

For easier query sharing, one user can share the saved queries directly with other users. To do this, the user can select certain saved queries, click the “Share” button under the “Query Management”, and choose the users to share the queries with. The selected queries will be sent to other users’ query libraries:

Query Run History lists the recent queries run by the user. For each entry in the query run history, four different actions are provided in the “Action” menu:

  • Use this: load the query into the query builder for a quick search
  • Copy MQL: copy the original database query string to the clipboard (for advanced users)
  • Delete: delete the query from the history. Users can also select multiple queries and delete them in batches by clicking the “Delete” button at the top. Caution: deleted queries cannot be recovered
  • Save To Library: save the query into Query Library for future use

Add Comments

Add comments to multiple records

Users can select one or more query records from the result table and add some comments. The selected records will then have the same comments added. The records with comments will have a “note” indicator on the right side. Different users can add different comments to the same records.

Add comments to a single record

Users can also add comments to a single record when viewing the record details. In the record “Detail” pop-up window, users can click “Add Comments” at the top to add comments to this record only. Different users can add different comments to the same record as well.

Users can click “Record Detail” in the “Detail” menu to view the comment history under the “Comments” tab. The comment history lists each comment’s creator, details, and creation date. Users can also edit/delete their own comments as needed. Note that only a system admin can delete other users’ comments.

Asset Graph (New Feature in V2.5+)

Under each asset’s details there is a newly added “Graph” tab, which shows asset relationships quickly in a connected graph.

First, different fields can be added to the graph with different roles on the “Field Display Management” page:

  • Bridge: A field can be added as a “bridge” node, meaning this field will be used to connect different assets as a bridge. For example, “User” can be a bridge node to connect the assets accessed by the same user, and “Cloud Account ID” can be another bridge node to connect the assets under the same AWS account (see the sketch after this list).
  • Attribute: A field can be added as an “attribute”, meaning this field’s value will be listed in the node’s information. For example, “CPU Cores” can be set as an “attribute” to show the asset’s number of CPU cores.
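As a rough illustration of the bridge idea (not Lucidum’s implementation), the sketch below builds such a graph with the networkx library: assets sharing a bridge field value connect through a shared bridge node rather than directly to each other. The asset names and field values are made up.

```python
import networkx as nx  # pip install networkx

# Invented sample data: two assets under the same AWS account.
assets = {
    "I-0BF72E892A4C70EBD": {"account": "111122223333", "cpu_cores": 4},
    "TEST-CACHE-0001-001": {"account": "111122223333", "cpu_cores": 2},
}

G = nx.Graph()
for name, attrs in assets.items():
    G.add_node(name, cpu_cores=attrs["cpu_cores"])  # attribute on the asset node
    bridge = f"account:{attrs['account']}"          # one bridge node per account
    G.add_node(bridge, kind="bridge")
    G.add_edge(name, bridge)

# Assets under the same account are two hops apart, via the bridge node:
print(nx.shortest_path(G, "I-0BF72E892A4C70EBD", "TEST-CACHE-0001-001"))
# ['I-0BF72E892A4C70EBD', 'account:111122223333', 'TEST-CACHE-0001-001']
```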

After setting the fields to include in the graph, users can go to an asset’s details and click the “Graph” tab. The current asset is shown as a center node in the graph. Clicking this center node lists the asset’s key information, such as asset name, IP address, MAC address, operating system, and so on. Notice that in this example, the extra “CPU Cores” information is listed because this field has been selected as a graph attribute:

By default, the asset’s users are hidden. To show all users related to this asset, turn on the “hide/show user node” switch:

To display the assets’ relationships, at least one bridge node needs to be added to the graph. Click the gear icon on the top-right corner, select the desired bridge node to be added, and click “Confirm”. For example, here “Cloud Account ID” is selected as a bridge node to show all assets under the same AWS account:

Now the bridge node is shown on the graph. Clicking the bridge node lists the assets connected through it. Currently, at most 20 assets are listed.

From the bridge node’s asset list, users can:

  • click “Open in Explore” to search for one or more assets’ detailed information in a new “Explore” webpage.
  • use the search box to type and search for one specific asset: for example, the asset, “TEST-CACHE-0001-001”, is searched in the picture below.

  • turn on the “Show in graph” switch to connect one or more assets with the center node: for example, asset “TEST-CACHE-0001-001” is now connected with asset “I-0BF72E892A4C70EBD” in the graph. Clicking the newly added “TEST-CACHE-0001-001” node will also show this asset’s details.

Users can repeat this process to add more assets to the graph through different bridge nodes, hence the different relationships among assets can be quickly and easily visualized through the “Graph” feature.

1.6 Lab Page

The Lab page enables users to upload their own CSV/JSON files into the Lucidum platform and run quick searches/comparisons on the uploaded files.

Add data to a new table

Users can start with adding data to a new table. Under the “Add data to new table” tab, users can specify the new table name and the table description, then choose the CSV/JSON file to upload and click “Confirm” to add to the new table. Users can also click “Preview” to preview the file contents before uploading the file.

Add data to an existing table

Users can append records from a file to an existing table. For example, a user may have created an NMAP_Scan table last month and uploaded some NMAP scan reports. This month the user runs another scan. Since the scan reports have the same format, the user can upload the new report to the same NMAP_Scan table.

To do this, users can go to the “Add data to existing table” tab, where it lists all existing lab tables in the Lucidum database. Users can select one existing table and click “append to table” in the “Action” menu to add new data to this table. In the pop-up window, users can change the table description and choose the file to be appended to this table. Users can also click the “Preview” button to preview the file contents before uploading the file.

Other actions include:

  • Overwrite Table: Users can click “overwrite” to overwrite all records in the table instead of appending the new data. Caution: doing this will delete all previous records in the table
  • Delete Table: Users can click “delete” to delete the table. Caution: doing this will delete the whole table (if the user only wants to delete certain upload histories, use the “View” action instead)
  • View Uploads: Users can click “view” to view the table’s file upload history. Users can also select certain upload histories to delete

Search

After uploading the file, users can go to the “Search” tab to query the data. The search functions here are very similar to those on the “Explore” page. The lab search also supports adding comments to the search results.

Under each search result, users can click the “Detail” menu and modify the field values as needed (Note that currently this modification feature is only supported for the lab tables):

Compare two tables

Users can easily compare two tables to find out the differences:

  1. Select the base table for comparison in “Table 1 (Base)”
  2. Select the “Compared by” field for Table 1. Lab will use this field as the key from Table 1 to link the two tables
  3. Select the second table for comparison in “Table 2”
  4. Select the “Compared by” field for Table 2. Lab will use this field as the key from Table 2 to link the two tables
  5. Click the “Compare” button to show the differences between the two tables:
    • Red: Records that do not exist in Table 2
    • Green: Records that exist in Table 2 but not in Table 1
    • Yellow: Records that exist in both tables but have different values in Table 2
    • Users can also click the “deleted|added|modified” buttons to filter the comparison results. For example, users can click “deleted” to display only the records that do not exist in Table 2
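The deleted/added/modified classification is essentially an outer join on the “Compared by” keys. A minimal sketch of the same logic with pandas, using two made-up tables keyed on an “ip” column:

```python
import pandas as pd

t1 = pd.DataFrame({"ip": ["10.0.0.1", "10.0.0.2"], "os": ["linux", "windows"]})
t2 = pd.DataFrame({"ip": ["10.0.0.2", "10.0.0.3"], "os": ["win10", "linux"]})

m = t1.merge(t2, on="ip", how="outer", suffixes=("_1", "_2"), indicator=True)
deleted  = m[m["_merge"] == "left_only"]     # red: not in Table 2
added    = m[m["_merge"] == "right_only"]    # green: only in Table 2
modified = m[(m["_merge"] == "both") & (m["os_1"] != m["os_2"])]  # yellow
```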

Compare two upload histories

Users can easily compare two upload histories under the same table to find out the differences between any two uploads:

  1. Select the target table in “Table”
  2. Select the “Compared by” field. Lab will use this field as the key to link the two historical file uploads
  3. Select the first file upload as the base from “Upload History (Base)”
  4. Select the second file upload from “Upload History”
  5. Click the “Compare” button to show the differences between the two uploads, or “Export” to export the comparison results to an Excel spreadsheet
    • Red: Records that do not exist in the second file upload
    • Green: Records that exist in the second file upload but not in the first
    • Yellow: Records that exist in both file uploads but have different values in the second upload
    • Users can also click the “deleted|added|modified” buttons to filter the comparison results. For example, users can click “deleted” to display only the records that do not exist in the second file upload

1.7 Global Search Page

The “Global Search” page is similar to the “Full Term Search” feature on the “Explore” page, but instead of doing full-term search on a specific table, “Global Search” goes further to search for any keyword across all tables within the Lucidum database. With the global search, users can quickly locate the relevant information in one or more tables.

For example, users can search for “Windows” on this page, and the results will show which table(s) include this keyword with the count of records (e.g., there are 10,365 records found with the “Windows” term from the current asset table).

Users can click the “Detail” menu to see the detailed records related to the search keyword. Users can also click the “Edit Columns” button to select the display columns:

1.8 Job Manager Page (Retired)

The “Job Manager” page is closely related to the scheduled query feature. When a user schedules a query on the “Explore” page, the scheduled job will be listed here with the query name, query creator name, query description, scheduled job status, last run time, next run time and result history.

Users can click “Last Run Time” to download the most recent results as a CSV file for a scheduled job, or click “View Result” under “Result History” to view and download more historical results. Users can also click “Stop” to stop the scheduled job, click “Run” to start the scheduled job, and click “Delete” to delete the scheduled job. Note: The “Job Manager” page will be incorporated into the new Action Center as well.

1.9 Action Center Page

The “Action Center” page lists and manages all the actions integrated with third-party external systems, such as Email, Slack, Jira, ServiceNow and so on.

Setup the integrations

Before using the Action Center, users may need to configure and set up the integrations on the “Integration” page:

The “Integration” page lists all available third-party integrations, their configuration names, creation times, and update times. Users can click “test” under “Action” to test the integration settings and click “config” to modify the integration configuration. The test status is shown under “Status”. For example, the picture below shows the configuration for the “email” integration, and the test status indicates that the email configuration is valid:

Another example is the Jira integration. After configuring the required fields and providing valid credentials as in the picture below, make sure to click the “test” button. A successful test pulls the project names and the associated issue types for each Jira project, and this information is included in the dropdown list when you create a new action:

As the picture below shows, the available project names and types are listed in the dropdown menu after a successful test on the integration configuration page:
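Behind the scenes, such a test and action presumably go through Jira’s standard REST API; a hedged sketch with placeholder URL and credentials:

```python
import requests

BASE = "https://example.atlassian.net"          # placeholder Jira instance
AUTH = ("user@example.com", "<api-token>")      # placeholder credentials

# "test": list the available projects (these feed the dropdown menus).
projects = requests.get(f"{BASE}/rest/api/2/project", auth=AUTH).json()

# A later action could then create one issue per query result:
payload = {"fields": {
    "project":   {"key": projects[0]["key"]},
    "summary":   "Lucidum finding: unmanaged asset",
    "issuetype": {"name": "Task"},
}}
resp = requests.post(f"{BASE}/rest/api/2/issue", json=payload, auth=AUTH)
resp.raise_for_status()   # non-2xx means the action failed
```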

Create a new action

There are two different ways to initiate an action after running a query from the “Explore” page.

  • Initiation from the selected records: Users can select one or more records from the result table on the “Explore” page and send these records to the Action Center
  • Initiation from the query itself: Users can also send the query directly to the Action Center without selecting any records. The Action Center will process the query and automatically include the query results for certain actions

To initiate an action from the selected records:

  1. Select one or more records from the result table on the “Explore” page
  2. Click “Send to Action Center” under the query menu and select “Send Data”

  3. Choose one or more integrations for this action in the pop-up window and click “Next”:

  4. Fill in the detailed action configurations and click “Done”. For example, the email integration requires the recipients’ email addresses, while the ServiceNow integration requires ServiceNow’s target class table name and field mappings. Users can also specify the action label/name and output fields under “General Settings”:

  5. Verify the status of the action you just triggered from the Action Center. Possible statuses include pending, success, and failure. It can take a few minutes for the action to run, so a pending status is normal.

To initiate an action from the query itself:

  1. For the current active query in the query builder, click “Send to Action Center” under the query menu and select “Send Query”. For the saved queries in “Query Management”, select a query to send and click “Send to Action Center” under the “Action” menu:

  2. Choose one or more integrations for this action in the pop-up window and click “Next”

  3. Specify an action schedule if needed and click “Next” after the schedule is set. Users can set the action schedule by hours/days/weeks/months. If no schedule is needed, users can shift the “Schedule Type” switch from “Schedule” to “Once”:

  4. Similarly, fill in the detailed configurations and click “Done”:

Manage actions

All configured actions will be listed in the “Action Center” and organized by different integrations. The Action Center will list the action’s label/name, the action’s creator, the action’s creation time, the action’s trigger type (by data or by schedule), the action’s results, and the action’s settings. Users can review and manage each action on this page.

  • Users can view the latest results for an action by clicking “View Result” under “Result”. The action result lists the action trigger time, the result data, the action status, and any error/warning messages. Users can click the paper clip icon under “Data” to download the result CSV file. For actions with scheduled queries, only the results from the most recent 10 runs are saved under “Result” by default. This can be adjusted with the “Schedule Query Limit” option on the “System Setting” page

  • Users can click “Run” under the “Operation” to trigger the action, click “Stop” to stop the action, and click “Delete” to delete the action
  • Users can click “setting” under the “Action Setting” to re-configure an action. For example, users can change the schedule time interval, modify the action name, re-select the data fields to be included in the action, and update other action configurations as needed:

1.10 Compliance Page (New Feature in V2.7)

Lucidum Compliance currently supports CIS benchmarks.

The CIS Controls® and CIS Benchmarks™ are the global standard and recognized best practices for securing IT systems and data against the most pervasive attacks. The CIS Amazon Web Services Foundations Benchmark v1.3.0 consists of recommendation rules in 4 distinct categories:

  • Identity and Access Management
  • Logging
  • Monitoring
  • Networking

Running a benchmark in Lucidum Compliance involves several steps:

  1. Add compliance benchmark
  2. Edit the queries under each rule or control
  3. Run the rules
  4. View and export the benchmark results
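The benchmark file mentioned in step 1 has a schema defined by Lucidum and is shipped on request. Purely for intuition, not as the real format, it conceptually ties each rule and its metadata to the queries that implement it, along these hypothetical lines:

```python
# Hypothetical shape only; the real benchmark JSON schema is provided by Lucidum.
benchmark = {
    "name": "CIS Amazon Web Services Foundations Benchmark v1.3.0",
    "rules": [{
        "id": "1.3",
        "title": "Ensure credentials unused for 90 days or greater are disabled",
        "category": "Identity and Access Management",
        "queries": [],  # populated via "add query" / "mapping query" below
    }],
}
```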

Add compliance benchmark

On the “Benchmarks” tab, users can click the plus button to add a new compliance benchmark JSON file, which will be provided by Lucidum as requested. By default, CIS AWS Foundations will be included:

Edit queries under benchmark rule

After the benchmark is added, the UI webpage will list all rules and controls under the benchmark. Users can go into each rule to:

  • show all queries: show all queries under a certain rule. For example, the picture below shows that there are 3 queries under CIS AWS Benchmark Rule 1.3, “Ensure credentials unused for 90 days or greater are disabled”:

  • add query: add more queries to a certain rule. Users can specify the name, description, and tag for the new query. The query builder here is very similar to that on the “Explore” page, so users can easily build a query on different Lucidum tables (including asset, user, and compliance tables). Users can also click “Select Existing Query” to quickly load a saved query from the current query library and change it as needed:

  • mapping query: link existing compliance query to a certain rule so different rules may share the same query (for example, some queries on user password policies can be used for both CIS AWS and Azure benchmark rules, hence the same queries can be reused for different rules). In the pop-up “Query Mapping” window, users can select one or more existing queries from other compliance rules to quickly add them to a certain rule:

Users can also:

  • click “Delete” to delete the benchmark
  • click “Edit” to edit the benchmark JSON file
  • click “Export” to export the benchmark into a JSON file with all queries (this is a good way to backup and share compliance benchmark/queries)

Run the rules and view/export the results

After the rule queries are created, users can go to the “Dashboard” tab and:

  • click the “Schedule” button to set up the running schedule for all rules
  • click “Start” to enable the schedule
  • click “Stop” to disable the schedule
  • click “Export” to export the latest scheduled run results

Users can also run a certain rule manually and export the results by:

  • selecting “Run” under the “Action” menu

  • selecting “Details” to show the query running results, and send the query/data to the Action Center if needed
  • selecting “Export Data” to export the query results into a zipped CSV file
  • selecting “Export Queries” to export the queries into a zipped file

The benchmarking summary is shown on the top of the “Dashboard” page:

  • Passed: Number of rules with queries that pass the benchmark
  • Failed: Number of rules with queries that fail the benchmark
  • Not Applicable: Number of rules that do not have any queries

Users can also quickly filter the rule running status by clicking the “Status” menu. For example, the picture below only shows the “Passed” rules from the benchmark:

1.11 Field Display Page

In “Field Display Management”, users can customize the extra fields to display in the Lucidum UI.

To add a new field display configuration, users can click “New Field Display” on this page. Under the “New Field Display” pop-up window, users can specify the raw field name, the display name and the field description. For example, if the user has a raw field, “Host_Name”, and wants to show this field as “Host Name” in the Lucidum UI, the setting is illustrated in the figure below. By default, all customized fields will be placed under the “Customer Fields” group when showing the record details.

Users can also switch the field display configuration from “basic” to “advanced” mode for more advanced field display options. Caution: Please contact Lucidum technical support before changing the advanced settings, as an incorrect configuration may negatively impact the UI display.

1.12 Data QC Page

The Data QC page lists basic summary statistics for numerical and categorical fields in different tables so users can quickly check the data quality. The statistics include:

| Data QC Statistic | Description |
|---|---|
| Total count | Total count of records |
| Non-missing count | Count of non-missing records |
| Missing count | Count of missing records |
| Missing percent % | Percentage of missing records |
| Unique count | Count of unique values |
| Unique percent % | Percentage of unique values |
| Min | Minimum value (for numerical fields) |
| Max | Maximum value (for numerical fields) |
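For reference, these statistics are straightforward to reproduce for a single column; a small pandas sketch with made-up values:

```python
import pandas as pd

col = pd.Series([4.0, 8.0, None, 8.0, 15.0])   # made-up numerical field

stats = {
    "Total count":       len(col),
    "Non-missing count": int(col.notna().sum()),
    "Missing count":     int(col.isna().sum()),
    "Missing percent %": round(100 * col.isna().mean(), 2),
    "Unique count":      col.nunique(),          # counts unique non-missing values
    "Unique percent %":  round(100 * col.nunique() / len(col), 2),
    "Min":               col.min(),
    "Max":               col.max(),
}
print(stats)
```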

1.13 License Management Page

Apply for License

Users can apply for the free Lucidum community license directly on Lucidum’s website. If the contact information is valid, users will receive the license file by email. For an enterprise license, please contact Lucidum sales and customer support.

Add License

Users can upload the license file, or copy and paste the license code from the license file, in the Lucidum web UI: click the “Add License” button, then click “Choose File” and select the license file to upload. The license status on this page is updated if the license file is valid.

License Status

License status includes the information below:

  • Licensed To: Licensee name
  • Licensed Type: License type (e.g., FULL, FREE, CLOUD, …)
  • Field Display: Lucidum UI may limit which fields to display depending on the license type, for example, risk scores may not be shown for the free trial license
  • Features: Features enabled under current license (e.g., data module, risk module, user module, asset module, lab module, action module, …)
  • Expiration: License expiration date (in UTC)

License usage graphs show license usage metrics, including:

  • Daily License Usage: Number of assets discovered per day
  • Average License Usage: Average number of assets discovered in the past month

Lucidum may change the license usage metrics in the future.

1.14 Connection Page

The “Connection” page has three components: Connector Configuration, AirFlow Trigger and Metrics Data.

“Connector Configuration” is for users to self-configure and self-test the Lucidum connectors to different data sources. Users can click the “Add” button to add a new connector, specify the connector’s configurations, test the connection, and save the connector settings.

The connectors are listed under “Connector Configuration” after being configured successfully. Users can test, configure, or delete any connector as needed. For example, under the “aws” connector, users can click “test all” to test the connections to different AWS services. Users can also click “config” under “Action”, enter the role ARNs from all additional AWS accounts in the “Assume Role” box as a comma-separated list of double-quoted strings, and click “OK”. The Lucidum web UI will test whether role assumption works for the additional AWS accounts.
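The role-assumption test presumably amounts to an STS AssumeRole call per configured ARN; a hedged sketch using boto3, with a placeholder ARN:

```python
import boto3

role_arns = ["arn:aws:iam::111122223333:role/LucidumReadOnly"]  # placeholder

sts = boto3.client("sts")
for arn in role_arns:
    # Raises a ClientError if the role cannot be assumed from this account.
    resp = sts.assume_role(RoleArn=arn, RoleSessionName="lucidum-connector-test")
    print(arn, "->", resp["AssumedRoleUser"]["Arn"])  # success if no exception
```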

For the “api” connectors, users can click “test” to test an API connection, or click “config” to change the API connection settings (such as the API password, token, or secrets). For sensitive credentials, the UI automatically encrypts the original input and saves the encrypted strings in the backend database for better data security. Users can click the lock icon to view the decrypted original credentials if needed:
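The manual does not name the cipher in use; as one common approach to encrypting credentials at rest, a symmetric scheme such as Fernet works like this (illustrative only):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # kept server-side, never stored with the data
f = Fernet(key)

token = f.encrypt(b"api-secret-123")  # encrypted string saved in the backend DB
print(f.decrypt(token))               # b'api-secret-123', what the lock icon reveals
```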

“AirFlow Trigger” is for triggering Lucidum Airflow jobs manually. Users can click “run” under “Action” to trigger the daily scheduled “docker_dag” job. This runs the Lucidum data ingestion pipeline and machine learning engines to generate the outputs. Please wait until the docker_dag’s status becomes “Success”. Depending on the data volume, the ingestion process may take from 20 minutes up to several hours.

“Metrics Data” records the detailed data ingestion metrics from different data sources, including ingestion status, start time, end time, duration, number of input records, number of output records, list of input fields, list of output fields, and more. The metrics can be searched by keywords and date ranges. The data ingestion flow chart on the Home page is generated from these metrics. Per agreement with the user, Lucidum may collect and return some of these metrics for better customer support, issue troubleshooting, and product enhancement.

1.15 System Status

CPU/Memory/Disk Usage

The charts at the top monitor real-time CPU/memory/disk usage at the host level. A chart turns red if the resource usage exceeds a certain threshold.

System Event Log

The system event log records different system events, including user login information, API accesses, and system errors. The logs can be searched by severity level, keywords, and date ranges.

1.16 User Management

Each user can have multiple roles, each role can have multiple permissions, and each permission defines read/write privileges on certain system resources. If a user attempts to access a UI resource without a valid permission, a “403 Forbidden” error is returned.
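Conceptually, this is a user-to-roles-to-permissions lookup guarding each resource. A minimal Flask-style sketch of such a check (illustrative only, not Lucidum’s actual code; the role and permission names are sample data):

```python
from functools import wraps
from flask import Flask, abort, g

app = Flask(__name__)
ROLE_PERMS = {"IT_Operation": {"Read DataQC", "Search"}}  # sample role grants

def require(perm):
    """Reject the request with 403 Forbidden unless the user's roles grant perm."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            roles = getattr(g, "roles", [])   # assumed to be set at login
            granted = set().union(*(ROLE_PERMS.get(r, set()) for r in roles))
            if perm not in granted:
                abort(403)
            return fn(*args, **kwargs)
        return wrapper
    return deco

@app.route("/dataqc")
@require("Read DataQC")
def dataqc():
    return "ok"
```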

Role Management

Lucidum UI provides a set of pre-defined roles as listed below:

| Pre-defined Role | Description |
|---|---|
| Admin | Administrator |
| IT_Operation | Same as Admin, except for changing the admin password and managing licenses |
| Lucidum_Support | Used by Lucidum support for customized query updates only. Please do not assign this role to normal users |
| Api_Users | Programmatic access to the Lucidum API (cannot access the UI) |

Users can also create a new role under “Role Management” and assign certain permissions to this role.

To add permissions to one certain role, users can select the available permissions from the box on the left and click the “>” arrow to add the selected permissions. Similarly, to remove permissions from one certain role, users can select the permissions to remove from the box on the right and click the “<” arrow to remove the selected permissions. The table below describes the details for the available permissions.

| Permission | Description |
|---|---|
| Front_*** | Access to the corresponding UI sub-menu; e.g., a user with the Front_DataQC permission can access the corresponding sub-menu on the left side |
| Read Chart | Read access to the Home page |
| Read Action | Read access to the Action page |
| Write Actions | Read/write access to the Action page (user can add or change actions) |
| Query Builder | Access to the Explore page (user can manage saved queries) |
| Search | Access to the Explore page (user can submit and run queries) |
| Read License | Read access to the License page |
| Modify License | Write access to the License page (user can upload and modify the license) |
| UserManage | Read/write access to the User Management page (users can only change their own user settings) |
| RoleManage | Read/write access to the Role Management page |
| Read System Usage | Access to resource usage monitoring on the System Status page |
| Read System Log | Access to the system event logs on the System Status page |
| Read System Setting | Read access to the System Setting page |
| Write System Setting | Read/write access to the System Setting page |
| Start/Stop Runner | Retired; no longer relevant |
| Read DataQC | Access to the Data QC page |
| Read/Write DataMapping | Retired; no longer relevant |
| Customized Query | Read/write access to the Lucidum support page for updating the UI back-end queries (not for normal users) |
| API_Operator | Access to the Lucidum API |
| Schedule | Read/write access to query scheduling |

LDAP Role Management

The Lucidum UI also supports LDAP roles. However, LDAP roles need to be mapped to Lucidum local roles beforehand. For example, as shown in the figure above, the LDAP role “DEVELOPER” is mapped to the Lucidum system role “IT_Operation”. All LDAP users with the “DEVELOPER” role will then have the permissions of the “IT_Operation” role.

User Management

The default password for the system “admin” user is 12345678; make sure to change this default password upon first login by clicking “change password” under “Action”.

Only the user with the Admin role can create a new user or change other users’ profiles (e.g., user password and roles). To create a new user, click “New User” under “User Management”.

Under the “New User” pop-up window, specify the new username, user email, user password, user’s time zone, and user’s roles. Then click “Confirm” to finish the new user creation process.

1.17 System Setting

The System Setting page contains multiple setting sections. Each section can be updated and saved individually by clicking the “Update” button in its top-right corner.

Data Settings

| Data Setting | Description |
|---|---|
| Data retention in days | Number of days data is retained in the Lucidum database; by default, data is kept for 30 days |
| Data lookback in days | Number of days to look back during data collection; by default, Lucidum collects data from the previous 7 days |

Metrics Settings

| Metrics Setting | Description |
|---|---|
| Metrics Log Interval (minutes) | UI logging time interval; by default, the Lucidum UI generates the logs every 10 minutes |

Query Settings

| Query Setting | Description |
|---|---|
| Schedule Query Limit | Maximum number of results saved for scheduled queries (in the Job Manager) |
| Query History Limit | Maximum number of queries saved in the Query Run History (under Query Management) |

Mail Settings

| Sender Email Setting | Description |
|---|---|
| Host | Sender email host name, e.g., smtp.gmail.com |
| Port | Sender email port number, e.g., 587 |
| User Name | Sender email address |
| Password | Sender email account password |
| Auth | Mail-sending authorization; enabled by default |
| Start TLS | Mail-sending TLS; enabled by default |
| SSL Trust | Mail-sending SSL trust; enabled by default |
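These settings map directly onto a standard SMTP session; a short sketch with Python’s smtplib, using placeholder values:

```python
import smtplib

with smtplib.SMTP("smtp.gmail.com", 587) as smtp:        # Host, Port
    smtp.starttls()                                      # Start TLS enabled
    smtp.login("sender@example.com", "app-password")     # User Name / Password (Auth)
    smtp.sendmail("sender@example.com", ["to@example.com"],
                  "Subject: Lucidum scheduled report\n\nSee query results.")
```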

LDAP Settings

These are the settings for the LDAP role management, which can be obtained from the enterprise LDAP administrator.

| LDAP Setting | Description |
|---|---|
| LDAP Url | LDAP server URL |
| LDAP Base Dn | LDAP base DN |
| LDAP User Dn Patterns | LDAP user DN patterns |
| LDAP Group Dn Patterns | LDAP group DN patterns |
| LDAP Manager User | LDAP manager user |
| LDAP Manager Password | LDAP manager user password |
| LDAP Password Attribute | LDAP password attribute |
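These settings are typically exercised by binding as the manager account and searching under the base DN; a sketch with the ldap3 library, using placeholder values:

```python
from ldap3 import ALL, Connection, Server  # pip install ldap3

server = Server("ldaps://ldap.example.com", get_info=ALL)    # LDAP Url
conn = Connection(server,
                  user="cn=manager,dc=example,dc=com",       # LDAP Manager User
                  password="secret",                         # LDAP Manager Password
                  auto_bind=True)                            # bind, or raise on failure

conn.search("dc=example,dc=com",                             # LDAP Base Dn
            "(uid=jdoe)",                                    # per the User DN patterns
            attributes=["memberOf"])                         # groups drive role mapping
print(conn.entries)
```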