Lucidum UI User Manual

Part 1. Lucidum Web UI Pages

Login with SSO

Lucidum provides SSO (Single Sign-On) support for OKTA accounts.

First, users click the "Sign in with OKTA" button, after which the page is redirected to the OKTA login URL.

Then enter the OKTA account name and password to log in. If the authentication is successful, the user is logged into Lucidum automatically.

Login user information

The current user's name is shown in the top right corner of the web UI after the user logs in.

Users can:

  • Click "Change password" to change the current user's password
  • Click "Logout" to log out

The bell icon next to the user name shows the number of system notifications. Users can click the icon to review system alerts, messages, and notifications.

Home Page

The Home page includes three parts:

Summary statistics of user, asset and data

There are three pie charts on the Home page that display summary statistics of users, assets, and data discovered by the Lucidum platform.

  1. User pie chart: Shows users with assets found vs. users without assets found
  2. Asset pie chart: Shows cloud assets vs. other assets
  3. Data pie chart: Shows the different data types discovered

All three pie charts support the “drill-down” feature. Users can click certain pieces of the pie charts to search for more detailed information.

New user, asset and data found

On the right side of the homepage, new user/asset/data found recently will be shown in descending order of risk. Users can click on each item to view its detailed information.

Data ingestion flow


The data ingestion flow chart shows how different data sources are ingested into the Lucidum platform. Users can hover the mouse over each input node to see the data ingestion status for a certain data source, including the ingestion start time, duration, and number of output records. A data source with ingestion errors, or one that has been running for over 24 hours, is shown in red on the ingestion flow chart so users can quickly identify abnormal data ingestion.

Action Page

The Action page lists all actions the user has triggered from different pages. Users can change an action's severity, set the integration target, and modify the action. Action records are searchable through the search bar at the top of the page.

Note: The Action page will be retired soon, as Lucidum is building a new Action Center to provide better action management and integration.

Explore Page

The Explore page is the core of the Lucidum web UI; it provides an ad-hoc query interface for users to run complex searches.

Query Builder

Users can select one table (for example, “Search Asset”) to build the query. By default, Lucidum UI provides six main tables for search, including:

  • Search Asset: Current asset table
  • Search User: Current user table
  • Search Asset-History: Asset history table
  • Search User-History: User history table
  • Search Asset-IP History: Asset-IP mapping history table
  • Search User-IP History: User-IP mapping history table

The query is organized into groups of "AND/OR" conditions. Users can click the add button to add a condition and the remove button to remove one. "AND" conditions are at the first level in the query builder logically. Each group of "AND" conditions is separated by dotted horizontal lines, and different "OR" conditions are placed under a certain "AND" condition. Users can combine different "AND/OR" conditions to create complex searches. For example, the conditions shown in the figure above can be described as:

Search Asset Table where:

(Idle Instance = “yes” OR Last Time Seen is earlier than 01/01/2020) AND

Status match “running”

Users can click “Run” to submit the query or click “Clear all queries” to start a new query. The query results will be listed in the table under the query builder. Within the result table, the user can click “Record Detail” under the “Detail” menu to view the details of each record.

In the “Detail” pop-up window, Lucidum UI groups the information into different categories, including:

  • Age: Age information, such as first time seen, last time seen, new asset/user or not, and asset status
  • Applications: Applications installed on the asset, including the application name, version, and data source
  • Asset: Asset information, including asset name, asset tag, IP address, MAC address, and more
  • Cloud: Cloud information, including cloud account ID, cloud account name, cloud instance ID, and more
  • Customer Fields: Special fields from the customer. For example, one customer may provide their own asset inventory spreadsheets to Lucidum and certain fields are from this customer only. These fields will be put into the “Customer Fields” category
  • Data: Data information, including data category, data classification, file access history, and more
  • Risk: Risk information, including risk level, standardized risk score, and top risk factors
  • User: User information, including user name, department, job title, manager name, and more
  • DataSource: Data source information. Here the raw data from the related data sources will be organized under different vertical tabs so users can click through each of them and have a holistic view of all data sources for this record
  • Others: Other contextual information, including hardware configuration, location, region, and more

Query Operators

Each query can include different operators, depending on the type of field the user selects. More details on the query operators are listed below:

Field type: Number
Operators: Is equal to, Is greater than or equal to, Is greater than, Is less than or equal to, Is less than, Is not equal to, Exists, Empty
Notes:
  • Exists: Field value is not missing and is not equal to null
  • Empty: Field value is missing, is equal to null, or the field does not exist

Field type: String
Operators: Match, Not match, Is equal to, Is not equal to, Exists, Empty
Notes:
  • Match: Field value includes certain characters or matches a certain pattern (case insensitive; supports regular expressions)
  • Not match: Opposite of "Match"
  • Is equal to: Field value equals a string exactly (case sensitive)
  • Is not equal to: Opposite of "Is equal to"

Field type: List
Operators: Match, Not match, Is equal to, Is not equal to, In, Not in, Exists, Empty
Notes:
  • Match: An element in the field includes certain characters or matches a certain pattern (case insensitive; supports regular expressions)
  • Is equal to: An element in the field equals a string exactly (case sensitive)
  • In: The comma-separated values are found in the field (case sensitive). For example, "IP_Address in 10.0.0.1, 10.0.0.2" searches for assets that have the IP address 10.0.0.1 OR 10.0.0.2 literally

Search tips:

  • Special characters in the "Match"/"Not Match" operators: Since the match operator supports regular expressions, special characters used in regular expressions (such as dots, brackets, and spaces) need to be escaped with "\" if they appear in the search value. For example, to search for "Firefox (ver 23.0)" using the match operator, the query will be Application.Version → match → Firefox \(ver 23\.0\)
  • Special characters in full term search: Use double quotes around the search value if the value contains special characters. For example, to search IP address 10.1.2.5 with full term search, the value needs to be double quoted as “10.1.2.5”; otherwise the database engine will split the term by “.” and search for 10 or 1 or 2 or 5 individually.
  • "Match" vs. "Is equal to" vs. "In": These three operators all search for certain strings, but they are used in different situations, as listed below. Generally, "Match" is the most widely used as it is case insensitive and supports partial text search with flexible regular expressions, but be cautious when the search value includes special characters as described above. "In" is very useful when searching on a list-type field (for example, data sources and IP addresses) as it accepts multiple search values at one time. "Is equal to" is very accurate and runs faster than the "Match" operator when searching for an exact text, and it is not affected by special characters.
Match
  • Field Type: String or List
  • Case Sensitivity: Case insensitive
  • Regular Expression: Yes
  • Partial Text Search: Yes
  • Exact Text Search: Yes (with regex)
  • Example: Data Sources "match" "aws"
  • Explanation: Searches for data source names containing the "aws" substring, e.g., "aws_s3", "AWS_EC2". "AWS_EC2", despite the different letter case, satisfies the query

Is equal to
  • Field Type: String or Number
  • Case Sensitivity: Case sensitive
  • Regular Expression: No
  • Partial Text Search: No
  • Exact Text Search: Yes
  • Example: Asset "is equal to" "Win_AD"
  • Explanation: Searches for an asset whose name is exactly "Win_AD". "win_ad", with a different letter case, does not satisfy the query

In
  • Field Type: List
  • Case Sensitivity: Case sensitive
  • Regular Expression: No
  • Partial Text Search: No
  • Exact Text Search: Yes
  • Example: Data Sources "in" "aws_s3, aws_ec2"
  • Explanation: Searches for data source names containing "aws_s3" or "aws_ec2". "AWS_EC2", with a different letter case, does not satisfy the query

Full Term Search

Users can select “Full Term Search” to quickly search one term in a certain table.

For example, the figure above shows a full term search on the keyword "Windows", and the web UI will search this keyword across all fields in the asset table. Note that the keyword here is case insensitive and must be a complete/full term: if a field value is "Windows 10", a full term search using "WINDOWS" or "windows" will find this record; however, a search using "Win" (a partial term) will return no results.

Query Menu

Users can click the menu button next to "Clear all queries" to expand the query management menu.

  • Export Result: Click this to export the search results into a CSV file. Users can choose the export fields to be included in the output CSV file

  • Save Query: Click this to save the current query. Users can specify the saved query name, a detailed description, and the saved query group for future use.

  • Query Management: Click this to go into query management, which is described in more detail below
  • Add Comments: Click this to add comments to the selected query results, which is described in more detail below
  • Edit Columns: Click this to select the desired fields to show in the query results. The list of selected fields is stored locally in the browser cache. For example, the screen below selects four columns to display in the query results: Asset name, User name, First time seen, and Last time seen

Query Management

Query management has two tabs: Query Library and Query Run History.

Query Library lists all the queries saved by the user. Lucidum also includes some pre-built queries to help the first-time user get started. For each saved query, five different actions are provided under the “Action” menu:

  • Use this: Load the saved query into the query builder for a quick repeatable search
  • Copy MQL: Copy the original database query string to the clipboard (for advanced users)
  • Edit: Edit the query name, description, and group
  • Delete: Delete the saved query
  • Schedule setting: Schedule the saved query to run at a certain time and send out email reports. Users can click "Schedule Setting" to expand the schedule setting panel, fill in the recipient emails (separated by commas), specify the schedule, and select the output fields to be included in the report. After clicking the "Confirm" button, the scheduled job starts immediately and is also sent to the "Job Manager" page, which is described in further detail later. This feature will be incorporated into the new Action Center soon as well

Query Library supports importing and exporting the saved queries. For example, one user can export some saved queries into an Excel spreadsheet and share it with other users who can then import the spreadsheet into their query libraries.

  • To export queries: Select one or more queries using the boxes on the left side, and click “Export” button to save the Excel file
  • To import queries: Click “Import” button and select the Excel file to import
  • Users can also select multiple saved queries and delete them in batch by clicking the "Delete" button

Query Run History lists the recent queries run by the user. For each entry in the query run history, four different actions are provided in the “Action” menu:

  • Use this: Load the query into the query builder for a quick search
  • Copy MQL: Copy the original database query string to the clipboard (for advanced users)
  • Delete: Delete the query from the history. Users can also select multiple queries and delete them in batch by clicking the "Delete" button at the top
  • Save To Library: Save the query into the Query Library for future use

Add Comments

Add comments to multiple records

Users can select one or more query records from the result table and add some comments. The record with comments will have a “note” indicator on the right side. Different users can add different comments to the same query result.

Add comments to a single record

Users can also add comments to a single record when viewing the record details. In the record "Detail" pop-up window, users can click "Add Comments" at the top and add comments only to this record.

Users can click "Record Detail" in the "Detail" menu to view the comment history under the "Comments" tab. The comment history lists each comment's creator, details, and creation date. Users can also edit or delete their own comments as needed. Note that only the system admin can delete other users' comments.

Lab Page

The Lab page enables users to upload their own CSV/JSON files into the Lucidum platform and run a quick search or comparison on the uploaded files.

Add data to a new table

Users can start with adding data to a new table. Under the “Add data to new table” tab, users can specify the new table name and the table description, then choose the CSV/JSON file to upload and click “Confirm” to add to the new table. Users can also click “Preview” to preview the file contents before uploading the file.

Add data to an existing table

Users can append records in a file to an existing table. For example, a user might have created an NMAP_Scan table last month and uploaded some NMAP scan reports. This month the user runs another scan. Since the scan reports have the same format, the user uploads the new report to the same NMAP_Scan table.

To do this, users can go to the “Add data to existing table” tab, where it lists all existing tables in the Lucidum database. Users can select one existing table and click “append” in the “Action” menu to add new data to this table. In the pop-up window, users can change the table description and choose the file to be appended to this table. Users can also click the “Preview” button to preview the file contents before uploading the file.

Other actions include:

  • Overwrite: Users can click "overwrite" to replace all existing records with the new data instead of appending it
  • Delete: Users can click “delete” to delete the table
  • View: Users can click “view” to view the file upload history of the table

Search

After uploading the file, users can go to the “Search” tab to query the data. The search functions here are very similar to those on the “Explore” page. The lab search also supports adding comments to the search results.

Compare two tables

Users can easily compare two tables to find out the differences:

  1. Select the base table for comparison in “Table 1 (Base)”
  2. Select “Compared by” field for Table 1. Lab will use this field as an external key from Table 1 to link these two tables
  3. Select the second table for comparison in “Table 2”
  4. Select “Compared by” field for Table 2. Lab will use this field as an external key from Table 2 to link these two tables
  5. Click “Compare” button to show the differences between these two tables:
    • Red color: Records do not exist in Table 2
    • Green color: Records exist in Table 2 but not in Table 1
    • Yellow color: Records exist in both tables but have different values in Table 2
    • Users can also click the “deleted|added|modified” buttons respectively to filter the comparison results. For example, users can click “deleted” to only display the records that do not exist in Table 2

Compare two upload histories

Users can easily compare two upload histories under the same table to find out the differences between any two uploads:

  1. Select the target table in “Table”
  2. Select “Compared by” field. Lab will use this field as an internal key to link the two historical file uploads
  3. Select the first file upload as base from “Upload History (Base)”
  4. Select the second file upload from “Upload History”
  5. Click “Compare” button to show the differences between these two tables or “Export” to export the comparison results to an Excel spreadsheet
    • Red color: Records do not exist in the second file upload
    • Green color: Records exist in the second file upload but not in the first
    • Yellow color: Records exist in both file uploads but have different values in the second upload
    • Users can also click the "deleted|added|modified" buttons respectively to filter the comparison results. For example, users can click "deleted" to only display the records that do not exist in the second file upload

Global Search Page

The "Global Search" page is similar to the "Full Term Search" feature on the "Explore" page, but it goes further to search any keyword across all tables within the Lucidum database. With the global search, users can quickly locate relevant information in one or more tables.

For example, a user can search for "Windows" on this page, and the results will show which table(s) include this keyword along with the record count (e.g., 10,365 records found with the "Windows" term in the current asset table). Users can also click the "Detail" menu to see the detailed records related to the keyword.

Job Manager Page

The "Job Manager" page is closely related to the scheduled query feature. When a user schedules a query on the "Explore" page, the scheduled job is listed here with the query name, query creator name, query description, scheduled job status, last run time, next run time, and result history.

Users can click "Last Run Time" to download the most recent results of a scheduled job as a CSV file, or click "View Result" under "Result History" to view and download more historical results. Users can also click "Stop" to stop the scheduled job, "Run" to start it, and "Delete" to delete it. Note: The "Job Manager" page will be incorporated into the new Action Center as well.

Action Manager Page (In Development)

The “Action Manager” page lists and manages all the actions integrated with third-party external systems, such as Email, Slack, Jira, ServiceNow and so on.

Create a new action

There are two different ways to initiate an action under the query menu from the “Explore” page.

  • Initiation from the selected records: User can select one or more records from the result table on the “Explore” page and send these records to the Action Center
  • Initiation from the query itself: User can also send the query directly to the Action Center without selecting any records. The action center will process the query and include the query results automatically for certain actions

To initiate an action from the selected records:

  1. Select one or more records from the result table on the “Explore” page
  2. Click “Sent to Action Center” under the query menu and select “Send Data”
  3. Specify the action name and click “Sent”

  4. Select one or more integration systems for this action, and fill in the detailed configurations. For example, email integration will require the recipients' email addresses, while ServiceNow integration will require ServiceNow's host name, connection credentials, and target class table name.

To initiate an action from the query itself:

  1. For the current active query in the query builder, click "Sent to Action Center" under the query menu and select "Send Query"; for a saved query in "Query Management", select the query to send and click "Sent to Action Center" under the "Action" menu
  2. Specify the action name and click "Sent"
  3. Similarly, select one or more integration systems for this action, and fill in the detailed configurations

Manage actions

All actions initiated from the "Explore" page will be listed in the "Action Center" and organized by action type. Users can review and manage each action on this page.

  • Users can view the latest results for the actions under "Result History". For actions with scheduled queries, only the results from the most recent 10 runs are saved under "Result History" by default. This can be adjusted with the "Schedule Query Limit" option on the "System Setting" page
  • Users can click "Run" to trigger the action, "Stop" to stop the action, and "Delete" to delete the action
  • Users can click "setting" under "Action Setting" to re-configure an action. For example, users can change the schedule time interval, modify the action name, select the data fields to be included in the action, and update the action configurations as needed.

Field Display Page

In “Field Display Management”, users can customize the extra fields to display in the Lucidum UI.

To add a new field display configuration, users can click "New Field Display" on this page. Under the "New Field Display" pop-up window, users can specify the raw field name, the display name, and a field description. For example, if a user has a raw field, "Host_Name", and wants to show this field as "Host Name" in the Lucidum UI, the raw field name would be set to "Host_Name" and the display name to "Host Name".

Users can also switch the field display configuration from "basic" to "advanced" mode for more advanced settings on the field display options. Please contact Lucidum technical support before changing the advanced settings, as they have a direct impact on the UI display.

Data QC Page

The Data QC page lists basic summary statistics for numerical and categorical fields in different tables so users can quickly check the data quality. The statistics include:

  • Non-missing count: Count of non-missing records
  • Missing count: Count of missing records
  • Missing percent %: Percentage of missing records
  • Unique count: Count of unique values
  • Unique percent %: Percentage of unique values
  • Min: Minimum value (for numerical fields)
  • Max: Maximum value (for numerical fields)

License Management Page

Apply for License

Users can click the "Apply for License" button in the top right corner of the page and will be redirected to fill out a form to apply for the Lucidum community license. If the contact information is valid, users will receive the license file by email. For an enterprise license, please contact Lucidum sales and customer support.

Add License

Users can upload the license file, or copy and paste the license code from the license file, in the Lucidum web UI: first, click the "Add License" button, then click "Choose File" and select the license file to upload to the web UI.

License Status

License status includes the information below:

  • Licensed To: Licensee name
  • Licensed Type: License type (e.g., FULL, FREE, CLOUD, …)
  • Field Display: The Lucidum UI may limit which fields to display depending on the license type; for example, risk scores may not be shown for a trial license
  • Features: Features enabled under the current license
  • Expiration: License expiration time (UTC)

License usage graphs show some license metrics including:

  • Daily License Usage: Number of assets discovered per day. Lucidum may change the license usage metrics in the future
  • Average License Usage: Average number of assets discovered in the past month. Lucidum may change the license usage metrics in the future

Connection Page

The “Connection” page has three components: Connector Test, AirFlow Trigger and Metrics Data.

"Connector Test" is used to configure and test the connections to different data sources. For example, under the "aws" connector, users can click "test all" to test the connections to different AWS services. Users can also click "config" under "Action", fill in the role ARNs from all additional AWS accounts as a comma-separated list (each ARN in double quotes) in the "Assume Role" box, and click "OK". The Lucidum web UI will then test whether role assumption works for the additional AWS accounts.
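As an illustration only, with hypothetical account IDs and role name (not values from this manual), the "Assume Role" box might contain a value such as:

"arn:aws:iam::111111111111:role/LucidumAssumeRole", "arn:aws:iam::222222222222:role/LucidumAssumeRole"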

"AirFlow Trigger" is used to trigger Lucidum Airflow jobs manually. Users can click "run" under "Action" to trigger the daily scheduled "docker_dag" job. This runs the Lucidum data ingestion pipeline and machine learning engines to generate the outputs. Please wait until the docker_dag's status becomes "Success". Depending on the data volume, the data ingestion process may take from 20 minutes up to several hours to complete.

"Metrics Data" records the detailed data ingestion metrics from different data sources, including the ingestion status, start time, end time, duration, number of input records, number of output records, list of input fields, list of output fields, and more. The metrics can be searched by keywords and date ranges. The data ingestion flow chart on the Home page is generated from these metrics.

System Status

CPU/Memory/Disk Usage

The charts at the top monitor real-time CPU/Memory/Disk usage at the host level. A chart turns red if the resource usage exceeds a certain threshold.

System Event Log

System event log records different system events, including user login information, API accesses and system errors. The logs can be searched by severity levels, keywords and date ranges.

User Management

Each user can have multiple roles, each role can have multiple permissions, and each permission defines read/write privileges for certain system resources. If a user attempts to access a UI resource without a valid permission, a "403 Forbidden" error is returned.

Role Management

Lucidum UI provides a set of pre-defined roles as listed below:

  • Admin: Administrator
  • IT_Operation: Same as Admin, except for changing the admin password and managing licenses
  • Lucidum_Support: Used by Lucidum support for customized query updates only. Please do not assign this role to normal users
  • Api_Users: Programmatic access to the Lucidum API (cannot access the UI)

Users can also create a new role under “Role Management” and assign certain permissions to this role.

To add permissions to one certain role, users can select the available permissions from the box on the left and click the “>” arrow to add the selected permissions. Similarly, to remove permissions from one certain role, users can select the permissions to remove from the box on the right and click the “<” arrow to remove the selected permissions.

The list below describes the available permissions.

  • Front_***: Access to the corresponding UI sub-menu; e.g., a user with the Front_DataQC permission can click the Data QC sub-menu on the left side
  • Read Chart: Read access to the Home page
  • Read Action: Read access to the Action page
  • Write Actions: Read/Write access to the Action page (user can add or change actions)
  • Query Builder: Access to the Explore page (user can manage saved queries)
  • Search: Access to the Explore page (user can submit and run queries)
  • Read License: Read access to the License page
  • Modify License: Write access to the License page (user can upload and modify licenses)
  • UserManage: Read/Write access to the User Management page (users can only change their own user settings)
  • RoleManage: Read/Write access to the Role Management page
  • Read System Usage: Access to the resource usage monitoring under the System Status page
  • Read System Log: Access to the system event logs under the System Status page
  • Read System Setting: Read access to the System Setting page
  • Write System Setting: Read/Write access to the System Setting page
  • Start/Stop Runner: This permission is retired and no longer relevant
  • Read DataQC: Access to the Data QC page
  • Read/Write DataMapping: This permission is retired and no longer relevant
  • Customized Query: Read/Write access to the Lucidum support page for updating the UI back-end queries (not for normal users)
  • API_Operator: Access to the Lucidum API
  • Schedule: Read/Write access to query scheduling

LDAP Role Management

Lucidum UI also supports LDAP roles. However, LDAP roles need to be mapped to Lucidum local roles beforehand. For example, as shown in the figure above, the LDAP role "DEVELOPER" is mapped to the Lucidum system role "IT_Operation". All LDAP users with the "DEVELOPER" role will then have the permissions of the "IT_Operation" role.

User Management

The default password for the system "admin" user is 12345678; make sure to change this default password upon the first login by clicking "change password" under "Action".

Only the user with the Admin role can create a new user or change other users’ profiles (e.g., user password and roles). To create a new user, click “New User” under “User Management”.

Under the “New User” pop-up window, specify the new username, user email, user password, user’s time zone, and user’s roles. Then click “Confirm” to finish the new user creation process.

System Setting

The System Setting page contains multiple setting sections. Each section can be updated and saved individually by clicking the “Update” button on the top right corner.

Data Settings

  • Data retention in days: Number of days data is retained in the Lucidum database; by default, data is kept for 30 days
  • Data lookback in days: Number of days to look back during data collection; by default, Lucidum collects data from the previous 7 days

Metrics Settings

  • Metrics Log Interval (minutes): UI logging time interval; by default, the Lucidum UI generates logs every 10 minutes

Query Settings

  • Schedule Query Limit: Maximum number of results saved for scheduled queries (in the Job Manager)
  • Query History Limit: Maximum number of queries saved in the Query Run History (under Query Management)

Mail Settings

These are the settings for the sender email. The query scheduler uses this sender email to send out the reports.

  • Host: Sender email's host name, e.g., smtp.gmail.com
  • Port: Sender email's port number, e.g., 587
  • User Name: Sender email address
  • Password: Sender email account's password
  • Auth: Mail sending authorization; enabled by default
  • Start TLS: TLS for sending email; enabled by default
  • SSL Trust: SSL trust for sending email; enabled by default

LDAP Settings

  • LDAP Url: LDAP server URL
  • LDAP Base Dn: LDAP base DN
  • LDAP User Dn Patterns: LDAP user DN patterns
  • LDAP Group Dn Patterns: LDAP group DN patterns
  • LDAP Manager User: LDAP manager user
  • LDAP Manager Password: LDAP manager user password
  • LDAP Password Attribute: LDAP password attribute

Part 2. Lucidum Web APIs

API Access Token

There are two types of tokens for accessing the Lucidum API. Users need to generate either one of them for programmatic API access.

  • Basic authorization token with Client ID and Secret: Users can generate the Client ID and Secret by clicking the "edit" user link on the "User Management" page and then clicking the "Generate ClientID/Secret" button. The Client ID and Secret will expire in 30 days or after a system restart, whichever comes first.

  • Permanent API token: Users can generate the permanent API token by clicking the "Generate Token" button on the same page. The token will not expire unless it is re-generated by the user. This token can be used directly in the API request header, for example, "Authorization: Bearer {token}".
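As an illustration only (not an official client library), a minimal Python sketch of attaching the permanent API token to a request header is shown below; the host URL and token value are placeholders:

# Placeholders: replace with your Lucidum host and the permanent API token
# generated on the "User Management" page.
LUCIDUM_HOST = "https://lucidum.example.com"
API_TOKEN = "your-permanent-api-token"

# The manual specifies the header format "Authorization: Bearer {token}".
# This header dictionary can be passed to an HTTP client (e.g., the requests
# library) for the API calls shown in the following sections.
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}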

API-Based Query

Users can access the Lucidum main database programmatically through the API. Currently, the Lucidum UI limits API access to no more than 10 queries per minute to avoid negative impacts on the system. Below is an API request example:

Endpoint: /CMDB/v1/data/cmdb

REST method: POST

Request body:

{
  "collectionName": "AWS_CMDB_Output",
  "outputFields": ["CPU_Cores", "sourcetype"],
  "filter": [
    {
      "field": "CPU_Cores",
      "operator": "=",
      "value": "4"
    }
  ],
  "page": {
    "currentPage": 1,
    "itemPerPage": 25
  }
}

Users can change the parameter values below as needed:

  • collectionName: Lucidum database target table name
  • outputFields: The list of fields to be included in the API response. If outputFields is set to an empty list, all data fields from the target table will be returned.
  • filter: The record filters applied to the API response. If filter is set to an empty list, all records from the target table will be returned. More information on filter operators, with examples for different field types, is listed below. Note that the operators used in the API call are slightly different from the operators on the Explore page.

  • Number: >, <, =, >=, <=, !=
    Example: {"field": "CPU_Cores", "operator": ">=", "value": 4}
  • Number: in
    Example: {"field": "CPU_Cores", "operator": "in", "value": [4, 8, 16]}
  • String: =
    Example: {"field": "Asset_Name", "operator": "=", "value": "ec2-1324"}
  • String: like
    Example: {"field": "Asset_Name", "operator": "like", "value": "%abc%"}
  • String: in
    Example: {"field": "Asset_Name", "operator": "in", "value": ["ec2-123", "ec2-456"]}
  • Boolean: =
    Example: {"field": "Is_Virtual", "operator": "=", "value": true}
  • List: contains
    Example: {"field": "List_Users", "operator": "contains", "value": ["user1", "user2"]}
  • List: not contains
    Example: {"field": "List_Users", "operator": "not contains", "value": ["user1", "user2"]}
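As a rough illustration only (not an official Lucidum client; the host name and token are placeholder assumptions), the request example above could be submitted from Python as follows:

import requests

LUCIDUM_HOST = "https://lucidum.example.com"   # placeholder host
API_TOKEN = "your-permanent-api-token"         # placeholder permanent API token

# Request body from the example above: return CPU_Cores and sourcetype
# for records in AWS_CMDB_Output whose CPU_Cores equals 4.
payload = {
    "collectionName": "AWS_CMDB_Output",
    "outputFields": ["CPU_Cores", "sourcetype"],
    "filter": [
        {"field": "CPU_Cores", "operator": "=", "value": "4"}
    ],
    "page": {"currentPage": 1, "itemPerPage": 25},
}

# Keep in mind the documented limit of 10 API queries per minute.
response = requests.post(
    f"{LUCIDUM_HOST}/CMDB/v1/data/cmdb",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
result = response.json()
print(result["page"]["totalCount"], "matching record(s)")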

Below is an API response example:

Response Body:

{
  "code": 200,
  "data": [
    {
      "_id": "5cf5db6ccaf94c7fc261c08c",
      "CPU_Cores": 8,
      "sourcetype": [
        "AWS_EC2"
      ]
    }
  ],
  "page": {
    "currentPage": 1,
    "itemPerPage": 25,
    "totalPage": 1,
    "totalCount": 1
  }
}

Some API response codes are listed below as a reference:

  • 201: Created
  • 401: Unauthorized
  • 403: Forbidden
  • 404: Not Found
  • 400001: Invalid numeric operator
  • 400002: Invalid string operator
  • 400003: Invalid boolean operator
  • 400004: Invalid list operator
  • 400005: Invalid data operator
  • 400006: Invalid page number
  • 400007: Invalid item per page
  • 401001: Invalid token
  • 401002: Invalid collection name
  • 401003: Invalid output field

API for Lab

Lucidum also provides API access for the Lab tables.

API for querying file upload history under a single Lab table

Below is an example of this API call:

Endpoint: /CMDB/api/upload/customer/collection_history

REST method: GET

Request body:

{
  "sort": "__lucidum__uploadtime__, asc",
  "tableName": "_TableName_"
}

  • sort: Upload history sorting order
  • tableName: Name of the Lab table whose file upload history is queried
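As an illustration only (the host, token, and table name below are placeholders), and noting that the manual documents these parameters as a request body on a GET call, a minimal Python sketch of this request could look like the following:

import requests

LUCIDUM_HOST = "https://lucidum.example.com"   # placeholder host
API_TOKEN = "your-permanent-api-token"         # placeholder permanent API token

# Parameters as documented above; "MyCSVTable" is a placeholder Lab table name.
body = {
    "sort": "__lucidum__uploadtime__, asc",
    "tableName": "MyCSVTable",
}

# A JSON body on a GET request follows the request format shown above; some
# HTTP clients or proxies may instead expect these values as query parameters.
response = requests.get(
    f"{LUCIDUM_HOST}/CMDB/api/upload/customer/collection_history",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=body,
    timeout=60,
)
response.raise_for_status()
for upload in response.json()["content"]:
    print(upload["upload_id"], upload["file_name"], upload["model"])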

Below is an example of the API response:

Response Body:

{
  "content": [{
    "_id": "5f68cbf1adf81c00016d5a75",
    "__lucidum__uploadtime__": 1600703472,
    "table_name": "MyCSVTable",
    "model": "create",
    "creator": "admin",
    "file_name": "user1_table.csv",
    "table_description": "MyCSVTable",
    "upload_remark": "upload user1_table",
    "version": 1.0,
    "upload_id": 61
  }, {
    "_id": "5f68cc49adf81c00016d5a7c",
    "__lucidum__uploadtime__": 1600703561,
    "table_name": "MyCSVTable",
    "model": "updateAppend",
    "creator": "admin",
    "file_name": "user2_table.csv",
    "table_description": "MyCSVTable",
    "upload_remark": "1600703561#user2_table.csv",
    "version": 2.0,
    "upload_id": 62
  }]
}

API for comparing file upload histories and getting comparison results

Below is an example of this API call. The API response includes the comparison results, which can be saved as an Excel spreadsheet if needed.

Endpoint: /CMDB/api/upload/customer/collection_history/export?params=

REST method: GET

Request body: none

params: A URL-encoded JSON string, as follows:

{
  "compareModel": "ALL",
  "firstId": "first upload id",
  "lastId": "last upload id",
  "sort": [],
  "groupBys": {
    "field1": "field2"
  }
}

  • compareModel: Comparison filter; valid values are "ALL" (include all records in the comparison), "DELETED" (include only records missing from the lastId upload), "MODIFIED" (include only records changed in the lastId upload), and "ADDED" (include only records newly added in the lastId upload)
  • firstId: ID of the first file upload (the base for the comparison)
  • lastId: ID of the second file upload
  • sort: This parameter is currently unused
  • groupBys: field1 is the "Compared by" field from the first file upload, and field2 is the "Compared by" field from the second file upload
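Since this endpoint expects the URL-encoded JSON string in the params query parameter, here is a minimal Python sketch of building the request (the host, token, upload IDs, and "Compared by" field names are placeholder assumptions); the response can then be saved as an Excel spreadsheet:

import json
import urllib.parse

import requests

LUCIDUM_HOST = "https://lucidum.example.com"   # placeholder host
API_TOKEN = "your-permanent-api-token"         # placeholder permanent API token

compare_params = {
    "compareModel": "ALL",
    "firstId": "61",                         # placeholder: upload_id of the base upload
    "lastId": "62",                          # placeholder: upload_id of the second upload
    "sort": [],
    "groupBys": {"Host_Name": "Host_Name"},  # placeholder "Compared by" fields
}

# URL-encode the JSON string and append it to the "params" query parameter.
encoded = urllib.parse.quote(json.dumps(compare_params))
url = f"{LUCIDUM_HOST}/CMDB/api/upload/customer/collection_history/export?params={encoded}"

response = requests.get(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=120,
)
response.raise_for_status()

# Save the comparison results; per the manual, they can be kept as an Excel file.
with open("comparison_results.xlsx", "wb") as f:
    f.write(response.content)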