Overview
Tutorials offer guided paths to common objectives while acknowledging that every developer must respond to the individual needs of their own solution. Treat each tutorial as a blueprint to be adapted to fit those needs.
Analytics with Birst
API Gateway
Application Development with Mongoose
Artificial Intelligence
Data Fabric
Digital Assistant
Document Management
Integration with ION
Robotic Process Automation
RPA automates repetitive tasks and empowers your team to focus on what they do best.
Security & User Management
Tutorials that help you leverage Infor's cloud security and user management capabilities.
Analytics with Birst
API Gateway
API Gateway is a software system for brokering requests between API consumers, such as web and mobile applications, and API providers, such as Infor enterprise or third-party services. Because the broker sits between consumers and providers (technically, it is a reverse proxy), it can provide many benefits to both.
Application Development with Mongoose
Go beyond basic fit and customize your cloud experience with extensibility tools that leverage no-code, low-code, and full-code frameworks.
Artificial Intelligence
Use Machine Learning to build an AI into your business processes.
Analytics & Reporting in CloudSuite Service Industries
Analytics & Reporting in M3 CloudSuites
Backend as a Service (BaaS)
Data Fabric
Manage data on the platform so that humans and systems can securely access information from anywhere.
Infor Data Fabric is Infor’s comprehensive, cloud-native data management and processing platform. Its features span big data storage and query interfaces, real-time data delivery, and a suite of advanced data management tooling for structuring the big data generated by Infor’s CloudSuite applications as well as third-party systems and data producers.
Infor Data Fabric centralizes common enterprise data requirements, development patterns, and tooling into a unified suite of tools to maximize data ROI that scales with continued data growth and capture.
At the center of Data Fabric is Infor’s Data Lake, which uses an object storage architecture to provide long-term persistence of data in its raw, original format. CloudSuite applications and third-party applications, tools, and services can replicate transactional and master data for eventual processing and query analysis.
Data movement into and out of the Infor Data Lake is facilitated with a suite of tools and interfaces that include event-driven workflows orchestrated through Infor’s iPaaS platform, API endpoints, JDBC driver, and user-friendly UI interfaces. The Infor Data Fabric application also includes interfaces and tools for data query, exploration, data quality management, data cataloging, business domain metadata management, and development infrastructure.
Key Concepts & Definitions
| Term | Definition |
| --- | --- |
| Data Fabric | Infor’s comprehensive, cloud-native data management and processing platform. |
| Data Lake | Infor’s central big data storage repository, leveraging an object storage architecture to provide long-term persistence of data in its raw, original format. |
| Data Object | Data Lake data is stored as data objects. Data objects are formed from the sent raw data and the data object properties. |
| Atlas | The Data Object Explorer UI experience for viewing and managing Data Lake objects. |
| Compass | A suite of tools that provide data consumers with interfaces for connecting to and processing ANSI SQL queries against data objects stored within the Data Lake. Supported object formats: NDJSON and DSV (CSV, TSV, PSV, or user-defined). |
| Data Lake Flow | A sequence of activities orchestrated by ION that results in sending and/or retrieving Data Lake data. |
| Data Catalog | Stores metadata about data objects that are used within the organization. |
| Data Loader | An application that facilitates the one-time loading of multiple database tables at once directly to Data Lake. |
| Metadata Crawler | A wizard that facilitates generating object metadata for tables, views, and materialized views stored in a database. |
| Data Egress | The outbound transmission of data that traverses Infor’s cloud boundary on request of a user, client, application, or system. |
| Data Ledger | A reconciliation tool that aids data administrators in identifying alignments or potential disparities between applications sending data to Data Lake and the Data Lake itself. |
| Metagraph | A free-form modeling canvas and tool used to define domain-specific resource models. |
Components
| Component | Description |
| --- | --- |
| Application | The Data Fabric application allows you to view, manage, and query the data objects stored in Data Lake. The Data Fabric menu includes these options: Data Lake (Atlas, Compass, Data Ledger, and Purge), Metagraphs, and Utilities. |
| API suite | RESTful APIs for ingesting data into Data Lake, retrieving data from or querying Data Lake, and managing data objects. |
| Homepage widgets | Data Lake Ingestion by Object, Data Lake Ingestion over Time, Data Lake Storage Overview (see also: usage reports), and Atlas Upload. |
| Compass JDBC driver | The Compass JDBC driver can be used to query Data Lake data through a local SQL query tool. |
Want to learn more?
Quick Reference
There is a lot to learn in the Infor Platform, and a quick reference sheet is always helpful. Check out the Data Fabric Cheat Sheet.
Topical videos
Need information on a specific feature or function, or just a quick overview? Short videos may be just what you are looking for. Check out our playlist on YouTube.
Product Documentation
Product documentation is the go-to reference for how specific parts of the product work. For online, searchable, easy-to-understand docs, see this component’s documentation.
Community
Collaborating with others in your industry is a great way to learn and help others. Start participating in the online community today!
Courses
Infor Campus offers Learning Paths that combine video-based and instructor-led teaching. If you are an Infor customer, check out courses on Infor U Campus. We recommend the following courses specifically for this component:
Data Migration from on-premise / hybrid cloud
Overview
Data silos scattered across the enterprise make it cost-prohibitive and technically challenging to harness their value through business intelligence or machine learning.
This is where the Infor Data Lake, an easily scalable and comprehensive central repository for enterprise data, can help solve these challenges.
Components
- Security: ION API App Authorization
- System Integration: Enterprise Connector
- Data Management: Metadata Crawler
- System Integration: Data Loader
Requirements
- Sample database: AdventureWorks
- Access to an Infor CloudSuite
- User privileges to ION Desk (IONDeskAdmin), Data Fabric (DATAFABRIC-SuperAdmin), and ION API Gateway (IONAPI-Administrator)
- Access to install the Enterprise Connector
- Optional Infor U Campus classes:
  - Infor OS: Foundation for Multi-Tenant - Part 1
  - Infor OS: Foundation for Multi-Tenant - Part 2
Tutorial
Difficulty: Medium
Estimated Completion Time: 60 minutes
In this tutorial, we walk through how you can easily migrate on-premise or private cloud data into the Infor Data Lake. In this example, we will be migrating tables from AdventureWorks, an OLTP sample database of a fictitious bicycle parts wholesaler with 300 employees, 500 products, 20,000 customers and 31,000 sales.
1. Download and install the PostgreSQL client.
Note that Postgres is used as an example in this tutorial; you can connect to other database systems as well.
2. Follow these steps to create a local database.
3. Follow these steps to load the sample data into your local database.
4. Install and configure the Enterprise Connector (detailed instructions)
5. Enable Data Loader under the ION Services Preview Features.
6. Create an authorized app in ION API gateway and download the credentials (detailed instructions)
7. Follow the instructions in the video below to set up the Data Loader.
The video goes through how to pull data from the on-premise database, through the enterprise connector, and into the data lake.
https://youtu.be/J-Fv7bHG2tQ?si=DPUpn2SWYxeQ0ymd
Full tutorial on how to use Data Loader and Data Catalog Crawler
Best Practices
When registering objects in Data Catalog using the Metadata Crawler, it is recommended to define the object properties carefully. These include:
- Identifier Paths: The JSON paths to the properties that uniquely identify the object.
- Variation Path: The JSON path to the property that contains the variation value for the object.
- Delete Indicator: The JSON path to the property that indicates whether the object has been marked as deleted.
- Delete Indicator Value: The value that the delete indicator must have to indicate that the object has been marked as deleted.
- Timestamp Path: The JSON path to the property that contains the timestamp of the moment the object was created or updated.
- Archive Indicator: The JSON path to the property that indicates whether the object has been archived in the source application.
- Archive Indicator Value: The value that the archive indicator must have to indicate that the object has been archived.
Specifically, for efficient variation handling in Data Lake, the Identifier Paths and Variation Path must be defined and leveraged properly.
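As a sketch, the Data Catalog properties above could be written out for a hypothetical "SalesOrder" object like this. Every JSON path, field name, and value here is illustrative, not taken from a real Infor schema:

```python
# Illustrative Data Catalog object properties for a hypothetical
# "SalesOrder" object; all paths and values below are made up.
sales_order_properties = {
    "identifierPaths": ["$.OrderID", "$.CompanyID"],  # uniquely identify the object
    "variationPath": "$.Variation",                   # version value for variation handling
    "deleteIndicatorPath": "$.IsDeleted",             # marks logically deleted records
    "deleteIndicatorValue": "true",
    "timestampPath": "$.LastModified",                # creation/update timestamp
    "archiveIndicatorPath": "$.IsArchived",           # marks records archived at the source
    "archiveIndicatorValue": "true",
}

# Efficient variation handling requires both of these to be defined.
assert sales_order_properties["identifierPaths"]
assert sales_order_properties["variationPath"]
```

The pairing of identifier paths with a variation path is what lets Data Lake decide whether an incoming record is a new object or a newer version of an existing one.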
Exploring historical data changes
Overview
Enterprise data is becoming increasingly cloud-native and scattered across different systems of record and storage. This makes it challenging to track how data changes over time, especially once it is transformed and loaded into reporting-ready data warehouses and marts. That presents business and governance challenges when auditing is required, and limits machine learning and data science applications that eventually need historical context.
This tutorial walks you through how to view historical changes to data in the Data Lake using various tools.
Components
Requirements
- Access to an Infor CloudSuite (or any data in Data Lake)
- User privileges for Data Fabric (DATAFABRIC-SuperAdmin)
- Optional Infor U Campus courses:
  - Infor OS: Foundation for Multi-Tenant - Part 1
  - Infor OS: Foundation for Multi-Tenant - Part 2
Tutorial
Difficulty: Easy
Estimated Completion Time: 10 minutes
The Infor Data Lake, as a central repository for enterprise data built on an immutable object storage architecture, can retain the full history of records ingested into it. With built-in data versioning and synthetic functions that expose previous or deleted versions of records, the Data Lake lets you build queries that travel back in time to investigate data changes and records deleted from systems of record, unlocking a host of use cases for data science and machine learning applications.
As data is ingested into the Data Lake, data objects are indexed for future retrieval and querying. A number of properties are added and used to create what we commonly refer to as synthetic columns. These columns exist as queryable platform metadata and can be useful in data processing and exploration. In this scenario, they can be used to query across time and review version changes in data replicated from systems of record.
In this tutorial, we'll focus on tracking historical changes for specific items in the MITBAL table which comes from Infor M3. The table contains item details per warehouse. This table is part of a provisioned replication set for CloudSuite tenants where changes to the table are replicated to Data Lake at preset intervals.
1. Explore the various object properties in Atlas
Before we begin, navigate to Data Fabric > Data Lake > Atlas:
Navigate to Atlas and select the table MITBAL
As you can see below, the table consists of multiple data objects that were incrementally loaded into Data Lake. Selecting any one of these objects allows you to preview its contents on the panel to the right or explore its properties.
Data Object properties in Atlas
2. Familiarize yourself with query hints and synthetic functions in Compass
Exposing these properties in a query-setting can be done in Compass through the use of a SQL query hint accompanying a SELECT statement:

Using the expression "--*includeInSelectAll=s/g/p/l" before the SELECT statement exposes and appends the columns shown in the following image to the result set. Note that the syntax starts with two hyphens followed by an asterisk.
Query results with --*includeInSelectAll=s/g/p/l
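In code, the hint is simply a line prepended to the query text. A minimal sketch, using the MITBAL table from this tutorial:

```python
# The Compass query hint that exposes synthetic columns (s/g/p/l)
# is prepended as its own line before an ordinary SELECT.
HINT = "--*includeInSelectAll=s/g/p/l"

sql = f"{HINT}\nSELECT * FROM MITBAL LIMIT 10"
print(sql)
```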
Alternatively, and specific to our time-based analysis requirement, you can run a simplified SELECT statement and append the infor.lastmodified() function to the list of columns you'd like returned:
Query to show infor.lastmodified column
3. Use the infor.lastmodified function to visualize historical changes to records
We now introduce infor.allvariations() as another synthetic function in our query. infor.allvariations() enables you to select all variations, including lower variations and variations marked as deleted. We can also add a WHERE clause to focus our auditing on a specific item in the table and order our results by their lastmodified date:
Visualizing historical variations for specific records
4. Filter queries using infor.lastmodified()
The lastmodified column and function can also be used to filter your data while querying, resulting in much more efficient queries.
For example, if I’m only interested in querying data lake objects that landed in the data lake since yesterday, I can run my query with a WHERE clause limiting the scope of lastmodified to dates starting on that date. Keep in mind that while the lastmodified property is returned as a UTC timestamp in ISO 8601 format with three-digit milliseconds, you can use a date or a timestamp without milliseconds in your WHERE clause against it:
Filtering query results with infor.lastmodified()
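The filtering pattern above can be sketched as a small query builder. The table name and date threshold are illustrative, and the query text is a sketch of the Compass SQL described in this tutorial:

```python
from datetime import date, timedelta

def changes_since(table: str, since: date) -> str:
    """Build a Compass-style SELECT that exposes the synthetic
    infor.lastmodified() timestamp and filters on it (table name
    and threshold are illustrative)."""
    return (
        f"SELECT *, infor.lastmodified() AS lastmodified\n"
        f"FROM {table}\n"
        f"WHERE infor.lastmodified() >= DATE '{since.isoformat()}'\n"
        f"ORDER BY lastmodified DESC"
    )

# Objects that landed in the data lake since yesterday:
yesterday = date.today() - timedelta(days=1)
print(changes_since("MITBAL", yesterday))
```

Note that a plain DATE literal works in the comparison even though the property itself carries millisecond precision.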
Additional Resources:
This video provides an overview of the infor.lastmodified() function covering most of the concepts included in this tutorial:
https://youtu.be/5Ic5zqz5EtM?si=zOlnCIAc8exHYP5t
Overview of infor.lastmodified() query function
External access to Data Lake APIs
Overview
API access is vital for managing access to services in a cloud environment. Organizations looking to leverage the Data Fabric API suite to programmatically query and retrieve datasets from the Infor Data Lake can do so through any number of available tools and development platforms.
Postman is an example of a platform that facilitates testing and scripting around APIs.
Components
- Data Management: Compass APIs
- Security: ION API App Authorization
Requirements
- Access to an Infor CloudSuite
- ION API Gateway (IONAPI-Administrator)
- Postman desktop application
- Optional Infor U Campus classes:
  - Infor OS: Foundation for Multi-Tenant - Part 1
  - Infor OS: Foundation for Multi-Tenant - Part 2
  - Infor OS: Configuring and Administering ION API
Tutorial
Difficulty: Easy
Estimated Completion Time: 15 minutes
This tutorial covers the necessary steps in setting up and testing the Data Fabric Compass APIs within Postman.
1. Download and install Postman.
2. Set up external access to Infor Data Lake with APIs
The steps to follow are:
- Create an authorized application in the CloudSuite tenant
- Identify the Data Lake APIs needed for the use case
- Set up an API manager (Postman in this example) to call the APIs
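Outside Postman, the same access can be scripted. As a sketch, the OAuth2 token request can be assembled from the exported credentials file, assuming the .ionapi file's conventional field names ("pu" = provider URL, "ot" = token endpoint suffix, "ci"/"cs" = client id and secret) and the client-credentials grant; backend service apps may instead use a different grant (see the OAuth resource below), so treat this as illustrative:

```python
from urllib.parse import urlencode

def build_token_request(ionapi: dict):
    """Assemble an OAuth2 token request from .ionapi credentials.
    Assumes the conventional .ionapi fields: pu (provider URL),
    ot (token endpoint suffix), ci/cs (client id/secret)."""
    url = ionapi["pu"].rstrip("/") + "/" + ionapi["ot"]
    body = urlencode({
        "grant_type": "client_credentials",   # illustrative; other grants exist
        "client_id": ionapi["ci"],
        "client_secret": ionapi["cs"],
    })
    return url, body

# Shaped like an exported .ionapi file; all values are placeholders.
sample = {"pu": "https://mingle-sso.example.com/TENANT/as/",
          "ot": "token.oauth2", "ci": "CLIENT_ID", "cs": "CLIENT_SECRET"}
url, body = build_token_request(sample)
print(url)
```

The returned bearer token is then sent in the Authorization header on subsequent Data Lake API calls.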
The following video provides an overview of the Data Fabric Compass APIs and walks through the process of creating an authorized app in ION API, exporting the generated credentials for use with Postman, and creating and managing Compass queries directly within Postman.
https://youtu.be/m4d0awIQ6Ag?si=dUlKXhK9g0gOu7i_
Full tutorial on how to test external API access with Postman
Best Practices
Select an optimal page size for downloading query results based on your hardware constraints and network timeouts; this also lets you re-retrieve a portion of the result in the event of a connection failure. Keep in mind that the result of a query is static and does not change when data is added or updated in the Data Lake. The page size is based on rows, with a maximum of 100,000 rows and a 10 MB size limit. The version 2 Compass APIs use OFFSET and LIMIT parameters for retrieving the result set. Query results are usually available for roughly 20 hours after the query reaches the FINISHED status.
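The OFFSET/LIMIT paging described above amounts to walking fixed-size windows over a static result set. A minimal sketch of the window arithmetic (the per-page size limit in MB would still need to be respected separately):

```python
def page_windows(total_rows: int, page_rows: int = 100_000):
    """Yield (offset, limit) pairs that cover a static Compass result
    set page by page, respecting the 100,000-row page maximum."""
    offset = 0
    while offset < total_rows:
        yield offset, min(page_rows, total_rows - offset)
        offset += page_rows

# A 250,000-row result retrieved in three pages: 100k, 100k, 50k.
print(list(page_windows(250_000)))
```

Because the result set is static, a failed page can simply be requested again with the same offset and limit.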
You can learn more about the Compass API paging feature in this walkthrough:
https://youtu.be/JkthU7-K0jw?si=n0ShTagHvqGtZbOA
Resources
How to call an API using various Oauth2.0 authentication grants
Ingest Excel files into the Data Lake
Visualizing Data Lake data in Tableau
Overview
While queries can be run directly in the Data Lake Compass user interface or through a local SQL client via the JDBC driver, visualizing data sets can go a long way in supporting data exploration. In machine learning applications particularly, the first step is usually to get a sense of the data profile and perform full exploratory analysis before building a model.
This tutorial walks you through how to set up a connection from Tableau to the Data Lake. The process is similar for other third-party BI tools.
Components
- Security: ION API App Authorization
- Data Management: Atlas & Compass
- System Integration: Compass JDBC Driver
Requirements
- Access to an Infor CloudSuite
- User privileges to ION Desk (IONDeskAdmin) and ION API (IONAPI-Administrator)
Tutorial
In this tutorial, we'll use Tableau to connect to Data Lake through the JDBC driver in order to explore and visualize data from a view created in an earlier tutorial as an example.
1. Download the Compass JDBC driver:
The first step is to download the latest Compass JDBC driver from ION Desk in your tenant by navigating to ION Desk > Configuration > Downloads > Files > Compass JDBC Driver.
Downloading the Compass JDBC driver
2. Register the Compass JDBC driver in Tableau:
The downloaded Compass JDBC driver must be saved in the Tableau Drivers folder to unlock access to this driver in the connection options. For more information on where to find this folder and how to connect to custom JDBC drivers, refer to the Tableau documentation here.
3. Create and download security credentials for the JDBC connection:
Next, under Infor ION API > Authorized Apps, create an authorized backend-service app for the JDBC driver, or re-use an existing one, and download the *.ionapi file containing your service account credentials.

Always contact your security or infrastructure team before creating an authorized app with credentials.
Downloading the ionapi credentials
The JDBC connection can be configured with a simplified string authentication approach which embeds these credentials into a single URL string in this format:
jdbc:infordatalake://?ionApiCredentials=Encoded_String
Where Encoded_String is the URL-encoded contents of the *.ionapi credentials file.
4. Encode the URL ionapi content for the URL authentication string:
The URL encoding can be produced with various encoding tools or text editors. In this example, we're using the open-source text editor Notepad++ as demonstrated below.
Encoding the ionapi content using Notepad++
The resulting encoded string should start with "%7B", representing the opening brace "{", and end with "%7D", the closing brace "}".
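Instead of a text editor, the encoding can also be produced programmatically. A sketch using Python's standard library; the credentials content here is a placeholder, not a real .ionapi file:

```python
import json
from urllib.parse import quote

def build_jdbc_url(ionapi_text: str) -> str:
    """Percent-encode raw .ionapi file content and embed it in the
    simplified Compass JDBC connection string."""
    json.loads(ionapi_text)  # fail fast if the credentials are not valid JSON
    return "jdbc:infordatalake://?ionApiCredentials=" + quote(ionapi_text, safe="")

# Placeholder credentials; a real file contains more fields.
placeholder = '{"ci": "CLIENT_ID", "cs": "CLIENT_SECRET"}'
url = build_jdbc_url(placeholder)
print(url[:60])
```

As described above, the encoded portion starts with %7B and ends with %7D, since the file content is a JSON object.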
5. Configure the JDBC connection in Tableau:
In Tableau, initiate the connection under Connect > To a Server > Other Databases (JDBC), then paste the full URL string into the URL field and sign in.
Establishing a connection to Data Lake in Tableau
6. Select schema and explore tables in Tableau:
Once connected, you'll be able to access a database-like view of Data Lake where you can select the Default schema to browse the available queryable data objects.
Selecting the Default schema
Browsing and exploring the available tables
Next, we can select one or multiple tables to preview and build a data model.
7. Preview and model data in Tableau:
In this example, we're focusing on a single table from a previous tutorial where we have a sample product catalog dataset.
Exploring and previewing a data set from a select table
8. Create visualizations in Tableau from the loaded data:
With the data in place, we can proceed to build a sample visualization in a new worksheet. In the example below, we are looking at a simple statistical distribution of the unit prices across our product catalog dataset.
Creating a visualization to look at the unit price statistical distribution
Digital Assistant
Streamline the user experience with a digital assistant that helps your employees navigate and access information by voice or chat.
Document Management
Leverage Infor's central document repository to manage your enterprise documents in the cloud.
Governance, Risk and Compliance
Integration with ION
Create a unified application topology using the integration hub in the Infor Platform.
Portal and Workspaces
Robotic Process Automation
RPA automates repetitive tasks and empowers your team to focus on what they do best.
Security & User Management
Tutorials that help you leverage Infor's cloud security and user management capabilities.