Overview

Tutorials offer guides for completing common objectives while acknowledging that every developer must respond to the individual needs of their own solution. Tutorial frameworks provide blueprints for these solutions and should be adapted to fit your needs.

Sections
Analytics with Birst

API Gateway

Application Development with Mongoose

Artificial Intelligence

Data Fabric

Digital Assistant

Document Management

Integration with ION

Robotic Process Automation


Security & User Management



Analytics with Birst


API Gateway

API Gateway is a software system for brokering requests between API consumers, such as web and mobile applications, and API providers, such as Infor enterprise or third-party services. As a broker that sits between consumers and providers (technically, a reverse proxy), it can provide many benefits to both sides.
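To make the brokering idea concrete, here is a toy reverse proxy in Python's standard library. This is purely illustrative of the pattern, not how API Gateway is implemented; the provider address and port numbers are assumed placeholders.

```python
# Toy reverse proxy: the consumer calls the gateway, which forwards the request
# to a provider and relays the response back. Host/port values are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PROVIDER_BASE = "http://localhost:9000"  # hypothetical upstream API provider


class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Broker the request: forward the consumer's path to the provider,
        # then relay the provider's status and body back to the consumer.
        with urlopen(PROVIDER_BASE + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


def serve(port: int = 8080) -> None:
    # Consumers would point at this gateway port instead of the provider.
    HTTPServer(("", port), GatewayHandler).serve_forever()
```

Because every request passes through the broker, this is the natural place to add authentication, rate limiting, and routing, which is where a real gateway earns its keep.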


Application Development with Mongoose

Go beyond basic fit and customize your cloud experience with extensibility tools that leverage no-code, low-code, and full-code frameworks.


Artificial Intelligence

Use machine learning to build AI into your business processes.

Machine Learning (ML) is a category of Artificial Intelligence (AI), computer science, and mathematics that focuses on using data and algorithms to train models and use those models to make predictions on new and unseen data. Infor AI opens up the world of machine learning to a wider array of business users through its visual modeler, allowing for deployment of models with low-code and no-code implementations.

Key Concepts & Definitions

  • Infor AI: Infor's machine learning platform, a component of Infor OS.
  • Quest: A flow of activities that make up a machine learning model.
  • Training Quest: A quest involving a predictive method that produces a trained model.
  • Trained Model: A model that can be used to predict outcomes based on new data.
  • Production Quest: A quest built from a trained model and prepared for deployment.
  • Endpoint: The REST API access point of a real-time production quest, used to process new data through the model. Endpoints can be passed a CSV or a JSON message, or accessed via the API Gateway.
  • Data Lake: A flexible and economical cloud object storage solution where data is stored in its raw format. This is Infor AI's primary data source.
  • Label/Target: Interchangeable terms for the value the model is trained to predict.
  • Categorical: Data that is non-numeric in nature and belongs to a category instead, e.g. "Country of Residence".
  • Supervised Learning: Machine learning algorithms that form relationships between a target label and input features so that output values for unseen data can be predicted. Supervised algorithms must be trained on known outcomes.
  • Unsupervised Learning: Machine learning algorithms that make inferences from data using only input features, without referring to known or labelled outcomes. These algorithms can discover structure by clustering the data into intuitive groups.

Best Practices

A machine learning project offers a lot of flexibility and user control, but it is generally accepted that an ML workflow should follow the life cycle below.

Business Case Definition

Before starting a machine learning project, it is best to step back and define the business problem at hand. It takes some finesse to match business problems with the appropriate algorithms and the appropriate data; most business problems fall into a small number of common categories.

Fetching Data

The enterprise can produce incredible amounts of data from different systems. For use in the Infor AI platform, data should be managed and stored in the data lake.

Data Preparation

Data preparation tasks can be numerous and time-consuming. It is recommended that you be familiar with tidy data standards when cleaning data, and that you understand any caveats of the particular algorithm you might use to process the data. Infor AI provides the following built-in tools for data manipulation and preparation, as well as the ability to run Python scripts for complete customization.

  • Select Columns: Select or exclude a subset of columns from the current dataset.
  • Remove Duplicates: Remove duplicates in selected features.
  • Construct Feature: Create a new feature out of the existing ones by using mathematical, logical, or casting operations.
  • Index Data: Transform categorical values into numeric for the selected columns. Each category is assigned a number according to its occurrence in the data, with the highest occurrence assigned 0.
  • Smooth Data: Remove noise from a dataset to allow natural patterns to stand out.
  • Split Data: Split the dataset into training data and test data by specifying the split ratio for the training dataset.
  • Scripting: Execute a customized Python script to perform an activity that is not available in the catalog.
  • Ingest to Data Lake: Ingest data to Infor Data Lake.
  • One Hot Encoder: Transform categorical features into a binary matrix (vectors) to distinguish each categorical label. The vector consists of 0s in all cells, with the exception of a single 1 in a cell that uniquely identifies the label.
  • Feature Scaling: Scale features with varying magnitudes, units, and ranges into normalized values.
  • Handle Missing Data: Replace missing values in selected features (with mean, mode, a constant value, or interpolation), or remove any row exceeding a selected ratio of missing data.
  • Target Encoder: Numerization of categorical variables via the target. Replaces each categorical variable with a single new numerical variable, substituting each category with the corresponding probability of the target (if the target is categorical) or average of the target (if numerical).
  • Edit Metadata: Select the target label. Edit the metadata of the selected features by changing their data type, tagging categorical values, changing the variable name, or defining their machine learning type.
  • Balance Data: Balance the dataset using undersampling or oversampling methods.
  • Execute SQL: SQL operations (filter out data, join datasets, aggregate data, etc.).
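For intuition, the effect of a couple of these blocks can be sketched outside the platform with pandas. This is an illustrative analogy, not the platform's implementation, and the toy dataset is invented.

```python
import pandas as pd

# Toy dataset standing in for data pulled from the Data Lake (illustrative only).
df = pd.DataFrame({
    "Country": ["US", "DE", "US", "FR", "US", "DE"],
    "Amount":  [10.0, 12.5, 9.0, 14.0, 11.0, 13.5],
})

# "Index Data": map each category to a number by frequency,
# with the highest-occurrence category assigned 0.
order = df["Country"].value_counts().index
df["Country_idx"] = df["Country"].map({cat: i for i, cat in enumerate(order)})

# "Split Data": split into training and test sets by a given ratio.
train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)

print(df["Country_idx"].tolist())
```

The same idea scales to the other blocks: each one is a well-defined transformation on the working dataset, applied in sequence within the quest.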

Model Training

Training a model requires the prepared dataset and the algorithm to be used in training. Supervised algorithms can be scored for accuracy using the train/test split functionality and the score and evaluate model blocks. The compare model block will allow for the training of multiple models to compare performance statistics.

Model Fitting & Tuning

Algorithm hyperparameters are available in each of the algorithm blocks. The specific parameters differ depending on algorithm selection, and details for each hyperparameter can be found in the documentation of the chosen algorithm.

Model Deployment

In the quest, select the checkbox on activities desired for the deployed model and push the activities to the production quest. This quest can be deployed as an endpoint accessible via the ION API gateway.

Model Maintenance

Want to learn more?

Quick Reference

There is a lot to learn in the Infor Platform. A quick reference sheet is always helpful. Check out this Cheat Sheet.

Topical videos

Need information on a specific feature or function? Or how about a quick overview? Then short videos may be just what you are looking for. Check out our playlist on YouTube.

Product Documentation

Product documentation is the go-to reference for how specific parts of the product work. For online, searchable, easy-to-understand docs, see this component’s Documentation.

Community

Collaborating with others in your industry is a great way to learn and help others. Start participating in this component’s online Community.


Automated Retraining

Overview

Machine learning models are trained on the data fed to them during training. That data is dynamic: business conditions, economies, and sales figures change rapidly. Having your models periodically and automatically retrain with up-to-date data keeps them from becoming stale, ensuring that their prediction accuracy does not weaken over time.

This tutorial covers how to set up automated retraining of your machine-learning models.


Components

Requirements

  • Access to an Infor CloudSuite
  • Infor AI Security Roles:
      • COLEMANAI-User
      • COLEMANAI-Administrator
  • User privileges to ION workflow
  • An AI quest that:
      • Draws data from Data Lake
      • Runs successfully
      • Has a deployed Endpoint
  • A dataflow that updates the data in the Data Lake

Tutorial

Difficulty: Easy
Time estimate: 30 minutes

In this tutorial, we will create a workflow in ION that periodically runs an automatic model update for the desired Infor AI model. Before we get started with the actual workflow, we need to download service account credentials: in order for the ION workflow to later call the Infor AI API, a service account must be created.

1. Service Account Credentials Download

  • Navigate to the security section of Infor OS: use the menu on the left (hamburger menu) and go to Manage > Service Accounts.
  • Click the + for Add new item.
  • Add your username to the username field.
  • Save the Service Account Credentials locally with the save icon. It will download a CSV file named "Service Account" that includes the tenant name and a unique identifier.

2. Navigate to ION

Automated retraining will take place in Infor ION. Navigate to ION in Infor OS and using the side menu select Monitors & Workflows > Workflows.

3. Create a new workflow

Use the interface to create a new workflow by clicking +Add. Certainly, you could add these steps to larger existing workflows, but we are going to model this in a new workflow that will just be focused on the retraining.

4. Add AI Task

Drag the AI task from the pool of available workflow activities to the workflow.

5. Settings

Click on the AI block to access its properties. In the AI task dropdown, select "Reload Datasets" for the first activity. Identify the quest name and import the service account file we created in step one. The quest name serves as the identifier for all activities, so even if you are reloading data sets select the quest that those datasets are called from. Rename the activity as appropriate.

6. Test Configuration

Move to the "Test" hub and click the test button. This should tell you if the AI task, quest name, and service account are valid and agree with each other.

7. Error Handling

In the settings tab, specify how you want errors handled. You can optionally choose to continue the workflow and send error messages; however, for retraining it makes more sense to select "The Workflow Fails", as it doesn't make sense to continue with retraining activities if the data has failed to update.

8. Repeat for each of the four retraining activities.

There are four options available in an AI task dropdown. Executing these tasks in succession completes a retraining cycle. Create one activity for each of the available tasks, in this order:

  • Reload Datasets
  • Retrain Model
  • Update Production Quest
  • Redeploy Endpoint

Once completed, your workflow should contain the four AI activities in sequence, and the error/notification bubbles should all be resolved. Be sure to save the workflow.

10. Schedule retraining

Back in the left-hand ION navigation menu, under Monitors & Workflows, navigate to Workflow Schedules. Create a new workflow schedule with the "+Add" button and give it a name and description. Use the schedule panel to specify when and how often the workflow should run. In the action panel, find the workflow you created above and select it in the dropdown. In the "Trigger Workflow Instance" dropdown, select the option to skip if the workflow is already running.

11. Activate

Using the toolbar at the top, save the workflow schedule and click activate. Your workflow is now configured for automated retraining and will trigger at the specified time. To verify that retraining occurred, examine the quest in Infor AI. There will be two locations that tell you the quest was triggered from ION Workflows, the quest tile on the quest home screen, and the status messages on the left-hand side of the quest when you open it.


Customer Segmentation

Overview

Treating all of your customers the same is not usually a great business strategy. Customers can be very different, but with large numbers of customers there are bound to be groups of similar ones. By segmenting a customer set into groups, business decisions can be made to address different types of customers appropriately.

Scenario: A fictitious big retail chain operates a membership-model business where customers pay an annual membership fee for access to its warehouse-like stores and great prices on bulk items.

Data: Being a membership business model, customer data is abundant in this scenario. Data includes demographic data on each customer, seasonal spending data, coupon history, and purchase history. This data is contained in multiple systems and multiple tables across the enterprise. There will need to be a data consolidation effort to prepare the relevant information for the machine learning model. We will walk through the implementation with this sample customer dataset.

Mall Customers Dataset


Components

Requirements

  • Access to an Infor CloudSuite
  • Infor AI Security Roles:
      • COLEMANAI-User
      • COLEMANAI-Administrator
  • Mall Customers download

Tutorial

Difficulty: Medium
Estimated time: 60 minutes

Step 1: Create one or more dataset(s).

Using multiple datasets is perfectly fine: your quest can refer to each dataset individually, get what it needs, and merge or manipulate the data into its final form, ready for the ML model. We will import this CSV file for use with our sample, but your business data will likely live in the data lake.

https://youtu.be/M_hPQa_GcRU?si=BxCmrwmTFGOSIPwA

Here you can see Infor AI giving us a preview of the dataset.

Step 2: Save the quest

In the Quests section of Infor AI, create, name, and save your new quest. This will create the blank canvas for your model to be developed.

Step 3: Import Data

Use an Import Data block, and within that block select the dataset you made in the prior steps. Once saved and run, you can see the current state of the data at the output node on the right-hand side of any given activity in the platform. Exploring your data is important: finding the oddities in the data now helps you know which data preparation steps are necessary. In my exploration of this sample dataset, I noticed the following:

  • CustomerID is a unique identifier. It provides no information about the customer or their spending habits and would not be a useful predictor of future behavior. We should not include CustomerID as a factor when creating customer groups, or customers might be incorrectly grouped together because of their arbitrary position in the dataset.
  • Gender and Region in this dataset are nominal (non-ordered) categorical variables. Categorical variables need to be encoded regardless of how many items are in the categorization.
  • Annual_Income and Quarterly_Income have a high correlation. In fact, Quarterly_Income appears to be Annual_Income divided by 4, creating a perfect correlation. Avoid using all of a set of strongly correlated variables, as doing so can overweight the importance of the underlying feature.
  • Last_Visit_Weeks has missing values, possibly because a new membership was created before the customer used it. We will need to address this before using the feature.
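These checks can also be reproduced outside the platform. A sketch with pandas follows, using a few invented rows shaped like the sample dataset rather than the real file.

```python
import pandas as pd

# A few illustrative rows shaped like the Mall Customers sample (values invented).
df = pd.DataFrame({
    "CustomerID": [1, 2, 3, 4],
    "Gender": ["Male", "Female", "Female", "Male"],
    "Annual_Income": [40000, 60000, 52000, 48000],
    "Quarterly_Income": [10000, 15000, 13000, 12000],
    "Last_Visit_Weeks": [2.0, None, 5.0, 1.0],
})

# Perfect correlation between the incomes (one is exactly the other divided by 4).
corr = df["Annual_Income"].corr(df["Quarterly_Income"])

# Count missing values per column to find features that need attention.
missing = df.isna().sum()

print(corr, int(missing["Last_Visit_Weeks"]))
```

A correlation of 1.0 confirms that keeping both income columns would double-count the same information, and the missing-value count flags Last_Visit_Weeks for cleanup.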

Step 4: Cleanse data

Using the green prepare data blocks, you can engage in the data cleaning activities required to get your data ready for consumption by a model.

https://youtu.be/9OT5Oa7EX38?si=NsluGFRUUCcKwNVW

This blueprint addresses the bullet points in step 3 with the following decisions:

  • CustomerID will be dropped.
  • Gender and Region will be encoded with one-hot encoding. Features must first be identified as categorical.
  • Quarterly_Income will be dropped.
  • Last_Visit_Weeks missing values will be set to zero.
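The same cleansing decisions can be sketched in pandas (invented toy rows; in the platform, the green prepare data blocks do the equivalent work):

```python
import pandas as pd

# Toy rows shaped like the sample dataset (values invented for illustration).
df = pd.DataFrame({
    "CustomerID": [1, 2, 3],
    "Gender": ["Male", "Female", "Female"],
    "Region": ["North", "South", "North"],
    "Annual_Income": [40000, 60000, 52000],
    "Quarterly_Income": [10000, 15000, 13000],
    "Last_Visit_Weeks": [2.0, None, 5.0],
})

df = df.drop(columns=["CustomerID", "Quarterly_Income"])   # drop id and redundant feature
df["Last_Visit_Weeks"] = df["Last_Visit_Weeks"].fillna(0)  # missing values set to zero
df = pd.get_dummies(df, columns=["Gender", "Region"])      # one-hot encode categoricals

print(sorted(df.columns))
```

After these steps every column is numeric, which is what a clustering algorithm expects as input.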

Step 5: Select a model.

Understanding which model to use and what its unique requirements are takes a bit of subject matter knowledge.

For this clustering project, we would select the K-Means clustering algorithm, or develop our own clustering algorithm using the custom algorithm functionality. Add the K-Means block or the custom algorithm block to the quest.

https://youtu.be/0BHzyb5ekvE?si=pZlztr0BlHYNZN6m

Step 6: Set hyperparameters.

In the K-Means algorithm block, let the algorithm know how many columns you will be feeding it, and how many clusters should be created. You may optionally tweak other hyperparameters for performance tuning. This blueprint would set the Feature Dimensionality to 12 and K to 5.

Step 7: Train the model.

Use a Train Model block and pass in two inputs: the data that has been prepared and the algorithm chosen.

If successful, the node on the right of the Train Model block should contain information on the centroids of each cluster. These centroids have no direct practical meaning; they are multi-dimensional locations, and each customer is assigned to whichever centroid it is closest to.

Step 8: Score the model.

Because this is an unsupervised learning application, there is no "truth", so no accuracy metrics are created. Instead, passing the model to the Score Model block takes the customers and assigns each one to a cluster. The node on the right of the Score Model block shows the dataset with a new column identifying which cluster each customer belongs to.
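As an outside-the-platform analogy, scikit-learn's KMeans produces the same kinds of outputs as the Train Model and Score Model blocks: centroids, a cluster assignment per customer, and a distance to the nearest centroid. The data below is synthetic and two-dimensional; the real quest would use the 12 prepared features.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-D "customer" features: three loose groups around different centers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, size=(20, 2)) for loc in (0, 5, 10)])

# Fit K-Means with K=3 clusters (the blueprint in the text uses K=5 on 12 features).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

labels = kmeans.predict(X)               # cluster assignment per customer
dists = kmeans.transform(X).min(axis=1)  # distance to the nearest centroid

print(labels[:5])
```

The pair (label, distance) mirrors the two values the deployed endpoint returns for a new customer.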

Step 9: Deploy the model.

At the right of the toolbar, there is a checkmark for auto-selecting activities for real-time production. Use this option if you wish to have a deployed endpoint accessible via API; batch production is a better option for processing a large dataset in a single task. After using the auto-selection tool, or making sure the checkbox on each activity in the quest is checked, save the quest, then create a production quest.

The production quest will look different, and that's okay. Save the production quest, run it, and you will then be able to deploy it as an endpoint at the right of the toolbar. Follow the prompts to name and deploy the endpoint.

Step 10: Test the endpoint.

Navigate to the Endpoints section of Infor AI. Open your newly deployed endpoint and feed it some test data, either as a file (CSV) or as a JSON message. Once a prediction is made, the endpoint returns two values: an integer representing which customer group the new customer belongs to, and a value representing the "distance" of that customer from the cluster center.

Step 11: Test the API

To access the endpoint via API, use the app switcher to navigate to Infor ION API, where you will find Infor AI Platform in the set of available APIs. You can use this API endpoint to query the model from applications or widgets.


Implementing Custom Algorithms

Overview

Infor AI tools provide out-of-the-box algorithms that are common in machine learning. For various reasons, you have decided that the out-of-the-box models don’t deliver the functionality you are looking for, or you believe the performance can be improved on. Now, you wish to develop your own custom algorithm to improve your performance metrics.


Components

Requirements

  • Access to an Infor CloudSuite
  • Infor AI Security Roles:
      • COLEMANAI-User
      • COLEMANAI-Administrator

Tutorial

Difficulty: Medium
Estimated completion time: 20 minutes

Step 1: Choose your development environment

You may code in a local environment or workspace if you prefer; if you develop outside the platform, you'll need to package the files according to the documentation in the instructions tab, so ensure you understand the requirements described there. Otherwise, open the Custom Algorithms section and add a new custom algorithm from the home screen. Select "Open Notebook" and give the algorithm a name before saving to enable the notebook.

A look at using Jupyter Notebooks in Infor AI can be seen here:

https://youtu.be/VBHsQ-A8rfI?si=cP5bIEmB5F8ylObT

Step 2: Understand your requirements

To create a custom algorithm, you must create two essential scripts and a set of hyperparameters. These items are treated differently based on your choice in step 1. The scripts are:

  • train: The program that is invoked to train the model.
  • predictor.py: The program that implements the Flask web server that can be called to get predictions from the trained model.
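As a rough sketch of what predictor.py might look like, here is a minimal Flask server with a health-check route and a prediction route. The exact route names and payload contract are defined by the platform's instructions tab; the /ping and /invocations routes below follow a common model-serving convention and are an assumption here, as is the payload shape.

```python
import json

from flask import Flask, Response, request

app = Flask(__name__)
model = None  # a real predictor would deserialize the model produced by `train`


@app.route("/ping", methods=["GET"])
def ping():
    # Health check: the serving infrastructure calls this to see if we are up.
    return Response(status=200)


@app.route("/invocations", methods=["POST"])
def invocations():
    # Parse the incoming payload and return one prediction per input row.
    payload = request.get_json(force=True)
    rows = payload.get("instances", [])
    preds = [0 for _ in rows]  # placeholder: replace with model.predict(rows)
    return Response(json.dumps({"predictions": preds}), mimetype="application/json")
```

The serving infrastructure, not your script, is responsible for launching the web server, so the file only needs to define the app and its routes.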

Step 3: Understanding the interface

There are several areas of this interface in which your development will take place.

The Playbook:

The playbook tab is an optional coding space where a Jupyter Notebook interface lets developers code and experiment in a sandbox-style workspace. Creating a new notebook gives you the standard Jupyter Notebook interface. You can view the available Python packages and their versions with a “!pip list” command.

Train and Predictor Tabs

The Train and Predictor tabs are spaces ready to receive the final train and predictor.py scripts for deployment. These spaces have no testing capabilities, so it is best practice to develop and test your scripts in the playbook tab and copy them to the Train and Predictor tabs when they are complete.

Hyperparameters Tab

The hyperparameters tab lets you individually add hyperparameters and their properties directly to the grid. You may optionally import a CSV with this information, which will then be displayed in the table and editable from there.

Datasets

The datasets section will expand from the left bar, allowing you to bring datasets already staged in Infor AI into the Jupyter Notebook environment.

Instructions

More instructions and sample files for easy access.

Step 4. Prepare your data

If you haven’t already, stage your data in Coleman.

https://youtu.be/M_hPQa_GcRU?si=qDXRAGZUhaqRfo7f

Expand the datasets panel on the left side of the JupyterHub interface. Type the name of your loaded dataset and find it in the list. Select Load to see it appear in the list of loaded datasets.

Once you have loaded a dataset, a new folder named “datasets” will appear in the JupyterHub directory.

Use the folder structure to locate your dataset (and any subfolders it might live in), then write an import command to bring the dataset into the kernel for use, such as read_csv in pandas:
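For example (the file name below is a placeholder for your own dataset; adjust the path to match the folders you see):

```python
from pathlib import Path

import pandas as pd

# Loaded data appears under a "datasets" folder in the notebook working
# directory; the file name here is a hypothetical placeholder.
data_path = Path("datasets") / "Mall_Customers.csv"
if data_path.exists():
    df = pd.read_csv(data_path)
    print(df.head())
```

From here the DataFrame is available in the kernel for exploration and model experiments.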

Step 5: Write your code!

Unleash your creative juices. Code, test, recode, and compare models. Shape your code into the train and predictor scripts and copy the appropriate code to their respective tabs.

Step 6. Import Hyperparameters:

Add your hyperparameters to the grid in the Hyperparameters tab and set their attributes (default values).

Step 7. Package and Deploy:

Package and deploy the custom algorithm using the Package Files and Deploy buttons at the top right of the custom algorithms interface.

Step 8: Use the algorithm in a quest

When building a quest, the custom algorithm activity block will now let you access the deployed package.


Incremental ML Model Training

Overview

In machine learning, initial model training usually requires large datasets to ensure reliable predictive models. Training is naturally resource-intensive and can be very time-consuming. In addition, a model's life cycle does not conclude with its initial creation -- it must be regularly retrained with the latest data to stay accurate and responsive to evolving environments. Incremental training lets users efficiently update existing machine learning models with new data, enhancing performance without retraining from scratch and significantly reducing the computational resources and time required to retrain.


Components

Requirements

  • Infor AI Security Roles:
      • COLEMANAI-User
      • COLEMANAI-Administrator
  • A working Infor AI quest using a custom algorithm

Tutorial

Difficulty: Easy
Estimated completion time: 10 minutes

1. Save a custom algorithm

A saved custom algorithm is a requirement for incremental training. In an existing quest, go to the "Train Model with Custom Algorithm" block and make sure you have checked "Save Model." Once the quest has been executed, the model will be stored in the Models Library.

2. Create the incremental dataset

Create a dataset in the Data Collection > Datasets section of the platform containing only the new and incremental data. For this activity, you are likely creating a new dataset from the data lake.

3. Update your quest data source

Go back to your quest and change the source of the data to the new set of incremental data.

4. Use the Input Model activity

Add an Input Model block to the Quest. In the sidebar, select the saved model to be used, and feed the activity into the train custom algorithm block.

5. Update the saved model (or don't)

You may or may not want to save the new model after retraining. Return to the "Train Model with Custom Algorithm" block and check "Save Model" if you'd like the new model saved, or leave it unchecked if you do not wish to save new versions of the model.

6. Run the training quest

Save and run the quest. Once the execution is finished, your machine-learning model will be updated to include the new data!


Optimizing Assortment Selection

Overview

Optimization refers to a field of applied mathematics that uses mathematical techniques and algorithms to find the best solution to a given problem. Given a scenario with budget constraints and space constraints, a set of items needs to be chosen for placement on the shelves for sale. Working within these constraints, a maximum profit is desired. In this tutorial, we will address this assortment problem and find out which subset of items will make the most profit within the given constraints.


Components

  • Infor OS Platform
  • Infor AI

Requirements

  • Access to an Infor CloudSuite
  • Infor AI provisioned in the CloudSuite
  • Security Roles:
      • COLEMANAI-Administrator
      • COLEMANAI-User
  • Download package: https://developer.infor.com/wp-content/uploads/2024/07/OptimizationDownloads.zip

Tutorial

https://youtu.be/vI_YMVhTDiI?si=hHLbd3LVw8y45D9v

A video walkthrough of the tutorial below.

1. Navigate to the Artificial Intelligence section of the Infor CloudSuite.

2. From the Infor AI home page, navigate to the 'Datasets' option under the 'Data Collection' tab on the left-hand menu.

3. Add the datasets included in the download above. Add data from the files 'BudgetLimit.csv', 'SelectionProblem.csv', and 'SpaceLimit.csv' as three distinct datasets. For each imported file, provide the details below, then save the dataset.

  • Name: A unique identifier referring to the current file's dataset.
  • File: The current file that we want to upload to the platform and import.
  • Delimiter: The character or symbol used to separate (delimit) individual fields (columns) within the dataset file.

4. Once all three files have been loaded successfully, navigate to the 'Groups' option under the 'Data Collection' tab.

5. Add a data group, select all three files for the group, and give it a name.

6. Navigate to the 'Quests' section of the 'Optimization' option in the left-hand menu. The 'Quests' feature is where we will build the model.

7. Add a new quest and provide the following:

  • Name: A unique identifier referring to the optimization model. In this case, we will name our model 'Profit Optimization'.
  • Set Data Collection: The selection of the data group. In this case, select the 'AssortmentDemostration' data group (with whatever name you chose) that we created in the prior steps.

8. In the center of the 'Quests Design' page, there are tiles at the bottom of the screen that help with building an optimization model. You can drag and drop each option to the center of the page, within their respective columns. Be sure to click the 'Save' button at the top when finished adding activities. In this tutorial, the 'Prepare Data' tile is not used.

  • 'Prepare Data': Allows you to perform data preparation tasks specific to the model, including data cleaning, handling missing values, scaling variables, and encoding categorical variables.
  • 'Setup Model': Enables you to define a mathematical representation of the optimization problem, including its goal and any constraints.
  • 'Optimize Model': Triggers the optimization process; the model is solved using optimization algorithms and techniques to find the best solution that satisfies the given conditions.
  • 'Results': Provides the output and insights from the optimization process, such as the optimal value of the objective function, decision variables, satisfied and unsatisfied constraints, and additional analysis.

9. To set up the model, drag and drop the 'Setup Model' tile into its designated column, which is the space allocated above the tile, and provide the naming information. For our model, we will use 'Profit Optimization'.

10. Provide information for the Sets/Indices & Constants section. This section represents collections of related elements in the optimization model with which decision variables and/or constraints are associated. In the video walkthrough, variable names are shortened to a single character. Although not required, this extra step helps make the associated mathematical equations easier to read. Be sure to click the 'Save' button when finished.

  • 'Space_Limit' is shortened to 's'.
  • 'Item_Details' is shortened to 'i'.
  • 'Budget_Limit' is shortened to 'b'.

11. Click on the Decision Variables tab. This section represents the unknown quantities that you can control or choose in an optimization problem -- in this case, a variable representing, for each item, whether or not it is selected in our set of items for sale. Add the decision variable by clicking the '+' button and provide the following information. Be sure to click the 'Save' button when finished.

  • Name: A unique identifier referring to the decision variable, which we will name 'ItemSelected'.
  • Variable: The label or name assigned to represent the decision variable within the model. We will use 'x', the independent variable in the equation.
  • Dimension: The dimensionality that the decision variable can take. In this case, 1.
  • Index: The set instances in which our decision variable will have a value. In this case, the index 'i' serves as an indexing element over all of the potential items we consider, as each individual item needs its own decision variable value: selected or not selected.
  • Domain: The valid range that the decision variable can take, which helps capture the problem's constraints, practicality, and restrictions. Select the 'Binary' option from the drop-down menu to represent the 'yes'/'no' choice of whether an item is selected for the assortment.

12. Move to the Constraints tab. This section represents the logical conditions that the solution to the optimization problem must satisfy -- in this case, the budget limit and the amount of space available. Add these constraints by clicking the '+' button and provide the following information. Be sure to click the 'Save' button when finished.

  • Name: A unique identifier referring to the constraint.
      • 'Budget' represents our maximum budget constraint.
      • 'Space' represents our space availability constraint.
  • Dimension: The dimensionality of the constraint.
      • The dimension for 'Budget' is 1.
      • The dimension for 'Space' is 1.
  • Index: The set or range of values that the constraint can take.
      • The index variable name for 'Budget' is 'b'.
      • The index variable name for 'Space' is 's'.
  • Condition: An optional field that allows for more complex logical filtering. We will not define a condition in this tutorial. For more information, refer to the product's documentation.
  • Function Declaration: The mathematical expression that defines the constraint. You can use the expression builder by clicking the pencil and providing the appropriate parameters; be sure to click the 'Save' button when finished.
      • For 'Budget', the function declaration is the sigma function over the sets/indices 'i' and 'b' of the decision variable 'x', with the parameters 'Item_Details.COST' and 'Budget_Limit.BUDGET_LIMIT'. This expression states that the sum of the costs of the selected items in the assortment must be less than or equal to the allocated budget:

        SUM{x[i] * Item_Details.COST[i]} <= Budget_Limit.BUDGET_LIMIT[b]

      • For 'Space', the function declaration is the sigma function over the sets/indices 'i' and 's' of the decision variable 'x', with the parameters 'Item_Details.SPACE' and 'Space_Limit.SPACE_LIMIT'. This expression states that the sum of the space taken by the selected items in the assortment must be less than or equal to the space available:

        SUM{x[i] * Item_Details.SPACE[i]} <= Space_Limit.SPACE_LIMIT[s]

13. Move to the Objectives tab. This section represents a mathematical equation that maximizes or minimizes a numeric value to solve the optimization problem. Click the '+' button to add a new objective function and provide the following information. Be sure to click the 'Save' button when finished.

  • Name: A unique identifier referring to the objective, which we will name 'Revenue'.
  • Sense: Specifies the direction of the goal -- whether to maximize or minimize the value of the objective. In this case, select 'MAXIMIZE' from the drop-down menu.
  • Function Declaration: The mathematical expression that defines the objective. In this case, the function declaration is the sigma function over the set/index 'i' of the decision variable 'x', with the parameter 'Item_Details.POTENTIAL_REVENUE'. This expression states that the total potential revenue of the selected items in the assortment should be maximized:

        SUM{x[i] * Item_Details.POTENTIAL_REVENUE[i]}

14.
Return to the quest layout, and add an Optimize Model block to the workflow. Provide the following information and be sure to save your work when finished: * Name: A unique identifier referring to the optimize model object, which we will name ' _Optimize CBC_ '. * Solver: Refers to the optimization algorithm or solver that will be used to find the optimal solution for the model. In this case, we will stick with the default, ' _CBC_ ' (Coin-Or Branch and Cut), which is an open-source linear program (LP) and mixed-integer program (MIP) solver. This solver is intended to be used primarily as a callable library to create customized branch-and-cut solvers. Refer to the '_Solvers Reference_ ' section under '_Optimization_ ' for more information about the different solvers. * Model/Custom Algorithm: Refers to the optimization model that we are currently using, which will be ' _Profit Optimization_ ' in this tutorial. * Iterations: An optional field that refers to the limit the number of iterations that will be used. This is helpful to use for large or time-consuming tasks. * Endpoint Schema: Refers to the all the components that are going to be included in the JSON message. * Parameters Selected: Select all the parameters from the selection list on the left. * Constants Selected: Select all the constants from the selection list on the left. * When finished with selections, click on the 'Run' button and 'Save' button within the 'Endpoint Schema' window. 15. Optional: Drag and drop another ' _Optimize Model_ ' tile as we utilize another solver, so that we can make comparisons. Be sure to save your work when finished. * Name: ' _Optimize Bonmin_ ' * Solver: Select ' _BONMIN_ ' from the drop-down menu. BONMIN is an open-source C++ code for solving linear (LP), non-linear, and general mixed integer nonlinear programming (MINLP) problems. * Model/Custom Algorithm: ' _Profit Optimization_ '. 
* Endpoint Schema: Refers to the all the components that are going to be included in the JSON message. * Parameters Selected: Select all the parameters from the selection list on the left. * Constants Selected: Select all the constants from the selection list on the left. * When finished with selections, click on the 'Run' button and 'Save' button within the 'Endpoint Schema' window. 16. Save your current progress and run the quest by clicking on the play button. The ' _Results_ ' tile will provide insights about the model's performance when the optimizer run is complete. 17. We can then deploy the best performing solver, which is the ' _CBC_ ' solver in our case as it had less iterations, a faster runtime, and was able to identify the same maximum profit than '_BONMIN'_. Select the solver of choice and click on the 'Production Quest' button. When prompted to select the type of production quest to deploy, select ' _Realtime Production_ ' and click on the 'Create Production Quest' button. 18. Save and run the production quest. Once finished, you can deploy the endpoint by clicking on the 'Endpoint' button. To view the results or for further testing, navigate to the created endpoint.
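The quest assembled above amounts to a 0/1 knapsack model: maximize total potential revenue subject to budget and space limits, with one binary selection variable per item. As a minimal illustration of the same logic outside the platform (the item data is made up, and a brute-force search stands in for the CBC solver the quest uses):

```python
from itertools import product

# Hypothetical item data: (name, cost, space, potential_revenue).
items = [
    ("Chair", 40, 2, 90),
    ("Desk", 120, 6, 200),
    ("Lamp", 15, 1, 30),
    ("Shelf", 60, 4, 100),
]
BUDGET_LIMIT = 150
SPACE_LIMIT = 8

best_revenue, best_selection = 0, (0,) * len(items)
# x plays the role of the 'ItemSelected' decision variable: one binary
# value per item ('Binary' domain), enumerated exhaustively here.
for x in product((0, 1), repeat=len(items)):
    cost = sum(xi * item[1] for xi, item in zip(x, items))
    space = sum(xi * item[2] for xi, item in zip(x, items))
    # The Budget and Space constraints must both hold.
    if cost <= BUDGET_LIMIT and space <= SPACE_LIMIT:
        revenue = sum(xi * item[3] for xi, item in zip(x, items))
        if revenue > best_revenue:  # objective sense: MAXIMIZE
            best_revenue, best_selection = revenue, x

chosen = [item[0] for xi, item in zip(best_selection, items) if xi]
print(best_revenue, chosen)  # 230 ['Desk', 'Lamp']
```

With a realistic assortment size, exhaustive search becomes infeasible, which is why the quest delegates to an LP/MIP solver such as CBC or BONMIN.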
Using ad-hoc datasets

Overview

While most of your machine learning applications consume data from the data lake, you may encounter a scenario where you need an ad-hoc import of a CSV file for some supplemental data for your machine learning model. While best practices suggest an automated import of periodic/updated data into the data lake, this is a case where a one-time-use dataset or a static dataset is required.


Components

Requirements

  • Access to an Infor CloudSuite
  • User privileges to Infor AI
  • Delimited data on a local machine
  • Optional InforU courses:
    • Infor OS: Foundation for Multi-Tenant - Part 1
    • Infor OS: Foundation for Multi-Tenant - Part 2
    • Coleman AI Platform: Enablement Overview

Tutorial

Difficulty: Easy
Estimated completion time: 5 minutes

The datasets section of Infor AI is the staging area of data for machine learning consumption. Importing files is straightforward, with a few items of note to pay attention to.

1. Verify data structure

Blindly importing data is dangerous. It is good practice to understand your data before import: know its structure and its data types. Some attention to this up front will save effort on the development side. For import into Infor AI, your data must be in some delimited form, usually CSV. Tab-separated data can also be read, or a custom delimiter can be used. Understanding which variables (columns) are numeric and which are strings is an important distinction as well. Infor AI will make some assumptions about data types, but in the end, you are the keeper of the data and should verify them.
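As a quick sanity check before import, you can inspect the delimiter and rough data types yourself. The sketch below uses Python's standard csv module on a small in-memory sample; the data and the crude type-inference rule are illustrative, not part of Infor AI:

```python
import csv
import io

# A small in-memory sample standing in for your local delimited file
# (hypothetical data; read from your actual file in practice).
raw = "item,cost,space\nChair,19.99,2\nDesk,120.00,6\nLamp,8.50,1\n"

# Detect the delimiter rather than assuming one.
dialect = csv.Sniffer().sniff(raw)
rows = list(csv.reader(io.StringIO(raw), dialect))
header, data = rows[0], rows[1:]

def infer_type(values):
    """Crude illustrative inference: 'double' if every value parses as a
    number, otherwise 'string'. Compare against what Infor AI guesses."""
    try:
        for v in values:
            float(v)
        return "double"
    except ValueError:
        return "string"

types = {col: infer_type([row[i] for row in data]) for i, col in enumerate(header)}
print(dialect.delimiter, types)
```

Checking the result against the Metadata tab after import catches mismatched type guesses before they cost you a model run.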

Additionally, pay attention to any special datatypes like date formats and time formats. These formats are often read in as strings and it takes some processing to treat them as date/time data.

Select the appropriate delimiter before data input. You will also have the opportunity to define any custom data formats. These are particularly important for date and time data, as there are many accepted formats for such data. Identifying the format early will help when processing the data later. While certainly not an exhaustive list, common datetime formats can be found here for reference.
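To see why identifying the format matters, note that the same calendar date can arrive in several textual forms, each needing a different format string to parse. A small Python illustration (the formats shown are examples, not a complete list):

```python
from datetime import datetime

# The same date written in three common formats; each needs its own
# format string to be parsed correctly.
samples = [
    ("2024-07-15", "%Y-%m-%d"),
    ("15/07/2024", "%d/%m/%Y"),
    ("Jul 15, 2024", "%b %d, %Y"),
]
parsed = [datetime.strptime(text, fmt) for text, fmt in samples]
print(parsed[0].date())  # all three parse to 2024-07-15
```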

Your data should also be structured such that each variable is a column and each observation (or case) is a row. Your data may optionally have a header which gives names to each variable.

It is also best practice to have done some data investigation at this point. Understanding the extent of missing values, value distribution, and correlations in your data will help develop better machine learning models.

2. Import data

In the datasets section of Infor AI, click "+Add" and select import from "File". Use the navigator to find your file. Select the appropriate delimiter and identify any custom data type formats. You will also need to select if your file has a header or not.

3. Examine Metadata

Press the preview button after selecting your data file. This should give you a brief glimpse of the data below, as well as open the Metadata tab. Select Metadata and view your data types. Infor AI has made some guesses as to your data types (double/string/float/boolean, etc.), and you can change any you need to customize. This is also where you can rename variables as desired, particularly if your data did not have a header. You can always modify data types in a quest, but there are time and computational savings involved in setting these correctly on import.

4. Save and Examine

When you are happy with your metadata, click the save icon and Infor AI will process and load the data. Once complete, the Statistics tab will be available, giving you some insight into the structure of your data. This is a great place to double-check for missing values and to understand the spread and shape of your data.

5. Use your data

Your dataset is now ready for use in a quest. Import data blocks in any quest will now have your new dataset available in the dropdown.

Using APIs to access exposed Endpoints


Best practices


Analytics & Reporting in CloudSuite Service Industries


Analytics & Reporting in M3 CloudSuites


Backend as a Service (BaaS)


Data Fabric

Manage data on the platform so that humans and systems can securely access information from anywhere.


Digital Assistant

Streamline the user experience with a digital assistant that helps your employees navigate and access information by voice or chat.


Document Management

Leverage Infor's central document repository to manage your enterprise documents in the cloud.


Governance, Risk and Compliance


Integration with ION

Create a unified application topology using the integration hub in the Infor Platform.


Portal and Workspaces


Robotic Process Automation

RPA automates repetitive tasks and empowers your team to focus on what they do best.


Security & User Management

Tutorials that help you leverage Infor's cloud security and user management capabilities.
