Saturday, 28 April 2018

Balancing supply assurance with effective inventory management

It’s easy to get pulled into the trap. Solving short-term supply assurance problems is, after all, a very important part of a buyer’s skill set; and in many organizations it’s the only time that a buyer gets recognized for his or her efforts. But buyers can get caught up in what is sometimes called the “tyranny of the urgent.” They spend so much time solving the latest crisis – a late supplier shipment, a drop-in requirement, a scrap loss that has created a hole in the supply plan – that they don’t have the opportunity to work on other important concerns.

We want buyers to do more than fight fires, don’t we?

At most companies, we need buyers to do a lot more than fight fires. They need to place POs on time. They need to maintain a good working relationship with their suppliers. Depending on how your organization is set up, they may need to develop new suppliers, negotiate with suppliers, or even craft a commodity strategy. And, at most companies, they make up the front line of inventory management.

That last part of a buyer’s responsibilities can be the key. If a buyer does a good job managing inventory for the parts under his or her responsibility, there won’t be as many fires to fight. But companies often make two big mistakes in the way they set up their purchasing processes:

1. They expect the MRP process (in the ERP system) to automatically schedule parts to manage inventory effectively. Although MRP can do wonderful things, and although many companies fail to leverage the MRP process as much as they should (to manage inventory & maintain high materials availability), it requires ongoing maintenance of the item master and ongoing system oversight to keep it working at its best. Even then, it’s unrealistic to expect MRP to magically align supply and demand perfectly without any extra effort.

2. They encourage their buyers to aggressively manage all the parts under their responsibility, instead of only the parts that really matter. Often, this just creates unnecessary and counterproductive work for the buyer. Does it really make sense to invest a half-hour of a buyer’s time to adjust the delivery schedule of a $25 reel of components by one week?


Part of the solution is to properly apply what now seems a fairly old-fashioned concept: ABC codes, an application of Pareto’s Principle. The concept is simple: rank your purchases in order of their expected value (cost each times the quantity you expect to purchase over a time horizon – often the next three or four months). Sort the parts from the highest purchase value to the lowest, and separate the small number of items that constitute the top 75% to 85% of the total purchase value – these are the “A” parts. Separate out the items that represent the next 10% to 15% of the total purchase value. These are the “B” parts. The rest are “C” parts.
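
As a minimal sketch (not from the original article), here is one way to assign ABC codes to a parts list in Python, using cut-offs in the ranges described above; the field names are hypothetical:

def assign_abc_codes(parts, a_cut=0.80, b_cut=0.95):
    # parts: list of dicts with 'part', 'unit_cost' and 'expected_qty'
    # (hypothetical field names, for illustration only)
    for p in parts:
        # expected purchase value over the planning horizon
        p["value"] = p["unit_cost"] * p["expected_qty"]
    total = sum(p["value"] for p in parts) or 1.0
    running = 0.0
    # sort from the highest purchase value to the lowest
    for p in sorted(parts, key=lambda x: x["value"], reverse=True):
        share_before = running / total  # cumulative share of value so far
        if share_before < a_cut:
            p["abc"] = "A"
        elif share_before < b_cut:
            p["abc"] = "B"
        else:
            p["abc"] = "C"
        running += p["value"]
    return parts

example = [
    {"part": "PN-100", "unit_cost": 45.00, "expected_qty": 12000},
    {"part": "PN-200", "unit_cost": 0.02, "expected_qty": 50000},
    {"part": "PN-300", "unit_cost": 3.10, "expected_qty": 8000},
]
for p in assign_abc_codes(example):
    print(p["part"], p["abc"], round(p["value"], 2))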

The practical application of this concept is also simple to understand: micro-manage the small number of A parts. These make the biggest difference for inventory management. On the other hand, encourage your buyers to maintain ample stock of C parts, and schedule deliveries of those parts just once or twice a month. Even if you carry an extra month of inventory of C parts, it won’t make much difference in your overall inventory turns. But keeping a comfortable level of inventory of C parts will reduce the workload on your buyers (as well as your receiving dock!), leaving them more time to focus on managing the critical A parts. (B parts, obviously, fall between the two; and it’s worth noting that B parts can shift to become either A or C parts.)

This works in every industry, but the concept is especially powerful in electronics. In my experience, within an electronics manufacturing facility, generally:

◈ The first 1% of part numbers account for at least 50% of purchased part value.
◈ The first 5% of part numbers account for at least 80% of purchased part value.
◈ The first 15% of part numbers account for at least 95% of purchased part value.
◈ And consequently, somewhere around 85% of the purchased parts in a facility are C parts – the last 5% of purchased part value. (As a matter of fact, there usually are so many C parts that many operations split the C parts further – either by identifying the last 1% of purchased part value as “D” parts, or else by separating out the parts with 0 expected purchase value.)

Here’s a simple calculation that will show the power of this concept. Let’s say that you set up your ABC codes with A parts representing the top 80% of purchased value, B parts the next 15%, and C parts the last 5%. Now, let’s target the following inventory levels (on average):

◈ 2 weeks on hand for the A parts (80% of purchased value)
◈ 4 weeks on hand for the B parts (15% of value)
◈ 8 weeks on hand for the C parts (5% of value)

What sort of overall inventory performance would that create? Let’s do the math:

(2 weeks * 80%) + (4 weeks * 15%) + (8 weeks * 5%) = 1.6 + 0.6 + 0.4 = 2.6 weeks, on average. That’s equivalent to just over 18 days on hand, or 20 inventory turns (52/2.6), which would be pretty exceptional inventory performance at most facilities.
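
If it helps to see the arithmetic generalized, here is a tiny illustrative calculation that mirrors the numbers above:

# target weeks on hand by class, paired with each class's share of purchased value
targets = {"A": (2, 0.80), "B": (4, 0.15), "C": (8, 0.05)}

weeks_on_hand = sum(weeks * share for weeks, share in targets.values())
inventory_turns = 52 / weeks_on_hand

print(f"Average weeks on hand: {weeks_on_hand:.1f}")  # 2.6
print(f"Inventory turns: {inventory_turns:.1f}")      # 20.0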

Essentially, what we’re telling the buyers is: “just keep plenty of inventory on hand for most of your parts. But you’ve got to really aggressively manage this handful of parts (the A’s); and pay a little more attention to this group of parts (the B’s).” Most of the buyers I’ve known will happily take that deal.

It can’t be that easy, can it? What’s the catch? You should be aware of two considerations:

1. That calculation above doesn’t account for excess and obsolete materials in your facility. If you have a large value of (unreserved) E&O in your facility, that will drag down your inventory performance.
2. Demand changes over time – sometimes gradually, sometimes drastically. A parts become B parts (and sometimes C parts!); B parts become As. It isn’t enough to simply state the strategy above – you need to set up a management process to execute the strategy, and maintain it every week.

In the second half of this post (below), I’ll discuss how to set up the management process to bring your actual inventory into alignment with your strategy.

Turning fire-fighters into fire-preventers

Balancing supply assurance with effective inventory management wouldn’t be so difficult if things were just more predictable. But buyers quickly discover that surprises just go with the job.  (If we were real firefighters, it would be as if every Monday morning when we arrived for work, we’d find that someone had randomly scattered oily rags & lit candles all around town!) Even if we can’t prevent every surprise, we can avoid or block most of them… and make the remaining situations easier to resolve.

Applying Pareto’s Principle by using the ABC methodology really does work to prevent most supply problems, giving buyers the ability to focus their attention on the few parts that really matter. Allowing ample inventory for the C parts (and a “comfortable” level for the B parts) reduces the number of potential fires that we’ll need to fight, and lets us focus all our attention on the few critical parts. If there’s going to be a fire, it’ll be right in front of our eyes.

But there are a couple of problems:

1. Most operations run the ABC assignment process infrequently – once a quarter or even less often. Back when facilities ran MRP once a week and issued paper purchase orders, that might have been ok, but demand and supply tend to be a lot more volatile these days. Many of the ABC classifications will shift long before the next time the ABC assignment process is run.

2. Even if the part’s classification is stable, how do you ensure that your buyer’s parts are actually running at the desired levels – the A parts don’t have too much inventory, and the C parts don’t have too little?

Additionally, every buyer knows that even within a class of parts there are differences. Some A parts will have very stable demand and reliable suppliers. It may be possible to put those parts on a Just-in-Time basis (or something very close to that), and maintain their inventory at just days on hand. Other A parts may have unpredictable demand and less-than-reliable suppliers; a prudent buyer will want to carry a little extra inventory on those parts. On the other side of the spectrum, some C parts are just too bulky to carry six to eight weeks on hand.

So, we need a monitoring and management process that gives the buyers the latitude to manage the items under their control in a way that lets them both optimize supply assurance and reach inventory targets.

There’s a way to do this, and it isn’t terribly difficult. It relies on two key concepts:

A. Inventory Targets by ABC Classification

B. Inventory Entitlement by buyer

Inventory targets are easy to understand. Decide what your target inventory level is (in terms of days or weeks on hand) by ABC class code. The important thing to remember is that the target is an aggregate goal. The goal is to achieve the target inventory level on average. For instance, if the forecasted (total) inventory consumption for the A parts was $1M/week, and the A target was 2 weeks on hand, we would aim for $2M inventory for the A parts in total. That’s equivalent to two weeks on average, even though some parts might have 3 weeks on hand and others may only have 1.

The inventory entitlement concept keeps this process fair for each buyer. One buyer may primarily manage A parts; another may have B’s and C’s. We should encourage the second buyer to carry more inventory, and expect lower turns. (If they are mostly managing C parts, they may be able to manage a higher number of parts, too.) The entitlement is calculated by taking all of the parts under their management and calculating the total inventory value they are entitled to. (If the target for A parts is 2 weeks on hand, for instance, they would be given an entitlement of [weekly demand qty] * 2 * [cost each] for each A part. For B and C parts, the calculation would be the same, except it would use a different target number.)

Then, by totaling up the actual total inventory value on hand (for the parts they manage), it’s easy to calculate an overall performance level:

Inventory Performance = (Inventory Entitlement Value) / (Actual Inventory Value on hand)

So a buyer with 25% too much inventory value would have a performance of 80% (1/1.25 = 0.8).
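
Here is a hedged sketch of how the entitlement and performance figures might be computed for one buyer’s parts; the field names are illustrative, not taken from any particular ERP system:

TARGET_WEEKS = {"A": 2, "B": 4, "C": 8}  # target weeks on hand by ABC class

def inventory_entitlement(part):
    # entitled value = weekly demand qty * target weeks * cost each
    return part["weekly_demand_qty"] * TARGET_WEEKS[part["abc"]] * part["unit_cost"]

def buyer_performance(parts):
    # entitlement value divided by actual on-hand value, across one buyer's parts
    entitled = sum(inventory_entitlement(p) for p in parts)
    actual = sum(p["on_hand_qty"] * p["unit_cost"] for p in parts)
    return entitled / actual if actual else 1.0

parts = [
    {"abc": "A", "weekly_demand_qty": 500, "unit_cost": 40.00, "on_hand_qty": 1500},
    {"abc": "C", "weekly_demand_qty": 200, "unit_cost": 0.05, "on_hand_qty": 3000},
]
print(f"Performance: {buyer_performance(parts):.0%}")  # the A part is over target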

Here’s an example of what one buyer’s report might look like:


Most of this buyer’s A parts are over their target inventory value, and most of their B and C parts are below their targets. The purchasing manager and the buyers have a simple task: bring the value on hand closer to the target value. In this example, most of the attention will need to be given to the first two parts on the report. Bringing them down to the target value, equivalent to two weeks on hand, would put this buyer’s performance above 99%. This won’t happen overnight; the second part on the list has almost six weeks’ inventory on hand. But over time the buyer will be able to bring inventory more closely in line with the target.

The same goes for the B and C parts. Planning for them to run at higher inventory levels will have a negligible effect on inventory value, but the higher inventory will eliminate the need for ongoing attention. That will let the buyer focus attention on solving the more difficult problem of keeping the A parts at just the right inventory levels.

This technique has been successful in several large electronics facilities. Of course, it tends to be more successful with steady demand and cooperative suppliers. But it is so good at pointing the buyer and his or her purchasing manager at the parts that need extra attention that it will be effective even when it turns out that 95% performance or better is hard to attain.

You can add sophistication by also measuring the volatility of demand (the normalized standard deviation of demand is a typical method) or by assessing the reliability of supply (a bit trickier; often, multiple methods are needed). Either one will help identify the parts within a classification where the buyer should plan on a little more or a little less inventory.
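
A minimal sketch of the volatility measure mentioned above (the normalized standard deviation of demand, also called the coefficient of variation), assuming weekly demand history is available as a simple list:

from statistics import mean, pstdev

def demand_cv(weekly_demand):
    # coefficient of variation: standard deviation of demand divided by mean demand
    avg = mean(weekly_demand)
    return pstdev(weekly_demand) / avg if avg else float("inf")

stable = [100, 105, 95, 102, 98]
lumpy = [0, 300, 10, 0, 250]
print(round(demand_cv(stable), 2))  # low CV: candidate for leaner stock
print(round(demand_cv(lumpy), 2))   # high CV: plan on a little extra inventory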

Thursday, 26 April 2018

Five features of information security every cloud platform should provide

1. Identity and access management (IAM)


Any interaction with a cloud platform should start with establishing who or what is doing the interacting—an administrator, a user, or even a service. Look for providers that offer a consistent way to identify and authenticate anyone accessing applications developed in the cloud.


Similarly, cloud platform vendors should offer a way for developers to build authentication into their mobile and web apps to control end user access. For example, IBM® Cloud offers developers App ID as a way to do so.

Organizations that have an existing identity and access management (IAM) system should expect a cloud provider to integrate it into the cloud platform for them—after all, IAM is extremely important to knowing who did what and when.

Finally, as part of IAM, a provider should automatically log all access requests and transactions and make them available for auditing purposes.

2. Networking security and host security


These three technologies are crucial for maintaining network security in the cloud:

◈ Security groups and firewalls — Network firewalls are essential for protecting perimeters (virtual private cloud/subnet-level network access) and creating network security groups for instance-level access. Make sure your cloud providers offer these protections.

◈ Micro-segmentation — Developing applications cloud-natively as a set of small services provides a security advantage: you can isolate them using network segments. Look for a cloud platform that implements and automates micro-segmentation through network configuration.

◈ Trusted compute hosts — Cloud platform providers that offer hardware with load-verify-launch protocols can give you highly secure hosts for running your workloads. Using a trusted platform module (TPM) with Intel Trusted Execution Technology (Intel TXT) in compute hosts is an example of how a provider might fundamentally secure its platform.

3. Data security: encryption and key management


It’s a bootstrap dilemma of cloud platforms: encryption, to be useful, depends on keeping encryption keys from being accessed without authorization. So how do you prevent administrators of a platform you don’t control from accessing your keys? Bring your own keys.

A bring-your-own-keys (BYOK) model protects cloud workloads that require encryption. In this approach, your key management system generates a key on premises and passes it to the provider’s key management service. The root keys never leave the boundaries of the key management system, and you’re able to audit all key management activities. Any platform provider serious about protecting client data should offer BYOK key management for encryption of data at rest, data in motion and container images.
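
To make the key-wrapping idea concrete, here is a hedged, conceptual sketch in Python using the third-party cryptography package. It is not the BYOK API of any particular provider; it only illustrates a root key, held in your own key management system, wrapping a data encryption key that is then used for data at rest:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# root key: generated and held in your on-premises key management system
root_key = AESGCM.generate_key(bit_length=256)

# data encryption key (DEK): used in the cloud to encrypt a workload's data
dek = AESGCM.generate_key(bit_length=256)

# wrap (encrypt) the DEK with the root key before handing it to the provider
nonce = os.urandom(12)
wrapped_dek = AESGCM(root_key).encrypt(nonce, dek, b"key-wrap")

# the provider stores only the wrapped DEK; unwrapping it requires the root key
unwrapped_dek = AESGCM(root_key).decrypt(nonce, wrapped_dek, b"key-wrap")
assert unwrapped_dek == dek

# the DEK, once unwrapped inside an authorized service, encrypts data at rest
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"sensitive record", None)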

4. Application security and DevSecOps


As your DevOps team members build cloud-native apps and work with container technologies, they need a way to integrate security checks without stalling business outcomes. An automated scanning system helps ensure trust by searching for potential vulnerabilities in your container images before you start running them.

However, since simply scanning registry images can miss problems such as drift from static image to deployed containers, look for a cloud vendor that also scans running containers for anomalies. For example, IBM Cloud Container Service offers a Vulnerability Advisor to provide both static and live container security through image scanning.

5. Visibility and intelligence


Expect full visibility into your cloud-based workloads, APIs, microservices—everything. Ask cloud providers you’re considering if they have a built-in cloud activity tracker that can create a trail of all access (including web and mobile access) to the platform, services, and applications. Your organization should be able to consume these logs and integrate them into your enterprise security information and event management (SIEM) system.

Wednesday, 25 April 2018

Scale security while innovating microservices fast

CISOs are notoriously risk-averse and compliance-focused, providing policies for IT and App Dev to enforce. In contrast, app dev leaders, serving business outcomes, want to eliminate DevOps friction wherever possible in the continuous integration and development of applications within a cloud-native, microservices architecture. What approach satisfies these conflicting demands while accomplishing the end goal of scaling security?

Establishing a chain of trust to scale security


As the foundation of information security, a hardware-rooted chain of trust verifies the integrity of every relevant component in the cloud platform, giving you security automation that flexibly integrates into the DevOps pipeline. A true chain of trust would start in the host chip firmware and build up through the container engine and orchestration system, securing all critical data and workloads during an application’s lifecycle.


Hardware is the ideal foundation because it is rooted in silicon, making it difficult for hackers to alter.

The chain of trust would be built from this root using the measure-and-verify security model, with each component measuring, verifying and launching the next level. This process would extend to the container engine, creating a trust boundary, with measurements stored in a Trusted Platform Module (TPM) on the host.   
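
As a conceptual sketch only (not vendor code), the measure-and-verify model can be pictured as a hash chain in which each stage’s measurement extends a register, much as a TPM PCR extend operation works; the stage names below are hypothetical:

import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    # PCR-style extend: new value = SHA-256(old value || measurement)
    return hashlib.sha256(register + measurement).digest()

# measurements of each stage, taken before that stage is launched
stages = [b"firmware-image", b"bootloader", b"os-kernel", b"container-engine"]

register = b"\x00" * 32  # registers start zeroed at power-on
for stage in stages:
    register = extend(register, hashlib.sha256(stage).digest())

# an attestation service compares the final value against a known-good value
known_good = register  # in practice, recorded from a trusted reference build
print("trusted" if register == known_good else "measurement mismatch")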

So far, so good—but now you must extend this process beyond the host trust boundary to the container orchestration level. You must continue to scale security.

Attestation software on a different server can verify current measurements against known good values. The container orchestrator communicates with the attestation server to verify the integrity of worker hosts, which in turn set up and manage the containers deployed on them. All communication beyond the host trust boundary is encrypted, resulting in a highly automated, trusted container system.


How to scale security management for the enterprise


What do you get with a fully implemented chain of trust?  

◈ Enhanced transparency and scalability: Because a chain of trust facilitates automated security, DevOps teams are free to work at unimpeded velocity. They only need to manage the security policies against which the trusted container system evaluates its measurements.  

◈ Geographical workload policy verification: Smart container orchestration limits movement to approved locations only.  

◈ Container integrity assurance: When containers are moved, the attestor checks to ensure that no tampering occurred during the process. The system verifies that the moved container is the same as the originally created container.

◈ Security for sensitive data: Encrypted containers can only be decrypted on approved servers, protecting data in transit from exposure and misuse.  

◈ Simplified compliance controls and reporting: A metadata audit trail provides visibility and auditable evidence that critical container workloads are running on trusted servers.

The chain of trust architecture is designed to meet the urgent need for both security and rapid innovation. Security officers can formulate security policies that are automatically applied to every container being created or moved. Beyond maintaining the policies themselves in a manifest, each step in the sequence is automated, enabling DevOps teams to quickly build and deploy applications without manually managing security.

As your team evaluates cloud platforms, ask vendors to explain how they establish and maintain trust in the technology that will host your organization’s applications. It helps to have clear expectations going in.  

Tuesday, 24 April 2018

Setting up IBM Cloud App ID with your Azure Active Directory

We launched our newest IBM Cloud App ID feature, SAML 2.0 Federation. This feature allows you to easily manage user identities in your B2E apps while authenticating the users using existing enterprise flows and certified user repositories. In this blog we will use Azure Active Directory as an example identity provider and show how a developer can configure both App ID and Azure Active Directory so that:

◈ Active Directory authenticates app users
◈ App ID federates and manages user identities

App ID allows developers to easily add authentication, authorization and user profile services to apps and APIs running on IBM Cloud. With App ID SDKs and APIs, you can get a sign-in flow working in minutes, enable social log-in through Google and Facebook, and add email/password sign-in. The App ID User Profiles feature can be used to store information about your users, such as their app preferences. In short, App ID ensures that your app can be used only by authorized users and that authorized users have access to only what they should have access to. The app experience is custom, personalized and, most importantly, secure.

SAML 2.0 Federation Architecture


Before we begin, we should first review the architecture and flow of federation-based enterprise login and SSO using the SAML 2.0 framework. Here, Active Directory is the identity provider that provides enterprise identity and access management (IAM).


Federation-based enterprise login and SSO using SAML 2.0

1. The application user opens an application deployed in the cloud or invokes a protected cloud API.
2. App ID automatically redirects the user to the enterprise IAM identity provider.
3. The user is challenged to sign in using enterprise credentials and a familiar UI/UX.
4. On successful login, the enterprise IAM identity provider redirects the user back, supplying SAML assertions.
5. App ID creates access and identity tokens representing the user’s authorization and authentication and returns them to the application.
6. The application reads the tokens to make business decisions as well as to invoke downstream protected resources (a token-inspection sketch follows this list).
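
To illustrate step 6: the access and identity tokens App ID returns are JWTs, so a minimal, hedged sketch of inspecting their claims with the third-party PyJWT package might look like the following (the claim names shown are illustrative):

import jwt  # PyJWT

def inspect_token(token: str) -> dict:
    # decode without verifying the signature, for inspection only;
    # production code should verify the signature against App ID's public keys
    return jwt.decode(token, options={"verify_signature": False})

# id_token = <identity token returned by App ID after the SAML flow>
# claims = inspect_token(id_token)
# print(claims.get("email"), claims.get("name"))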

Configuration Steps


Before we begin:

You must have:

◈ An IBM Cloud account, logged in through a browser
◈ An App ID service instance
◈ An Azure account with the Active Directory service set up

Step 1

Sign in to your IBM Cloud, browse to the catalog and create an App ID instance. Under the Identity Providers menu, select SAML 2.0 Federation.


Step 2

Click on the Download SAML Metadata file. This will download a file appid-metadata.xml.


Let’s review some of the parameters defined in the metadata file. We need these parameters to configure the identity provider (a short parsing sketch follows the list below).

◈ <EntityDescriptor> identifies the application for which the SAML identity provider is being setup. EntityID is the unique identifier of the application.

◈ <SPSSODescriptor> describes the service provider (SP) requirements. App ID requires the protocol to be SAML 2.0. The service provider must sign its assertions.

◈ <NameIDFormat> defines how App ID and the identity provider uniquely identify subjects. In this case, App ID uses emailAddress and therefore the identity provider needs to associate the username with emailAddress.

◈ <AssertionConsumerService> describes the protocol and endpoint where the application expects to receive the authentication token.
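
If you prefer to pull these values out of the metadata programmatically, here is a hedged sketch using Python’s standard xml.etree.ElementTree module; it assumes the downloaded file follows the standard SAML 2.0 metadata schema described above:

import xml.etree.ElementTree as ET

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

entity = ET.parse("appid-metadata.xml").getroot()

# EntityID: the unique identifier of the application
print("EntityID:", entity.attrib.get("entityID"))

# AssertionConsumerService: where App ID expects the SAML response
acs = entity.find("md:SPSSODescriptor/md:AssertionConsumerService", NS)
print("ACS URL:", acs.attrib.get("Location"))

# NameIDFormat: how subjects are uniquely identified (emailAddress for App ID)
name_id = entity.find("md:SPSSODescriptor/md:NameIDFormat", NS)
print("NameIDFormat:", name_id.text)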

Step 3

Sign in to the Azure portal using your administrator account and browse to Active Directory > Enterprise Applications > New application. In the app gallery, add an unlisted app by selecting the Non-gallery application tile, then enter a Name for your application.


Step 4

You can now configure the single sign-on options and behavior for your application on Azure AD.

Step 4.1

Select the Configure single sign-on (required) option.

Step 4.2

Extract the Domain and URLs values from the App ID metadata file appid-metadata.xml:

◈ Identifier: This is the Entity ID value from appid-metadata.xml.
◈ Reply URL: This is the Assertion Consumer Service (ACS) URL value from appid-metadata.xml.
◈ User Identifier: Select user.email

Save the configuration.


Step 4.3

App ID supports the name, email, picture and locale custom attributes in the SAML assertions it receives from the identity provider. App ID can only consume these attributes if they are in the following format:

<Attribute NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" Name="name"><AttributeValue>Ada Lovelace</AttributeValue></Attribute>

NameFormat is the way that App ID interprets the Name field. The specified format, urn:oasis:names:tc:SAML:2.0:attrname-format:basic, is also the default format if no format is provided.

Active Directory does not have these attribute mappings by default. You can add them by checking the View and edit all other user attributes field. You will notice that Active Directory already has several attributes pre-defined, but these are not in the format that App ID expects, and therefore App ID ignores them in the SAML response. Custom attributes can be defined by going to Add Attribute, choosing one of the attribute names App ID supports (such as name or picture), choosing the right Azure attribute from the drop-down menu, and finally pasting the name format into the Namespace field.


You can set up a custom mapping for each of the App ID expected attributes in a similar manner.

Step 4.4

Click on Configure Your Application to obtain the values needed to configure Active Directory as the identity provider of App ID. You will also need to download the SAML Signing Certificate (Base64 encoded), which is a PEM encoded certificate that you will need for configuring App ID.


Step 5

You can now finish configuring the App ID instance. 

◈ entityID: Copied from SAML Entity ID field from Step 4.4
◈ Sign-in URL: Copied from SAML Single Sign-On Service URL field from Step 4.4
◈ Primary Certificate: Copied from SAML Signing Certificate – Base64 encoded from Step 4.4

Save your configuration.

Step 6

You can now test your configuration by clicking on the Test button. This will initiate an authentication request to Active Directory. Make sure you have saved your configuration before testing; otherwise, Test will not work.


Once you have entered the credential information and successfully authenticated with Active Directory, you should be presented with an App ID access token as well as an identity token.


Friday, 20 April 2018

Setting up IBM Cloud App ID with your Active Directory Federation Service

We launched our newest IBM Cloud App ID feature, SAML 2.0 Federation. This feature allows you to easily manage user identities in your B2E apps while authenticating the users using existing enterprise flows and certified user repositories. In this blog we will use a private Active Directory Federation Service (AD FS) as an example identity provider and show how a developer can configure both App ID and AD FS so that:

◈ AD FS authenticates app users
◈ App ID federates and manages user identities

App ID allows developers to easily add authentication, authorization and user profile services to apps and APIs running on IBM Cloud. With App ID SDKs and APIs, you can get a sign-in flow working in minutes, enable social log-in through Google and Facebook, and add email/password sign-in. The App ID User Profiles feature can be used to store information about your users, such as their app preferences. In short, App ID ensures that your app can be used only by authorized users and that authorized users have access to only what they should have access to. The app experience is custom, personalized and, most importantly, secure.

SAML 2.0 Federation Architecture


Before we begin, we should first review the architecture and flow of federation-based enterprise login and SSO using the SAML 2.0 framework. Here, AD FS is the identity provider that provides enterprise identity and access management (IAM).


Federation-based enterprise login and SSO using SAML 2.0

1. The application user opens an application deployed in the cloud or invokes a protected cloud API.
2. App ID automatically redirects the user to the enterprise IAM identity provider.
3. The user is challenged to sign in using enterprise credentials and a familiar UI/UX.
4. On successful login, the enterprise IAM identity provider redirects the user back, supplying SAML assertions.
5. App ID creates access and identity tokens representing the user’s authorization and authentication and returns them to the application.
6. The application reads the tokens to make business decisions as well as to invoke downstream protected resources.

Configuration Steps


Before we begin:

You must have:

◈ An IBM Cloud account, logged in through a browser
◈ An App ID service instance
◈ A local AD FS server set up

Step 1

Sign in to your IBM Cloud, browse to the catalog and create an App ID instance. Under the Identity Providers menu, select SAML 2.0 Federation. 


Step 2

Click on the Download SAML Metadata file. This will download a file appid-metadata.xml.


Let’s review some of the parameters defined in the metadata file. We need these parameters to configure the identity provider.

◈ <EntityDescriptor> identifies the application for which the SAML identity provider is being setup. EntityID is the unique identifier of the application.
◈ <SPSSODescriptor> describes the service provider (SP) requirements. App ID requires the protocol to be SAML 2.0. The service provider must sign its assertions.
◈ <NameIDFormat> defines how App ID and the identity provider uniquely identify subjects. App ID uses emailAddress and therefore the identity provider needs to associate the username with emailAddress.
◈ <AssertionConsumerService> describes the protocol and endpoint where the application expects to receive the authentication token.

Step 3

Open the AD FS Management application. To create a relying party trust instance, open the wizard by selecting Relying Party Trusts > Add Relying Party Trust… .


◈ Select the Claims Aware application type.
◈ Select Import data about the relying party from a file and browse to where you saved the App ID metadata file.
◈ Set your Display Name and Access Control Policy and click Finish.
◈ On the last page, you have the option of configuring your custom claims policy by choosing Configure claims issuance policy for this application. This wizard can also be opened independently. Step 4 will cover which custom attributes we can set and how to set them.

Step 4

App ID expects all SAML Responses to include a NameID attribute, which is used by App ID and the identity provider to uniquely identify subjects. This attribute must also conform to the format specified by App ID’s metadata file (see Step 2). To set up this mapping in AD FS, a custom rule must be set.

Open the Edit Claim Issuance Policy for App ID wizard and add the following rules.

◈ Your first claim rule template should be Send LDAP Attributes as Claims. This rule will map the AD attribute E-mail Address to a similarly named outgoing claim type.
     ◈ Give the rule a name such as LDAP Email rule
     ◈ Set Attribute Store to Active Directory
     ◈ Create a mapping from the E-mail Address LDAP attribute to the E-mail Address outgoing claim type
     ◈ Click OK


◈ Your second claim rule template should be Send Claims Using a Custom Rule. This rule will create a NameID, a unique identifier, in the format App ID expects.
     ◈ Name the rule Custom email rule
     ◈ Copy and paste the following custom rule and then click OK

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress");


Step 5

App ID also supports the name, email, picture and locale custom attributes in the SAML assertions it receives from the identity provider. App ID can only consume these attributes if they are in the following format:

<Attribute NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" Name="name"><AttributeValue>Ada Lovelace</AttributeValue></Attribute>

NameFormat is the way that App ID interprets the Name field. The specified format, urn:oasis:names:tc:SAML:2.0:attrname-format:basic, is also the default format if no format is provided.

To add these additional rules, open the Edit Claim Issuance Policy for App ID wizard and add the following rule(s).

◈ Your claim rule template should be Send Claims Using a Custom Rule. This rule will issue the name attribute in the format App ID expects.
     ◈ Name the rule Custom name rule
     ◈ Copy and paste the following custom rule and then click OK

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("name"), query = ";displayName;{0}", param = c.Value);

Similar rules can be added for email, picture, and locale attributes.

Step 6

You now need to obtain the AD FS metadata file  to finish configuring your App ID instance.

Step 6.1

You must first locate the file URL:

◈ Open the AD FS management console
◈ In the AD FS folder, expand Services and click Endpoints.
◈ Locate the FederationMetadata.xml file.


Step 6.2

Use a browser to navigate to that URL on the AD FS server and download the file. For example, https://localhost/FederationMetadata/2007-06/FederationMetadata.xml

Step 7

Finish configuring App ID by using the information in FederationMetadata.xml (a parsing sketch follows the list below).

◈ Set entityID to the attribute of the same name from the metadata file.
◈ Set Sign-in URL to the URL value for the SingleSignOnService attribute in the metadata file.
◈ Primary Certificate should be set to the base64 encoded signing certificate string located under KeyDescriptor use="signing"
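
As a hedged sketch, the same three values can be pulled out of FederationMetadata.xml with Python’s standard library; this assumes the standard SAML 2.0 metadata schema and ignores the extra WS-Federation sections that AD FS includes in the file:

import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.parse("FederationMetadata.xml").getroot()

# entityID of the AD FS identity provider
print("entityID:", root.attrib.get("entityID"))

idp = root.find("md:IDPSSODescriptor", NS)

# Sign-in URL: a SingleSignOnService endpoint (pick the binding App ID expects)
sso = idp.find("md:SingleSignOnService", NS)
print("Sign-in URL:", sso.attrib.get("Location"))

# Primary Certificate: the base64 signing certificate under KeyDescriptor use="signing"
for kd in idp.findall("md:KeyDescriptor", NS):
    if kd.attrib.get("use") == "signing":
        cert = kd.find("ds:KeyInfo/ds:X509Data/ds:X509Certificate", NS)
        print("Signing certificate:", cert.text.strip()[:40], "...")
        break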

Save the configuration data.

Step 8

You can now test your configuration by clicking on the Test button. This will initiate an authentication request to your AD FS and open the familiar authentication UI/UX. Make sure you have saved your configuration before testing; otherwise, Test will not work.

Once you have entered the credential information and successfully authenticated with AD FS, you should be presented with an App ID access token as well as an identity token.


You have successfully configured your App ID instance using an Active Directory Federation Service!

Make sure you check out some of our upcoming blog articles in our App ID SAML series:

◈ Setting up IBM Cloud App ID with Azure Active Directory
◈ Setting up IBM Cloud App ID with Ping One

Thursday, 19 April 2018

Transforming revenue cycle management can be simpler, faster and less expensive

Embedding technology enablers, analytics and process improvement into revenue cycle processes improves financial performance while reducing the cost of RCM operations by as much as 30%-70%.

The revenue cycle management (RCM) process for healthcare providers is complex, and inefficiencies exist across activities like insurance verification, billing, cash collections and denials management. With increases in co-pays, audits and scrutiny of claims, the inefficiencies of the process are beginning to eat into the profitability and financial well-being of several healthcare providers.


Having reviewed healthcare clients’ payor denial data over five years, we found that 40%-70% of issues arise in the front-end benefit verification and documentation processes. Another 20% are caused by the way the billing process is implemented, and barely 10%-20% fall in the post-billing collections life-cycle.


Chart 1: Reasons for claims denial

On the patient co-pay end, two issues drive almost 80% of process inefficiencies. The first is the lack of understanding of the patient obligation early in the billing life-cycle; the other is the delay between the actual service delivery and the billing or reimbursement request. Patients are often puzzled when they receive their bills, and since the date of service is long past, they have little motivation to address the billing immediately.

There is often a temptation to engage in a significant technology transformation project (aka, let’s get a new system) to address these issues. This may involve changing the core billing system and tightening the processes around the new system. Unfortunately, in a landscape of over 400 EHR vendors with limited end-to-end capabilities, only a few technology solutions can address most of the critical issues across the RCM life-cycle. Beyond the additional time needed to realize the benefits of a technology transformation, the very dynamic nature of the industry (for example, regulatory change, new innovation, adoption of new standards by payors) also increases the time to benefit. We often find that providers come out with a solution after 18 months that brings them from a five-year gap to a two-year gap but never closes it.

In our experience, an approach that adopts agile principles and focuses on incremental change within the current technology environment yields better results at half the cost and duration. Here are some recommendations to make such changes happen.

Adopt a workflow solution that knits front-end activities with back-end operations


While most billing systems have rudimentary workflow capabilities, their primary purpose is to support billing transactions only. The limitations of these basic workflows, which coordinate transactions between insurance eligibility and pre-authorization, post-service documentation requirements and the billing process, become stark given the urgency of the front-end processes and the nature of requirements across the payor spectrum. A workflow that takes the various payor rules into consideration increases first-time billing accuracy.

Embed analytics tied to the overall process improvement life-cycle


Given the dynamic nature of the industry, we have found that what worked six months ago may not work today. A closer analysis of denial data will show that while aggregates continue to be fairly static, at a payor level there is significant variance. It is important to link the changes that we see at a payor level to the front-end processes. Changes in payor behavior should be reflected in the way providers handle insurance eligibility documentation and approvals.

Readiness to experiment and innovate to drive efficiency benefits


Several innovations around RCM solutions have become obsolete within a few years. We see new models of electronic claims submission (ICD-10, etc.), payor portals and patient portals witnessing a fast pace of change. As seen in other industries, the model of a core billing platform tied to the provider’s EHR must remain constant, but everything else must be evaluated for efficacy on an ongoing basis.

A model that builds upon continuous improvement with the above as the core agenda will lead to significant benefits and help provide “world-class” performance without incurring both high cost and long lead time.

IBM has combined several of its offerings in the healthcare industry, including strategy, consulting and outsourcing, business process modeling and smarter analytics, to drive towards targeted business outcomes.

Tuesday, 17 April 2018

Securing Containerized Workloads in IBM Cloud Using Aporeto

This blog was co-authored with Amir Sharif, co-founder of Aporeto.  We’re excited to bring Aporeto’s capabilities to IBM Cloud Container Service, providing choice and flexibility to our users.

IBM Cloud


IBM Cloud (formerly IBM Bluemix) provides users with a variety of compute choices as well as over 170 IBM and third-party services. IBM Cloud Container Service combines Docker and Kubernetes to deliver powerful tools, an intuitive user experience, and built-in security and isolation to enable rapid delivery of applications all while leveraging Cloud Services including cognitive capabilities from Watson.

Aporeto


Aporeto is a Zero Trust security solution for microservices, containers and cloud. Fundamental to Aporeto’s approach is the principle that everything in an application is accessible to everyone and could be compromised at any time. Aporeto uses vulnerability data, identity context, threat monitoring and behavior analysis to build and enforce authentication, authorization and encryption policies for applications. With Aporeto, enterprises implement a uniform security policy decoupled from the underlying infrastructure, enabling workload isolation, API access control and application identity management across public, private or hybrid cloud.

Because Aporeto transparently binds to application components to provide them with identity, the result is security independent from infrastructure and network and reduction of complexity at any scale on any cloud.


Aporeto is simple to deploy and operate:

1. Pick an application and visualize it;
2. Generate and simulate security policy;
3. Enforce the security policy.

You can visualize the application of your choice by deploying Aporeto as a Kubernetes DaemonSet. If you control the virtual machines on which your application components run, you may also deploy Aporeto as a Docker container or a userland process.

Aporeto auto-generates application security policy by ingesting Kubernetes Network Policies.  You also have the option of leveraging your application dependency graph that Aporeto creates to describe the application’s behavioral intent as policies.  In every case, you may audit and edit auto-generated policies and inject human wisdom when necessary.

Once you have policies, you may simulate their enforcement at runtime to evaluate the effects of your security policies without interrupting operations. When satisfied that your security policies are solid, you may lock down your application and protect it with a Zero Trust approach.

Because Aporeto untethers application security from the network and infrastructure, one key benefit of Aporeto’s approach for protecting your containers, microservices and cloud applications is that you can have a consistent security approach even in a hybrid or multi-cloud setting.  As you gain experience with Aporeto in a single cluster setting, you will quickly realize how easy it is to have a consistent security posture in multi-cluster and multi-cloud settings without any infrastructure or operational complexity.


Setting up a Kubernetes cluster in IBM Cloud


The first step is to create an IBM Cloud account. After you’ve successfully logged in, the left-hand navigation will take you to Containers.


Select the Kubernetes Cluster icon. We’re going to create a standard cluster below. To create a standard cluster, set the following parameters:

◈ Cluster name
◈ Kubernetes version
◈ Datacenter location
◈ Machine type – a flavor with pre-defined resources per worker node in your cluster
◈ Number of workers – 1 to n based on capacity requirements, and can be scaled up or down after the cluster is running
◈ Private and Public VLAN – choose networks for worker nodes (we’ll create for you if you don’t have any yet)
◈ Hardware – clusters and worker nodes are always single-tenant and isolated to you, but you can choose the level of isolation to meet your needs (shared workers have multi-tenant hypervisor and hardware whereas dedicated worker nodes are single-tenant down to the hardware level)


To create a cluster from the command line, use the following command:

bx cs cluster-create --name <cluster_name> --location <location> --workers 2 --machine-type u1c.2x4 --hardware shared --public-vlan <public_vlan_id> --private-vlan <private_vlan_id>

Deploying Aporeto


You can install Enforcerd as a Kubernetes daemonset using the docker image. This section explains how to install, register, and run enforcerd as a Kubernetes daemonset.

Prerequisite – Account Registration

Prior to following this guide to install the Aporeto Enforcer on your Linux and Kubernetes compute platforms, register your account at https://console.aporeto.com/register/.  Once your registration has been accepted, you will receive an email to activate your account along with instructions on accessing the Aporeto Service.

Install apoctl

apoctl is the command line interface (CLI) that allows you to interact with the Platform. Make sure you have it installed correctly before going further.

apoctl is a self-contained binary that runs on most Linux distributions.

Install apoctl on Linux

% sudo curl -o /usr/bin/apoctl https://download.aporeto.com/files/apoctl/linux/apoctl
% sudo chmod 755 /usr/bin/apoctl

Install apoctl on macOS

% sudo curl -o /usr/local/bin/apoctl https://download.aporeto.com/files/apoctl/darwin/apoctl
% sudo chmod 755 /usr/local/bin/apoctl

Get an authentication token

In order for apoctl to perform actions on your behalf, you must provide it with a token. apoctl gets its token by reading the content of the $APOCTL_TOKEN environment variable. You can override this variable at any time by using the --token or -t parameter.

To get a token using your Aporeto account, run the following command:

% apoctl auth aporeto --account <your-account-name> -e
Aporeto account password: <type your password>

Video Overview

https://youtu.be/GDRKoxIqwp4 (install via command-line)

https://youtu.be/NmcyrIUIc3k (install via web interface)

Installation Procedure

Aporeto automates authenticating & authorizing your Kubernetes clusters via secrets/certificates and adds a Kubernetes-specific agent.

◈ kubesquall runs as a replicaset and reads events and information from Kubernetes.
◈ enforcerd runs as a daemonset and enforces security policies on each Kubernetes node.

Register your Kubernetes cluster in the Platform

You need to declare your Kubernetes cluster in the Aporeto Platform first. This will install various policies and an Enforcer Profile, and will generate a bundle you can use to deploy everything in a few seconds.

% apoctl account create-k8s-cluster my-first-cluster
Kubernetes cluster created in namespace /<your-account-name>
Kubernetes configuration bundle written in ./my-first-cluster.tgz

You can see that apoctl created a tgz bundle containing everything you need to securely install Enforcerd and kubesquall on your Kubernetes cluster.

The downloaded tgz file is keyed to a single Kubernetes cluster. Do not apply this file to more than one Kubernetes cluster. To secure multiple Kubernetes clusters, repeat these steps for each one of them.

Deploy Enforcerd

First, extract the content of the archive file.

% tar -xzf my-first-cluster.tgz

Then, run kubectl create on all of the yaml files from the archive file. This will trigger the automatic deployment on Kubernetes.

% kubectl create \
-f aporeto-secrets.yaml \
-f aporeto-cm.yaml \
-f aporeto-enforcer.yaml \
-f aporeto-kubesquall.yaml

configmap “aporeto-cm” created
daemonset “aporeto-enforcer” created
replicaset “aporeto-kubesquall” created
secret “aporeto-secrets” created

You can make sure everything is up and running by checking on the running pods on the kube-system namespace.

% kubectl get pods -n kube-system | grep aporeto

NAME                       READY   STATUS    RESTARTS   AGE
aporeto-enforcer-8lr88     2/2     Running   0          1m
aporeto-enforcer-qddtq     2/2     Running   0          1m
aporeto-enforcer-v848b     2/2     Running   0          1m
aporeto-kubesquall-d9tgj   1/1     Running   0          1m

Verify Enforcerd is running


You should be able to see the Enforcerd instance in the running state in the Aporeto web interface, under the Enforcers section.


Congratulations! Enforcerd is now running correctly as a Kubernetes daemonset!  You can now view the Platform page in the Aporeto web interface to visualize your services, their contextual identity, and network flows.


Monday, 16 April 2018

Get Started with Streaming Analytics + Message Hub

Message Hub provides a simple communication mechanism built on Apache Kafka, enabling communication between loosely coupled Bluemix services. This article shows how to communicate with Message Hub from the Streaming Analytics Bluemix service using the messaging toolkit.

Setup


◈ Create a Message Hub service on Bluemix

◈ Download the latest streamsx.messagehub toolkit. This article will use its MessageHubFileSample.

◈ Install the MessageHub toolkit to Streams Studio by following the procedure in Adding toolkit locations.

Creating a topic


In Message Hub, messages are transported through feeds called topics. Producers write messages to topics, and consumers read from topics.
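
Although this article drives Message Hub from Streams, the same produce/consume pattern can be sketched with a plain Kafka client. The following hedged Python example uses the third-party kafka-python package and assumes the service credentials JSON exposes kafka_brokers_sasl, user and password fields:

import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# credentials copied from the Message Hub "Service Credentials" tab
creds = json.load(open("messagehub.json"))

common = dict(
    bootstrap_servers=creds["kafka_brokers_sasl"],
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username=creds["user"],
    sasl_plain_password=creds["password"],
)

# producer: write a message to the "test" topic
producer = KafkaProducer(**common)
producer.send("test", b"hello from a plain Kafka client")
producer.flush()

# consumer: read messages back from the same topic
consumer = KafkaConsumer("test", auto_offset_reset="earliest", **common)
for message in consumer:
    print(message.value)
    break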

The Message Hub Bluemix dashboard provides topic management tools. To create a topic, click the plus sign, enter “test”, and click “Create topic”.


We are now ready to use this new topic from Streams.

Using Streams to produce and consume messages


Now we will use the MessageHubFileSample Streams application to produce and consume messages.

Import MessageHubFileSample into Streams Studio from the downloaded toolkit directory (samples/MessageHubFileSample). The application should build successfully. Instructions for importing Streams applications can be found in Importing SPL projects.

We still need to tell the application where to find our Message Hub service.

1. Navigate to Message Hub’s Bluemix dashboard and click the “Service Credentials” tab. 


2. Copy the credentials JSON and paste it into MessageHubFileSample’s /etc/messagehub.json file, replacing the placeholder comment. The MessageHubFileSample Streams application contains logic to both send and receive messages.


◈ The “producer” part of the Streams graph (Beacon_1 → MessageHubProducer_2) uses a MessageHubProducer operator to send messages to the topic named “test” every 0.2 seconds.

◈ The “consumer” part (MessageHubConsumer_3 → Custom_4) retrieves messages from Kafka using the MessageHubConsumer operator and prints them to the console.

Build MessageHubFileSample using its Distributed Build configuration so that you can run it on your Streaming Analytics service on Bluemix.

Streams and Message Hub in the Cloud


Create a Streaming Analytics service on Bluemix – See “Finding the service” section of Introduction to Bluemix Streaming Analytics.

Building MessageHubFileSample creates a .sab file (Streams application bundle) in your workspace directory: workspace/MessageHubFileSample/output/com.ibm.streamsx.messagehub.sample.MessageHubFileSample/BuildConfig/com.ibm.streamsx.messagehub.sample.MessageHubFileSample.sab. This file includes all necessary information for the Streaming Analytics Bluemix service to run the Streams application in the cloud.

Upload the .sab file using the Streaming Analytics console.

1. Head to the Streaming Analytics service dashboard in Bluemix and click “Launch” to launch the Streams console.

2. Click “Submit job” under the “play icon” dropdown in the top-right of the console

3. Browse for the com.ibm.streamsx.messagehub.sample.MessageHubFileSample.sab file that you built, and click Submit.


The Streams application is working properly if the Streams console’s graph view shows that all operators are healthy (green circle).


You can also view the messages being printed by Custom_4 in the Streams log.

1. Navigate to the Streams console log viewer on the far left.
2. Expand the navigation tree and highlight the PE that has the Custom_4 operator.
3. Select the “Console Log” tab.
4. Click “Load console messages”.


If you don’t see any messages being logged, ensure that only one instance of the job is running. You can only have one Kafka consumer per topic in each consumer group.