Friday, 20 April 2018

Setting up IBM Cloud App ID with your Active Directory Federation Service

We launched our newest IBM Cloud App ID feature, SAML 2.0 Federation. This feature allows you to easily manage user identities in your B2E apps while authenticating users through existing enterprise flows and certified user repositories. In this blog we will use a private Active Directory Federation Services (AD FS) deployment as the example identity provider and show how a developer can configure both App ID and AD FS so that:

◈ AD FS authenticates app users
◈ App ID federates and manages user identities

App ID allows developers to easily add authentication, authorization and user profile services to apps and APIs running on IBM Cloud. With App ID SDKs and APIs, you can get a sign-in flow working in minutes, enable social log-in through Google and Facebook, and add email/password sign-in. The App ID User Profiles feature can be used to store information about your users, such as their app preferences. In short, App ID ensures that your app can be used only by authorized users and that those users have access only to what they should. The app experience is custom, personalized and, most importantly, secure.

SAML 2.0 Federation Architecture


Before we begin, we should first review the architecture and flow of a federation based enterprise login and SSO using the SAML 2.0 framework. Here, AD FS is the identity provider that provides enterprise identity and access management (IAM).


Federation-based enterprise login and SSO using SAML 2.0

1. Application user opens an application deployed on cloud or invokes a protected cloud API.
2. App ID automatically redirects the user to the Enterprise IAM identity provider.
3. The user is challenged to sign-in using enterprise credentials and familiar UI/UX.
4. On successful login Enterprise IAM identity provider redirects user back supplying SAML assertions.
5. App ID creates access and identity tokens representing user’s authorization and authentication and returns them to the application.
6. Application reads the tokens to make business decisions as well as invoke downstream protected resources.

Configuration Steps


Before we begin:

You must have:

◈ An IBM Cloud account, signed in through a browser
◈ An App ID instance
◈ A local AD FS server set up

Step 1

Sign in to your IBM Cloud, browse to the catalog and create an App ID instance. Under the Identity Providers menu, select SAML 2.0 Federation. 


Step 2

Click Download SAML Metadata file. This downloads a file named appid-metadata.xml.


Let’s review some of the parameters defined in the metadata file. We need these parameters to configure the identity provider; a small command-line sketch for extracting them follows the list.

◈ <EntityDescriptor> identifies the application for which the SAML identity provider is being set up. EntityID is the unique identifier of the application.
◈ <SPSSODescriptor> describes the service provider (SP) requirements. App ID requires the protocol to be SAML 2.0, and assertions sent to App ID must be signed.
◈ <NameIDFormat> defines how App ID and the identity provider uniquely identify subjects. App ID uses emailAddress, so the identity provider needs to associate the username with an email address.
◈ <AssertionConsumerService> describes the protocol and endpoint where the application expects to receive the authentication token.
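
If you prefer the command line, here is a minimal sketch for pulling these values out of the downloaded file. It assumes only a POSIX shell and grep, and the patterns are deliberately loose because namespace prefixes vary; treat the output as a quick check rather than a parser.

# Service provider entityID (the unique identifier of the application)
grep -o 'entityID="[^"]*"' appid-metadata.xml

# NameID format App ID expects (emailAddress)
grep -o 'NameIDFormat>[^<]\+' appid-metadata.xml

# Binding and endpoint where App ID expects the SAML response
grep -o '<[^>]*AssertionConsumerService[^>]*>' appid-metadata.xml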

Step 3

Open the AD FS Management application. To create a relying party trust instance, open the wizard by selecting Relying Party Trusts > Add Relying Party Trust… .


◈ Select the Claims Aware application type.
◈ Select Import data about the relying party from a file and browse to where you saved the App ID metadata file.
◈ Set your Display Name and Access Control Policy, and click Finish.
◈ On the last page, you have the option of configuring your custom claims policy by choosing Configure claims issuance policy for this application. This wizard can also be opened independently. Step 4 covers which custom attributes we can set and how to set them.

Step 4

App ID expects all SAML responses to include a NameID attribute, which App ID and the identity provider use to uniquely identify subjects. This attribute must also conform to the format specified in App ID’s metadata file (downloaded in Step 2). To set up this mapping in AD FS, a custom rule must be created.

Open the Edit Claim Issuance Policy for App ID wizard and add the following rules.

◈ Your first claim rule template should be Send LDAP Attributes as Claims. This rule maps the AD attribute E-mail Address to a similarly named outgoing claim type.
     ◈ Give the rule a name such as LDAP Email rule
     ◈ Set Attribute Store to Active Directory
     ◈ Map the LDAP attribute E-mail Address to the outgoing claim type E-mail Address
     ◈ Click OK


◈ Your second claim rule template should be Send Claims Using a Custom Rule. This rule creates the NameID, the unique identifier, in the format App ID expects.
     ◈ Name the rule Custom email rule
     ◈ Copy and paste the following custom rule and then click OK

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress");


Step 5

App ID also supports name, email, picture, and locale custom attributes in the SAML assertions it receives from the identity provider. App ID can only consume these attributes if they are in the following format:

<Attribute NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" Name="name"><AttributeValue>Ada Lovelace</AttributeValue></Attribute>

NameFormat is the way that App ID interprets the Name field. The format specified, urn:oasis:names:tc:SAML:2.0:attrname-format:basic, is also the default format if no format is provided.

To add these additional rules, open the Edit Claim Issuance Policy for App ID wizard and add the following rule(s).

◈ Your claim rule template should be Send Claims Using a Custom Rule. This rule issues the name attribute in the format App ID expects.
     ◈ Name the rule Custom name rule
     ◈ Copy and paste the following custom rule and then click OK

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types = ("name"), query = ";displayName;{0}", param = c.Value);

Similar rules can be added for email, picture, and locale attributes.

Step 6

You now need to obtain the AD FS metadata file  to finish configuring your App ID instance.

Step 6.1

You must first locate the file URL:

◈ Open the AD FS management console
◈ In the AD FS folder, expand Services and click Endpoints.
◈ Locate the FederationMetadata.xml file.


Step 6.2

Use a browser to navigate to that URL on the AD FS server and download the file. For example: https://localhost/FederationMetadata/2007-06/FederationMetadata.xml
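
Alternatively, you can fetch the file from a shell on a machine that can reach the AD FS server. This is only a sketch: the host name is a placeholder for your own AD FS server, and -k is included in case it uses a self-signed certificate.

curl -k -o FederationMetadata.xml https://<your-adfs-host>/FederationMetadata/2007-06/FederationMetadata.xml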

Step 7

Finish configuring App ID by using the information in FederationMetadata.xml. 

◈ Set entityID to the attribute of the same name from the metadata file.
◈ Set Sign-in URL to the URL value for the SingleSignOnService attribute in the metadata file.
◈ Primary Certificate should be set to the base64 encoded signing certificate string located under KeyDescriptor use="signing"

Save the configuration data.

Step 8

You can now test your configuration by clicking the Test button. This initiates an authentication request to your AD FS and opens the familiar authentication UI. Make sure you have saved your configuration before testing; otherwise, Test will not work.

Once you have entered the credential information and successfully authenticated with AD FS, you should be presented with an App ID access token as well as an identity token.


You have successfully configured your App ID instance using an Active Directory Federation Service!

Make sure you check out some of our upcoming blog articles in our App ID SAML series:

◈ Setting up IBM Cloud App ID with Azure Active Directory
◈ Setting up IBM Cloud App ID with Ping One

Thursday, 19 April 2018

Transforming revenue cycle management can be simpler, faster and inexpensive

Embedding technology enablers, analytics and process improvement into revenue cycle processes improves financial performance while reducing the cost of RCM operations by as much as 30%-70%

The revenue cycle management (RCM) process for healthcare providers is complex, and inefficiencies exist across activities like insurance verification, billing, cash collections or denials management. With increase in co-pays, audits and scrutiny of claims, the inefficiencies of the process are beginning to eat into the profitability and financial well-being of several healthcare providers.


Having reviewed healthcare clients’ payor denial data over five years, we found that 40%-70% of issues arise in the front-end benefit verification and documentation processes. Another 20% are caused by the way the billing process is implemented, and barely 10%-20% fall in the post-billing collections life-cycle.


Chart 1: Reasons for claims denial

On the patient co-pay end, two reasons often drive almost 80% of process inefficiencies. The first is a lack of understanding of the patient obligation early in the billing life-cycle; the other is the delay between the actual service delivery and the billing or reimbursement request. Patients are often puzzled when they receive their bills, and since the date of service is long past, they have little motivation to address the billing immediately.

There is often a temptation to engage in a significant technology transformation project (that is, “let’s get a new system”) to address these issues. This may involve changing the core billing system and tightening the processes around the new system. Unfortunately, in a landscape of over 400 EHR vendors with limited end-to-end capabilities, only a few technology solutions can address most of the critical issues across the RCM life-cycle. On top of the time such a transformation takes to pay off, the very dynamic nature of the industry (for example, regulatory change, new innovation, adoption of new standards by payors) further lengthens the time to benefit. We often find that providers come out with a solution after 18 months that narrows a five-year gap to a two-year gap but never closes it.

In our experience, an approach that adopts agile principles and focuses on incremental change within the current technology environment yields better results at half the cost and duration. Here are some recommendations to make such changes happen.

Adopt a workflow solution that knits front-end activities with back-end operations


While most billing systems have rudimentary workflow capabilities, their primary purpose is to support billing transactions only. Given the urgency of the front-end processes and the variety of requirements across the payor spectrum, the limitations of such basic workflows, which coordinate transactions between insurance eligibility and pre-authorization, the post-service documentation requirements and the billing process, become stark. A workflow that takes the various payor rules into consideration increases first-time billing accuracy.

Embed analytics tied to the overall process improvement life-cycle


Given the dynamic nature of the industry, we have found that what worked six months ago may not work today. A closer analysis of denial data will show that while aggregates continue to be fairly static, there is significant variance at the payor level. It is important to link the changes that we see at a payor level to the front-end processes. Changes in payor behavior should be reflected in the way providers handle insurance eligibility documentation and approvals.

Readiness to experiment and innovate to drive efficiency benefits


Several innovations around RCM solutions have become obsolete within a few years. We see new models of electronic claims submission (ICD-10, etc.), payor portals and patient portals witnessing a fast pace of change. As in other industries, the model of a core billing platform tied to the provider’s EHR must remain constant, but everything else must be evaluated for efficacy on an ongoing basis.

A model that builds upon continuous improvement with the above as the core agenda will lead to significant benefits and help deliver “world-class” performance without incurring high costs or long lead times.

IBM has combined several of its offerings in the healthcare industry, including strategy, consulting and outsourcing, business process modeling and smarter analytics, to drive towards targeted business outcomes.

Tuesday, 17 April 2018

Securing Containerized Workloads in IBM Cloud Using Aporeto

This blog was co-authored with Amir Sharif, co-founder of Aporeto.  We’re excited to bring Aporeto’s capabilities to IBM Cloud Container Service, providing choice and flexibility to our users.

IBM Cloud


IBM Cloud (formerly IBM Bluemix) provides users with a variety of compute choices as well as over 170 IBM and third-party services. IBM Cloud Container Service combines Docker and Kubernetes to deliver powerful tools, an intuitive user experience, and built-in security and isolation to enable rapid delivery of applications all while leveraging Cloud Services including cognitive capabilities from Watson.

Aporeto


Aporeto is a Zero Trust security solution for microservices, containers and cloud. Fundamental to Aporeto’s approach is the principle that everything in an application is accessible to everyone and could be compromised at any time. Aporeto uses vulnerability data, identity context, threat monitoring and behavior analysis to build and enforce authentication, authorization and encryption policies for applications. With Aporeto, enterprises implement a uniform security policy decoupled from the underlying infrastructure, enabling workload isolation, API access control and application identity management across public, private or hybrid cloud.

Because Aporeto transparently binds to application components to provide them with identity, the result is security independent from infrastructure and network and reduction of complexity at any scale on any cloud.


Aporeto is simple to deploy and operate:

1. Pick an application and visualize it;
2. Generate and simulate security policy;
3. Enforce the security policy.

You can visualize the application of your choice by deploying Aporeto as a Kubernetes DaemonSet. If you control the virtual machines on which your application components run, you may also deploy Aporeto as a Docker container or a userland process.

Aporeto auto-generates application security policy by ingesting Kubernetes Network Policies.  You also have the option of leveraging your application dependency graph that Aporeto creates to describe the application’s behavioral intent as policies.  In every case, you may audit and edit auto-generated policies and inject human wisdom when necessary.

Once you have policies, you may simulate their enforcement at runtime to evaluate the effects of your security policies without interrupting operations. When satisfied that your security policies are solid, you may lock down your application and protect it with a Zero Trust approach.

Because Aporeto untethers application security from the network and infrastructure, one key benefit of Aporeto’s approach for protecting your containers, microservices and cloud applications is that you can have a consistent security approach even in a hybrid or multi-cloud setting.  As you gain experience with Aporeto in a single cluster setting, you will quickly realize how easy it is to have a consistent security posture in multi-cluster and multi-cloud settings without any infrastructure or operational complexity.


Setting up a Kubernetes cluster in IBM Cloud


The first step is to create an IBM Cloud account. After you’ve successfully logged in, the left-hand navigation will take you to Containers.


Select the Kubernetes Cluster icon. We’re going to create a standard cluster below. To create a standard cluster, set the following parameters:

◈ Cluster name
◈ Kubernetes version
◈ Datacenter location
◈ Machine type – a flavor with pre-defined resources per worker node in your cluster
◈ Number of workers – 1 to n based on capacity requirements, and can be scaled up or down after the cluster is running
◈ Private and Public VLAN – choose networks for worker nodes (we’ll create for you if you don’t have any yet)
◈ Hardware – clusters and worker nodes are always single-tenant and isolated to you, but you can choose the level of isolation to meet your needs (shared workers have multi-tenant hypervisor and hardware whereas dedicated worker nodes are single-tenant down to the hardware level)


To create a cluster from the command line, use the following command:

bx cs cluster-create --name <cluster_name> --location <location> --workers 2 --machine-type u1c.2x4 --hardware shared --public-vlan <public_vlan_id> --private-vlan <private_vlan_id>
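
Provisioning continues in the background after the command returns. A quick way to check progress from the same CLI (a sketch; substitute the name you passed to --name):

bx cs clusters
bx cs workers <cluster_name>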

Deploying Aporeto


You can install Enforcerd as a Kubernetes daemonset using the docker image. This section explains how to install, register, and run enforcerd as a Kubernetes daemonset.

Prerequisite – Account Registration

Prior to following this guide to install the Aporeto Enforcer on your Linux and Kubernetes compute platforms, register your account at https://console.aporeto.com/register/.  Once your registration has been accepted, you will receive an email to activate your account along with instructions on accessing the Aporeto Service.

Install apoctl

apoctl is the command line interface (CLI) that allows you to interact with the Platform. Make sure you have it installed correctly before going further.

apoctl is a self-contained binary that runs on most Linux distributions.

Install apoctl on Linux

% sudo curl -o /usr/bin/apoctl https://download.aporeto.com/files/apoctl/linux/apoctl
% sudo chmod 755 /usr/bin/apoctl

Install apoctl on macOS

% sudo curl -o /usr/bin/apoctl https://download.aporeto.com/files/apoctl/darwin/apoctl
% sudo chmod 755 /usr/bin/apoctl

Get an authentication token

In order for apoctl to perform actions on your behalf, you must provide it with a token. apoctl gets its token by reading the content of the $APOCTL_TOKEN environment variable. You can override this variable at any time by using the --token or -t parameter.

To get a token using your Aporeto account, run the following command:

% apoctl auth aporeto --account <your-account-name> -e
Aporeto account password: <type your password>

Video Overview

https://youtu.be/GDRKoxIqwp4 (install via command-line)

https://youtu.be/NmcyrIUIc3k (install via web interface)

Installation Procedure

Aporeto automates authenticating & authorizing your Kubernetes clusters via secrets/certificates and adds Kubernetes-specific agents:

◈ kubesquall runs as a replicaset and reads events and information from Kubernetes.
◈ enforcerd runs as a daemonset and enforces security policies on each Kubernetes node.

Register your Kubernetes cluster in the Platform

You need to declare your Kubernetes cluster in the Aporeto Platform first. This installs various policies and an Enforcer Profile, and generates a bundle you can use to deploy everything in a few seconds.

% apoctl account create-k8s-cluster my-first-cluster
Kubernetes cluster created in namespace /<your-account-name>
Kubernetes configuration bundle written in ./my-first-cluster.tgz

You can see that apoctl created a tgz bundle containing everything you need to securely install Enforcerd and kubesquall on your Kubernetes cluster.

The downloaded tgz file is keyed to a single Kubernetes cluster. Do not apply this file to more than one Kubernetes cluster. To secure multiple Kubernetes clusters, repeat these steps for each one of them (a one-line sketch follows).
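
A minimal sketch of that repetition; the cluster names are illustrative, and each run writes its own bundle to the current directory:

% for cluster in staging-cluster prod-cluster; do apoctl account create-k8s-cluster "$cluster"; done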

Deploy Enforcerd

First, extract the content of the archive file.

% tar -xzf my-first-cluster.tgz

Then, run kubectl create on all of the yaml files from the archive file. This will trigger the automatic deployment on Kubernetes.

% kubectl create \
-f aporeto-secrets.yaml \
-f aporeto-cm.yaml \
-f aporeto-enforcer.yaml \
-f aporeto-kubesquall.yaml

configmap “aporeto-cm” created
daemonset “aporeto-enforcer” created
replicaset “aporeto-kubesquall” created
secret “aporeto-secrets” created

You can make sure everything is up and running by checking the pods in the kube-system namespace.

% kubectl get pods -n kube-system | grep aporeto

NAME                       READY   STATUS    RESTARTS   AGE
aporeto-enforcer-8lr88     2/2     Running   0          1m
aporeto-enforcer-qddtq     2/2     Running   0          1m
aporeto-enforcer-v848b     2/2     Running   0          1m
aporeto-kubesquall-d9tgj   1/1     Running   0          1m

Verify Enforcerd is running


You should be able to see the Enforcerd instance in the running state in the Aporeto web interface, under the Enforcers section.


Congratulations! Enforcerd is now running correctly as a Kubernetes daemonset!  You can now view the Platform page in the Aporeto web interface to visualize your services, their contextual identity, and network flows.


Monday, 16 April 2018

Get Started with Streaming Analytics + Message Hub

Message Hub provides a simple communication mechanism built on Apache Kafka, enabling communication between loosely coupled Bluemix services. This article shows how to communicate with Message Hub from the Streaming Analytics Bluemix service using the messaging toolkit.

Setup


◈ Create a Message Hub service on Bluemix

◈ Download the latest streamsx.messagehub toolkit. This article will use its MessageHubFileSample.

◈ Install the MessageHub toolkit to Streams Studio by following the procedure in Adding toolkit locations.

Creating a topic


In Message Hub, messages are transported through feeds called topics. Producers write messages to topics, and consumers read from topics.

The Message Hub Bluemix dashboard provides topic management tools. To create a topic click the plus sign, enter “test”, and click “Create topic”.


We are now ready to use this new topic from Streams.

Using Streams to produce and consume messages


Now we will use the MessageHubFileSample Streams application to produce and consume messages.

Import MessageHubFileSample into Streams Studio from the downloaded toolkit directory (samples/MessageHubFileSample). The application should build successfully. Instructions for importing Streams applications can be found in Importing SPL projects.

We still need to tell the application where to find our Message Hub service.

1. Navigate to Message Hub’s Bluemix dashboard and click the “Service Credentials” tab. 


2. Copy the credentials JSON and paste it into MessageHubFileSample’s /etc/messagehub.json file, replacing the placeholder comment (see the sketch below).
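
For reference, the pasted contents end up looking roughly like the sketch below (run from the MessageHubFileSample project directory). The field names and broker host are illustrative placeholders rather than the exact schema; copy the JSON from your own Service Credentials tab verbatim instead of retyping it.

cat > etc/messagehub.json <<'EOF'
{
  "kafka_brokers_sasl": [ "<broker-host>:9093" ],
  "user": "<service-credentials-user>",
  "password": "<service-credentials-password>"
}
EOF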

The MessageHubFileSample Streams application contains logic to both send and receive messages.

◈ The “producer” part of the Streams graph (Beacon_1 → MessageHubProducer_2) uses a MessageHubProducer operator to send messages to the topic named “test” every 0.2 seconds.

◈ The “consumer” part (MessageHubConsumer_3 → Custom_4) retrieves messages from Kafka using the MessageHubConsumer operator and prints them to the console.

Build MessageHubFileSample with a Distributed Build so that you can run it on your Streaming Analytics service on Bluemix.

Streams and Message Hub in the Cloud


Create a Streaming Analytics service on Bluemix – See “Finding the service” section of Introduction to Bluemix Streaming Analytics.

Building MessageHubFileSample creates a .sab file (Streams application bundle) in your workspace directory: workspace/MessageHubFileSample/output/com.ibm.streamsx.messagehub.sample.MessageHubFileSample/BuildConfig/com.ibm.streamsx.messagehub.sample.MessageHubFileSample.sab. This file includes all necessary information for the Streaming Analytics Bluemix service to run the Streams application in the cloud.

Upload the .sab file using the Streaming Analytics console.

1. Head to the Streaming Analytics service dashboard in Bluemix and click “Launch” to launch the Streams console.

2. Click “Submit job” under the “play icon” dropdown in the top-right of the console

3. Browse for the com.ibm.streamsx.messagehub.sample.MessageHubFileSample.sab file that you built, and click Submit.


The Streams application is working properly if the Streams console’s graph view shows that all operators are healthy (green circle).


You can also view the messages being printed by Custom_4 in the Streams log.

1. Navigate to the Streams console log viewer on the far left.
2. Expand the navigation tree and highlight the PE that has the Custom_4 operator.
3. Select the “Console Log” tab.
4. Click “Load console messages”.


If you don’t see any messages being logged, ensure that only one instance of the job is running. You can only have one Kafka consumer per topic in each consumer group.

Saturday, 14 April 2018

Bringing Continuous Delivery with Codefresh to Kubernetes on IBM Cloud

As a certified Kubernetes provider, IBM is one of the leaders in hosted and managed Kubernetes. With integrations into Watson, upstream Kubernetes, and options to run bare metal, IBM has been a popular choice for enterprise customers. One of the most critical components of adopting Kubernetes is providing a control plane to engineers so they can deploy what they need; this is where Codefresh comes in (but we’ll get to that later).

IBM Cloud Container Service (ICCS) launched as a managed Kubernetes offering in May 2017 to deliver powerful tools, an intuitive user experience, and built-in security and isolation to enable rapid delivery of applications all while leveraging Cloud Services.  ICCS ensures a completely native user experience with K8s capabilities.  Additionally, IBM is adding capabilities to the container service including simplified cluster management, container security and isolation choices, ability to design your own cluster, leverage other IBM Cloud services (170+ in catalog), and integrated operational tools or support to bring your own tools to ensure operational consistency with other deployments.

Before a container is shipped to a production Kubernetes cluster, we need to validate that it is:

◈ Secure
◈ Performant
◈ Working (unit tests)
◈ Functional within the entire application context (integration)

This is where a continuous delivery/continuous deployment pipeline can have a big impact on how effective engineers are. Adopting containers with continuous delivery has led to some staggering statistics.


Continuous Delivery and Kubernetes Pipelines Help Teams Deliver Higher-Quality Code, Faster


Kubernetes already unlocks a ton of superpowers that were each engineering feats in their own right before: failover, high availability, scalability, microservices, infrastructure as code and so much more. But without pairing it with Kubernetes pipelines, it’s hard to take full advantage of them. Infrastructure as code is great if you can validate it and deliver it like code. Microservices are lovely, but without meaningful integration and functional testing, teams can easily get overwhelmed. And without a central control plane for shipping code from individuals into production, it’s easy to get lost.

This is where Codefresh pairs so well with Kubernetes on IBM Cloud. Codefresh is a DevOps platform with Kubernetes pipelines. It integrates with IBM Cloud to make it easier for teams to adopt and deploy containers into production and works with IBM Cloud Container Registry for storing images.

What can Codefresh do?


Codefresh has out-of-the-box steps for working with Kubernetes and other cloud native technologies like Helm. It’s easy to create a pipeline that builds Docker images, deploys them to on-demand environments for testing and validation and then on into production. Codefresh includes dashboards for Kubernetes, Helm, and managing images.

How to setup a Kubernetes Pipeline with Codefresh and IBM Cloud


Today I’ll show you how we’ve integrated Codefresh with IBM Cloud to seamlessly deploy containers to Kubernetes.

Step 1: Create your clusters on IBM Cloud


This is easy to do in the UI. You can create one free cluster and then pick from an inexpensive plan for building something more robust.

Step 2: Configure Kubectl using IBM CLI


This assumes you’ve already installed the IBM Cloud CLI and kubectl.

Log in to Bluemix

  bx login   

Get cluster names

  bx cs clusters

Get the cluster configuration

  bx cs cluster-config [cluster name]

Run the generated export command. It will look something like this

export KUBECONFIG=/Users/dan/.bluemix/plugins/container-service/clusters/codefresh_ibm/kube-config-dal13-codefresh_ibm.yml
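
To confirm that kubectl is now pointed at the right cluster, a quick sanity check with standard kubectl commands (nothing Codefresh-specific):

kubectl config current-context
kubectl get nodes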

Step 3: Connect Codefresh to your cluster


Create your Codefresh account if you haven’t done so already. Codefresh SaaS supports logging in with GitHub, Bitbucket, or GitLab. If you’d like to connect to on-prem Git, use single sign-on, or need another setup, ping us for a free POC and we’ll get you set up.

Then click Kubernetes, then Add Cluster. Name the cluster whatever you like; then we’ll need to gather the connection info, namely the Host IP, the certificate, and the token.

Here’s how to get that information using your newly configured Kubectl.

Host IP

export CURRENT_CONTEXT=$(kubectl config current-context) && export CURRENT_CLUSTER=$(kubectl config view -o go-template="{{\$curr_context := \"$CURRENT_CONTEXT\" }}{{range .contexts}}{{if eq .name \$curr_context}}{{.context.cluster}}{{end}}{{end}}") && echo $(kubectl config view -o go-template="{{\$cluster_context := \"$CURRENT_CLUSTER\"}}{{range .clusters}}{{if eq .name \$cluster_context}}{{.cluster.server}}{{end}}{{end}}")

Certificate

echo $(kubectl get secret -o go-template='{{index .data "ca.crt" }}' $(kubectl get sa default -o go-template="{{range .secrets}}{{.name}}{{end}}"))

Token

echo $(kubectl get secret -o go-template='{{index .data "token" }}' $(kubectl get sa default -o go-template="{{range .secrets}}{{.name}}{{end}}"))

Add this into Codefresh and click Test. You should see a confirmation that the connection succeeded.


Click save, and you’ll be able to see the cluster and node status.


Now that your cluster is connected, we can reference it from any pipeline, deploy services, see what images are running, manage Helm packages and lots, lots, more.

Step 4: Connect your code repository and build an image


Still in Codefresh, click on “Repositories” in the top left menu and then “Add Repository”.


This will guide you through connecting a repo to Codefresh. You’ll want to bring your Dockerfile with you though there are some limited templates to help you get started. Once you’ve added your repo with your Dockerfile, click build to build your first Docker image.

You’ll see that there is a simple UI to help you add some tests and you have the option to switch to a YAML view of the pipeline to do more advanced configuration.

Step 5: Deploy your first service, and automate the pipeline


Now that we have an image and a cluster, it’s time to get that image deployed and to automate the build and deploy steps. Here you have a choice: keep the deployment configuration under version control, or manually deploy and update.

Manual Deployment


Now if you save and build, you’ll be able to start deploying to Kubernetes on IBM Cloud!

Optionally, you can click the YAML switch to see how this configuration looks in Codefresh YAML.

Step 6 (optional): Invite your team


Now that you’ve set up the cluster and a pipeline, invite your team to your account! They’ll be able to see what you’ve added and create their own pipelines for deployment.

Friday, 13 April 2018

Introducing IBM Cloud SQL Query

We are excited to announce that SQL Query is now publicly available in the IBM Cloud as a beta service. SQL Query supports using standard ANSI SQL to analyze CSV, Parquet, and JSON files stored in IBM Cloud Object Storage.

Because SQL Query operates in a serverless fashion, you do not have to worry about sizing a server of any kind: just author a SELECT statement and submit it.

SQL Query is tightly integrated in the IBM Cloud. For example:

◈ Its user interface is available from within the IBM Cloud console, and can be used to author and experiment with queries interactively.

◈ Its REST API is part of the IBM Cloud API, allowing for single sign on (SSO) across IBM Cloud Object Storage and SQL Query API calls.


A single query can reference any number of data sets. These data sets can be stored as CSV, JSON, or Parquet objects in one or more IBM Cloud Object Storage instances. SQL Query automatically infers the schema of the data sets before executing the query. You can use the full power of SQL to correlate, aggregate, transform, and filter data; merge data sets; carry out complex analytic computations; and more. The result of each query is written to an IBM Cloud Object Storage instance of your choice.


SQL Query complements IBM Cloud Object Storage perfectly because both are made for seamless elasticity. You can start with any data volume and grow at any rate to whatever volume you desire. IBM Cloud Object Storage charges only for the volume of data you have stored, and SQL Query charges only for the volume of data that you process.

Here is how easy it is to get started:

1. Provision an instance of IBM Cloud Object Storage, if you haven’t done so yet.
2. Provision an instance of IBM Cloud SQL Query. (Here is a short video for service provisioning)
3. Open the SQL Query console.
4. Select one of the samples in the top right of your screen and click the “Run” button.

There is no need to configure any server resources or data, because the samples use data that is provided out of the box, and the SQL Query service automatically creates a default target bucket for you. After a few seconds, you see the results of your query at the bottom of your screen. If you like, try running additional samples. Feel free to modify the samples to learn more about what the service can do.


After running a few of the samples, you can continue by querying your own data:

◈ If necessary, create a new bucket in your IBM Cloud Object Storage instance to hold your input data.
◈ Upload data to your bucket, or use another IBM Cloud service (such as the Streaming Analytics service) to add data to your bucket.

Then, write your own queries for your own data. A good starting point is always to explore the schema of your data and look at a few sample records. For this you can use this query pattern:

SELECT * from cos://<endpoint>/<bucket>/<data set prefix> STORED AS CSV LIMIT 10

Replace STORED AS CSV with STORED AS PARQUET or STORED AS JSON as necessary to reflect the format of your input data.

After you have tested your queries and have identified ones that you would like to use in an application or cloud solution, you can use the IBM Cloud REST API directly. This end-to-end demo video shows an example of how to use the API. If you are a Python developer, you can use the ibmcloudsql client package instead of the REST API. For example, you can use ibmcloudsql with a Jupyter notebook and combine it with powerful visualization libraries.
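
If you go the Python route, the ibmcloudsql client package is published on PyPI, so installing it is a one-liner (assuming a working pip):

pip install ibmcloudsql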

Thursday, 12 April 2018

Setting up IBM Cloud App ID with Ping One

We launched our latest IBM Cloud App ID feature, SAML 2.0 Federation. This feature allows you to easily manage user identities in your B2E apps while authenticating users through existing enterprise flows and certified user repositories. In this blog we will use Ping One (the IDaaS solution of Ping Identity) as an example identity provider and show how a developer can configure both App ID and Ping so that:

◈ Ping authenticates app users
◈ App ID federates and manages user identities

App ID allows developers to easily add authentication, authorization and user profile services to apps and APIs running on IBM Cloud. With App ID SDKs and APIs, you can get a sign-in flow working in minutes, enable social log-in through Google and Facebook, and add email/password sign-in. The App ID User Profiles feature can be used to store information about your users, such as their app preferences. In short, App ID ensures that your app can be used only by authorized users and that those users have access only to what they should. The app experience is custom, personalized and, most importantly, secure.

SAML 2.0 Federation Architecture


Before we begin, we should first review the architecture and flow of a federation-based enterprise login and SSO using the SAML 2.0 framework. Here, Ping One is the identity provider that provides enterprise identity and access management (IAM).


Federation-based enterprise login and SSO using SAML 2.0

1. Application user opens an application deployed on cloud or invokes a protected cloud API.
2. App ID automatically redirects the user to the Enterprise IAM identity provider.
3. The user is challenged to sign-in using enterprise credentials and familiar UI/UX.
4. On successful login Enterprise IAM identity provider redirects user back supplying SAML assertions.
5. App ID creates access and identity tokens representing user’s authorization and authentication and returns them to the application.
6. Application reads the tokens to make business decisions as well as invoke downstream protected resources.

Configuration Steps


Before we begin:

You must have:

◈ An IBM Cloud account, signed in through a browser
◈ An App ID instance
◈ A Ping Identity account with access to Ping One

Step 1

Sign in to your IBM Cloud, browse to the catalog and create an App ID instance. Under the Identity Providers menu, select SAML 2.0 Federation. 


Step 2

Click Download SAML Metadata file. This downloads a file named appid-metadata.xml.


Let’s review some of the parameters defined in the metadata file. We need these parameters to configure the identity provider.

◈ <EntityDescriptor> identifies the application for which the SAML identity provider is being set up. EntityID is the unique identifier of the application.

◈ <SPSSODescriptor> describes the service provider (SP) requirements. App ID requires the protocol to be SAML 2.0, and assertions sent to App ID must be signed.

◈ <NameIDFormat> defines how App ID and the identity provider uniquely identify subjects. App ID uses emailAddress, so the identity provider needs to associate the username with an email address.

◈ <AssertionConsumerService> describes the protocol and endpoint where the application expects to receive the authentication token.

Step 3

Open the Ping One Management console and add a New SAML Application.


◈ Enter the name and description of your application as requested by Ping and then click on Continue to Next Step.

◈ Under Application Configuration, select Upload Metadata option and upload appid-metadata.xml, the App ID metadata file you downloaded in Step 2. Once this file is uploaded, Ping will automatically provision the fields for Assertion Consumer Service (ACS) and Entity ID. 


◈ Make sure the protocol version selected is SAML v 2.0.
◈ Download the SAML Metadata file Ping provides, saml2-metadata-idp.xml. You will use this file to finish setting up App ID later.
◈ The rest of the application configuration fields are currently not required so you can click on Continue to Next Step.
◈ In the SSO Mapping Attribute section you can map attributes between App ID and Ping. We will cover this in more detail in Step 4.
◈ Finally click on Save and Publish.
◈ You have now finished setting up App ID as a SAML application in Ping.

Step 4

App ID also supports name, email, picture, and locale custom attributes in the SAML assertions it receives from the identity provider. App ID can only consume these attributes if they are in the following format:

<Attribute NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" Name="name"><AttributeValue>Ada Lovelace</AttributeValue></Attribute>

NameFormat is the way that App ID interprets the Name field. The format specified, urn:oasis:names:tc:SAML:2.0:attrname-format:basic, is also the default format if no format is provided.

To add these additional rules, select SSO Attribute Mapping > Add new attribute.


◈ Set Application Attribute to email
◈ Set Identity Bridge Attribute or Literal Value to Email
◈ Select Advanced and set Name Format to urn:oasis:names:tc:SAML:2.0:attrname-format:basic
◈ Save your attribute.


You can add similar rules for name, picture, and locale attributes.

Step 5

Finish configuring App ID by using the information in saml2-metadata-idp.xml. 

◈ Set entityID to the attribute value of EntityDescriptor entityID from the metadata file.
◈ Set Sign-in URL to the URL value for the SingleSignOnService Location attribute in the metadata file.
◈ Primary Certificate should be set to the base64-encoded signing certificate string (the X509Certificate element) located under KeyDescriptor use="signing". Ensure there is no whitespace at the beginning of each line; the sketch below shows one way to extract it cleanly.
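
Here is a minimal shell sketch for pulling that certificate out of saml2-metadata-idp.xml with leading whitespace stripped. It assumes xmllint is installed and that the element is named X509Certificate (namespace prefixes vary by vendor), so treat it as illustrative rather than authoritative.

xmllint --xpath "string(//*[local-name()='KeyDescriptor'][@use='signing']//*[local-name()='X509Certificate'])" saml2-metadata-idp.xml | sed 's/^[[:space:]]*//'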

Save the configuration data.


Step 6

You can now test your configuration by clicking the Test button. This initiates an authentication request to Ping. Make sure you have saved your configuration before testing; otherwise, Test will not work.


Once you have entered the credential information and successfully authenticated with Ping, you should be presented with an App ID access token as well as an identity token.


You have successfully configured your App ID instance using Ping One identity as a service!

Make sure you check out some of our upcoming blog articles in our App ID SAML series:

◈ Setting up IBM Cloud App ID with Azure Active Directory
◈ Setting up IBM Cloud App ID with Active Directory Federation Service

Monday, 9 April 2018

Improved Security & More: The latest for IBM Cloud Load Balancer

Our IBM Cloud Load Balancer is an infrastructure layer (IaaS) load balancer that distributes traffic “locally,” and can be configured via an API or a web UI. With high availability (HA) by default, it provides on-demand scalability, as well as usage-based billing for a fresh way to manage high-density traffic in the cloud, along with the ability to pay only for what you use.

This blog will explore new additions to the service, including:

•  Horizontal scaling
•  Internal-facing load balancer
•  Monitoring metrics
•  Cipher suite customization

Horizontal scaling


IBM Cloud Load Balancer scales up automatically when load increases (and scales down as load decreases). When the load balancer is created, it starts with two appliances, but the number of appliances can go up (to 16, as of this writing) as our monitoring system detects an increase in load. The IP addresses of the active appliances are added as DNS A records to the Fully Qualified Domain Name (FQDN) of the load balancer.

As always, we ask that clients use the FQDN of the load balancer instead of its IP addresses when communicating with it. This ensures good load distribution across all available load balancer appliances, in addition to ensuring that if an appliance is taken off the service for any reason, clients will naturally stop using that instance.

IBM continuously monitors the load balancer appliances, and if we detect a loss of communication with an appliance, we take it out of service (by removing it from DNS) and immediately replace it with another instance to restore full capacity. This new capability requires no special configuration; it is enabled by default, at no additional cost.

Internal load balancer


In its first incarnation, IBM Cloud Load Balancer was of the “Public” variety, which supports clients on the public Internet while the server/application is on your IBM Cloud private network. Customers asked for an “Internal” version of IBM Cloud Load Balancer that wouldn’t be exposed publicly but could be used to load balance applications within their IBM Cloud private networks (in a multi-tiered deployment, for instance, as shown in the figure below). It would be both secure and consistent with the load balancer they were already using on the public side. As a result of this high demand, we quickly delivered the Internal load balancer feature to the IBM Cloud Load Balancer service.


Figure: Using a combination of public and internal load balancers for a 3-tier application (logical representation)

The internal load balancer works the same way and has all the same features as the public load balancer (including the horizontal scaling that we delved into earlier), except that it is only exposed within your private network. All you need to do when creating a new load balancer is select the Internal option. Like the public load balancer, we assign an FQDN to each internal load balancer. These host names are registered publicly, but the addresses are only relevant within the private network where they are deployed, so they should not be used from any other environment, even though the host name can be resolved publicly.

Monitoring metrics

You can now leverage the “IBM Cloud Monitoring” service to monitor the following performance metrics associated with your load balancer and application:

• Throughput
• Connection rate
• Active connections


Figures: IBM Cloud monitoring service dashboard views

This new feature requires your IBM Cloud IaaS and PaaS accounts to be linked, which takes a few simple steps. Up to two weeks of samples are collected and displayed by the load balancer web UI. The data can also be viewed on the IBM Cloud Monitoring service portal. If you require data for longer than two weeks, you may need to upgrade your monitoring plan, depending on the volume of other cloud metrics you are sending to your Cloud Monitoring instance.

Cipher suite customization

To improve the security of the service, you can now customize the cipher suites that are used when the load balancer is configured to perform SSL termination.

When you enable SSL termination on our load balancer (by selecting HTTPS as the front-end protocol), we enable a carefully selected default set of cipher suites that conform to best security practices. We keep a close watch on any new vulnerabilities that may be discovered, and update the list accordingly. This, along with seamless security updates of software and hardware components, helps to keep your applications secure at all times.

The following image shows how you can customize the cipher suites that your load balancer service can support.


On the horizon of IBM Cloud Load Balancer

As our roadmap evolves over time, we expect to deliver new services, features, and capabilities as we extend our -as-a-Service portfolio lineup to support your cloud-native workloads. We couldn’t be more excited as we modernize beyond the realms of today’s traditional cloud infrastructure offerings to deliver best-in-class security, reliability, and performance to you, our customers.