Tuesday, 26 September 2017

Introducing IBM Connections 4!

Today we announce IBM Connections 4, the market-leading social business platform. We are excited to share all the great new capabilities in Connections 4. To help you better understand what is in Connections 4, I will share videos on the various elements of Connections, starting with the home page experience:

Thursday, 21 September 2017

Changing Perspectives of Autism in the Workplace

There is a wealth of technical talent available in the population of high-functioning autistic individuals, but it often goes untapped for the want of proper accommodations that could make any workplace – with or without people on the autism spectrum – a better place to work.

Currently in the United States, only 14 percent of those with autism are employed, mainly in non-professional roles.

I have a nephew and friends with autism or other disabilities who, if given a chance, would make happy and useful employees if we could just focus on their abilities. We have come far as a society in recognizing that people with disabilities can contribute as much as the rest of us. If we concentrate on what people can do, and it matches what we need, it makes great teams greater, and good managers better.

Understanding Autism

Autism is a spectrum of disorders that affects one in 68 children, to differing degrees. Each child could exhibit:

◉ Social and communication problems
◉ Repetitive behaviors
◉ Limited interests or activities
◉ Intense concentration or fixation

In spite of, or because of, these disabilities, corresponding abilities are often revealed, such as:

◉ Taking things literally, which helps them excel at following instructions
◉ Thriving on the repetitive, structured work that bores others, and excelling at pattern matching
◉ Intense focus on one subject, which makes some astounding experts in an area
◉ The ability to concentrate on tasks that easily wear down others

These talents often find great use in areas such as software development, especially in testing. However, because many on the autism spectrum find it hard to look a person in the eye, control their repetitive or comforting behaviors, or hold a “normal” casual conversation, they often don’t fare well in interviews, are ostracized, or languish in low-paying jobs. Their talent and academic achievement are wasted, and companies miss out on talented individuals.

Changing Perspectives

With targeted recruiting, a change in interviewing skills, and some reasonable workplace accommodations – largely through educating “neurotypical” staff and management to be more precise, careful, and literal in their instruction – these differences can be set aside, to the benefit of the autistic individual, who gains a productive and meaningful job; the workforce, through improved morale; and the company, through increased productivity.

When hearing about the accommodations needed, neurotypicals often comment, “Why can’t we just do that anyway? That would make life so much easier!”

I’m also excited by technological advances in artificial intelligence, such as the IBM AbilityLab Content Clarifier that simplifies, summarizes and augments content to increase comprehension for people with cognitive disabilities.

Organizations are already focusing on efforts to attract talented individuals with differing abilities to help drive innovation, foster a culture of diversity, and transform the business. Combined with new technology, we can help change the perspective and pre-conceived perceptions on the autistic community, while helping everyone be more empathetic and respectful.

Wednesday, 20 September 2017

Maximizing Impact with Personalized Accessibility Training

There are many outstanding accessibility tutorials online. Yet, I’m finding that designers and developers are reluctant to go through them.

I always wondered why.

As I travel to many different IBM offices around the world training product design and development teams and understanding their needs, I believe I have found what makes them more interested and engaged. The key is to make the content as personal as possible.

Steps to Improve Inclusive Design and Development

Whether the training takes place onsite and face-to-face, or via online education materials, it is always useful to anticipate the needs of the learner.

For example, let’s say there is an informative image that needs to be labeled. When we look specifically at the product or offering that the learner works with, find an image that is not labeled, and brainstorm on the best description of what we are trying to communicate, the exercise immediately becomes more engaging.

It is slightly harder to achieve the same connection online, but we can at least anticipate the most common scenarios.

Also, I find that for a designer or developer, integrating accessibility into a solution often comes down to, “What’s in it for me?”.

It’s frustrating that this is the general attitude. However, instead of insisting upon “requirements,” it is important to connect it to a user need or a fulfilling experience. Before we even start thinking about accessibility, it is important to understand why.

We can connect with the learner through a variety of ways:

◉ Through the experience of having a friend or a relative with a disability;
◉ The learner’s own loss of vision or hearing due to aging; or,
◉ The desire and ability of doing something that benefits society, in general.

For example, a discussion of good contrast or readable content gets more interesting when we ask the audience to picture a grandparent wearing glasses while using the computer. Once we connect the requirement to the kind of text your grandma would be able to read, instead of talking about letter size or contrast ratio, the specs become secondary and much easier to meet.

Finally, it’s best to discuss the “usefulness” of accessibility. Accessible solutions can reach more people, and therefore increase the audience and sales for a product. Accessible solutions can also help minimize legal risk and unsatisfied users.

When we hear about litigation costs and impact, it is hard to argue against reducing corporate risk by developing solutions that are accessible to all.

Accessibility is an Investment in Customers and Employees

It is easy to look at accessibility as an additional time or cost commitment, and there is some truth to that. Accessibility doesn’t come for free, as much as we would like it to.

But let’s face it: providing an accessible solution is a good investment and, for that matter, a marketing tool. According to the World Health Organization, 15 percent of the worldwide population has some form of disability. This means our products can reach more people if they are accessible.

Or, we can just leave that audience for our competitors.

My favorite argument is that as our life expectancy increases, the likelihood of acquiring age related disabilities does as well. What we make accessible today will benefit us personally tomorrow. The question then becomes: How can I design something today that can adapt and be useful to me when I’ll be 80 years old?

There is always a connecting point with a learner, whether that is a designer, developer, tester, or manager. After the connection is established, accessibility turns into a personal challenge instead of just another requirement and box that needs to be checked.

Tuesday, 19 September 2017

Using the IBM Cloud Provider to Provision Infrastructure

The IBM Cloud Provider is a Terraform plugin that lets you provision and orchestrate IBM Cloud resources. Terraform is a popular open-source infrastructure-as-code (IaC) solution supporting all the leading public cloud providers. Terraform templates describe the “to be” state of the infrastructure you want; the Terraform engine takes a template and does the orchestration needed to transform the “as is” state into the “to be” state.

The IBM Cloud Provider supports virtual machines, load balancers, IBM Containers, Watson services, Cloudant, and more. Using the provider, you can create templates that provision anything in the Bluemix service catalog. This article looks at how a fictitious services company uses Terraform and the IBM Cloud Provider to deliver client solutions.

Our Scenario

JK Services is a vertical solution provider with a focus on marketing automation solutions. Their flagship solution is ADA, the Ad Delivery Analytics application. ADA is a cloud native application that leverages IBM Watson to tailor online advertising and content delivery to website and app users. Put simply, ADA is an AI ad server.

For security and marketing reasons JK customers get their own instance of ADA when they purchase the SaaS application. Provisioning all the cloud infrastructure for even one ADA implementation was a manual, error-prone, and lengthy process. Using the IBM Cloud Provider, JK is able to provision and manage the infrastructure of each client in a repeatable and automated way. In addition, JK can be confident that each instance of the ADA application has the same infrastructure and software components.

ADA Infrastructure


ADA – the Ad Delivery Analytics app

The ADA application uses a number of cloud resources:
  • A Kubernetes cluster deployed using IBM Containers is responsible for delivering content based on a visitor’s profile and preferences.
  • A click-handler farm of virtual machines logs all events and actions the user takes on the delivered content (e.g., clicking on an ad).
  • The Cloudant NoSQL database stores visitor profile and preference information.
  • IBM Watson Personality Insights and IBM’s BigInsights Hadoop cluster ingest the raw event logs and populate the visitor profile and preference database.
  • Object Storage handles log management and stores ad creatives and other delivered content.

ADA Templates

Using the IBM Cloud Provider, JK Services built a number of Terraform templates to automatically provision the ADA infrastructure. Let’s look at different parts of their template…

IBM Cloud Provider Credentials

Since Terraform supports more than 50 plugin providers, you have to explicitly say which provider you are using and give the credentials necessary for provisioning cloud resources.

provider "ibm" {
  bluemix_api_key    = "${var.bluemix_api_key}"
  softlayer_username = "${var.softlayer_username}"
  softlayer_api_key  = "${var.softlayer_api_key}"
}

This snippet shows the three credentials needed to provision IBM Cloud resources:

◉ An IBM Bluemix API key
◉ A SoftLayer username
◉ A SoftLayer API key

You can use environment variables like SL_API_KEY to specify these credentials, or you can use Terraform variables, as this example does. The pattern "${var.NAME}" references the variable NAME. Here the variables bluemix_api_key, softlayer_username, and softlayer_api_key are defined:

variable "bluemix_api_key" {}
variable "softlayer_username" {}
variable "softlayer_api_key" {}

Variables can optionally have descriptions and default values. We’ll see examples of these later.

IBM Containers Based Kubernetes Cluster

Serving ads and content is the responsibility of an ad server. The ad server runs on a Kubernetes cluster implemented with IBM Containers and has to integrate with the profiles and preferences database.

Here’s the template definition for the ad server cluster:

resource "ibm_container_cluster" "ad_server_cluster" {
  count        = "${var.create_ad_server}"

  name         = "ad-server-cluster-${random_id.name.hex}"
  datacenter   = "${var.datacenter}"
  org_guid     = "${data.ibm_org.org.id}"
  space_guid   = "${data.ibm_space.space.id}"
  account_guid = "${data.ibm_account.account.id}"
  no_subnet    = true
  subnet_id    = ["${var.subnet_id}"]

  workers = [
    { name = "worker1", action = "add" },
    { name = "worker2", action = "add" },
    { name = "worker3", action = "add" },
    { name = "worker4", action = "add" },
    { name = "worker5", action = "add" },
  ]

  machine_type    = "${var.machine_type}"
  isolation       = "${var.isolation}"
  public_vlan_id  = "${var.public_vlan_id}"
  private_vlan_id = "${var.private_vlan_id}"
}
We can see that ad_server_cluster is specified as an ibm_container_cluster resource. Parts of the specification are hard-coded. For example, the workers array specifically says this is a 5-node cluster. Other parts of the spec use user-supplied values. For example, the datacenter where the cluster is deployed and the type of machine are specified using the variables datacenter and machine_type respectively.

variable "datacenter" {
  description = "Datacenter location for the cluster"
  default     = "dal12"
}

variable "machine_type" {
  description = "Cluster node machine type"
  default     = "u1c.2x4"
}

Integrating with the Cloudant NoSQL Database

Cloudant database instances are available as a service in the IBM Bluemix catalog. The IBM Cloud Provider uses the ibm_service_instance resource to specify any Bluemix service. Describing a service just needs the name and billing information for the service:

resource "ibm_service_instance" "profiledb" {
  name       = "profiledb-${random_id.name.hex}"
  space_guid = "${data.ibm_space.space.id}"
  service    = "cloudantNoSQLDB"
  plan       = "Lite"
  tags       = ["adserver"]
}

Here we are using the free (“Lite”) plan for Cloudant. As stated above, specifying a Watson service like Personality Insights would be exactly the same. The only difference would be the name of the service and the plan being used.
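As a sketch, a Personality Insights instance would look just like the Cloudant definition with the service and plan swapped. The catalog identifiers below are assumptions for illustration; check the Bluemix catalog for the exact names:

```hcl
resource "ibm_service_instance" "insights" {
  name       = "insights-${random_id.name.hex}"
  space_guid = "${data.ibm_space.space.id}"
  service    = "personality_insights"   # assumed catalog service name
  plan       = "lite"                   # assumed plan identifier
  tags       = ["adserver"]
}
```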

NOTE: One of the things you might have noticed is that all resources have a name. Since JK Services builds out the same infrastructure for each customer, they need unique names for all the resources. For example, the Cloudant database will have a name like “profiledb-A4By” because a random_id resource is used as a suffix on all resource names.
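The random_id.name.hex references in the templates come from the random_id resource of Terraform’s random provider. A minimal sketch (the byte_length of 2 is an assumption, chosen to yield a four-character suffix):

```hcl
# Assumed definition of the random suffix used in resource names.
# Each apply of a new state generates a fresh value, keeping names unique.
resource "random_id" "name" {
  byte_length = 2
}
```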

IBM Container clusters can use any IBM Service via a service binding. The service binding makes connection credentials available in the cluster’s environment. The IBM Cloud Provider resource for this is aptly called ibm_container_bind_service.

resource "ibm_container_bind_service" "profiledb_bind_service" {
  cluster_name_id             = "${ibm_container_cluster.ad_server_cluster.name}"
  service_instance_space_guid = "${data.ibm_space.space.id}"
  service_instance_name_id    = "${ibm_service_instance.profiledb.id}"
  namespace_id                = "default"
  org_guid                    = "${data.ibm_org.org.id}"
  space_guid                  = "${data.ibm_space.space.id}"
  account_guid                = "${data.ibm_account.account.id}"
}

The profiledb_bind_service references the cluster’s name using the cluster resource directly (${ibm_container_cluster.ad_server_cluster.name}). The same is done for the Cloudant database service. You can reference any resource using Terraform’s ${TYPE.NAME.ATTRIBUTE} interpolation syntax.

Planning & Applying the Template

Terraform supports two separate actions on templates: plan and apply. “Plan” is a dry run. When you do a terraform plan, you get back a report on the resources that need to be created, updated, or deleted based on the template definitions and the current state of the infrastructure.

Here’s a snippet from a terraform plan report:

~/dev/ADA $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.ibm_org.org: Refreshing state...
data.ibm_space.space: Refreshing state...
data.ibm_account.account: Refreshing state...
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ ibm_compute_autoscale_group.sample-http-cluster
    cooldown:                                                 "30"
    health_check.%:                                           "1"
    health_check.type:                                        "HTTP"
    maximum_member_count:                                     "10"
    minimum_member_count:                                     "1"
    name:                                                     "${var.auto-scale-name}-${random_id.name.hex}"
    port:                                                     "80"
    regional_group:                                           "as-sgp-central-1"
    termination_policy:                                       "CLOSEST_TO_NEXT_CHARGE"
    virtual_guest_member_template.#:                          "1"
    virtual_guest_member_template.0.block_storage_ids.#:      "<computed>"
    virtual_guest_member_template.0.cores:                    "1"

The terraform apply action takes the as-is state of the infrastructure and transforms it into the to-be state described by the template. The apply action can take some time, depending on the number and type of resources you are provisioning.

Monday, 18 September 2017

Deploying Drupal to IBM Bluemix Part 2

In Part 1, I described how to configure the Drupal distribution and deploy it as an application on Bluemix. In this post, I’ll provide instructions for how to configure the database backend for Drupal using PostgreSQL from Compose.

Using Compose

Configuring PostgreSQL on Compose is simple! First, you’ll need to sign up for a Compose account.
Then you create a new deployment by clicking the ‘Create deployment’ button:


Select ‘PostgreSQL’ under production deployments:


And fill in the database configuration. I installed successfully using PostgreSQL version 9.6.3:


And that’s it! Once your database has been provisioned (typically takes about 30 seconds), scroll to ‘Connection info’ to find the connection string (you’ll need to click the ‘Show’ link to display credentials):


Now you can use the Drush command line interface to install Drupal (step #7 from Part 1).

(Please note: the protocol in the Compose connection string is ‘postgres’, but you’ll need to change it to ‘pgsql’ for Drush.)

$ drush site-install standard --db-url='pgsql://name:pass@host:port/compose'
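If you script your deployments, the scheme swap can be automated. Here is a minimal sketch using sed; the connection string is a placeholder, not real credentials:

```shell
# Rewrite the Compose scheme ("postgres") to the one Drush expects ("pgsql").
CONN='postgres://name:pass@host:5432/compose'   # placeholder from Compose UI
DRUSH_URL=$(printf '%s' "$CONN" | sed 's/^postgres:/pgsql:/')
echo "$DRUSH_URL"
```

You can then pass "$DRUSH_URL" straight to drush site-install via --db-url.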

MySQL and Bluemix containers

In the next post, I’ll explain how to use MySQL in a Bluemix container as a Drupal database backend to simplify management and improve performance.

Thursday, 14 September 2017

Deploying Drupal to IBM Bluemix Part 1

Managing content can be a daunting task that requires careful thought about content modeling, authoring, preview, rendering, translation, localization, SEO and merchandising. Incorporating your content management system into a modern continuous delivery pipeline and deploying it to a platform-as-a-service (PaaS) can significantly ease your operations burden and give you time to focus on more important issues.

In this post, I’ll describe how to deploy the popular open-source content-management framework Drupal to IBM Bluemix.

1. Download the latest version of Drupal 8 (8.2.6 at the time of this post)

2. Extract the contents of the archive:
$ tar xvzf drupal-8.2.6.tar.gz
$ cd drupal-8.2.6

3. Drupal 8 requires a recent version of PHP (at least version 5.5.9); find a Cloud Foundry buildpack that contains a suitable version. I used the Heroku buildpack for PHP available here: https://github.com/heroku/heroku-buildpack-php. You’ll use this later when deploying to Bluemix.

4. Drupal employs Composer to manage its dependencies. Depending on the buildpack and Drupal version, you may have to change the ‘composer.json’ file in the root of the source tree to ensure all required PHP extensions are loaded properly. This was the case with Drupal 8.2.6 and the Heroku buildpack — the PHP ‘gd’ extension wasn’t loaded. Add ‘ext-gd’ to the ‘require’ attribute in composer.json:

"require": {
    "composer/installers": "^1.0.21",
    "ext-gd": "*",
    "wikimedia/composer-merge-plugin": "~1.3"
}

5. The composer.json file has to be consistent with the composer.lock file, so you’ll need to run composer to update it. Ensure Composer is installed (if you’re using homebrew, the formula is named ‘homebrew/php/composer’) and run:
$ composer update

6. Deploy to Bluemix (you can download the CF CLI here: https://console.ng.bluemix.net/docs/cli/index.html#downloads)
$ cf login
$ cf push <app-name> -b https://github.com/heroku/heroku-buildpack-php

7. Set up your instance by navigating to: https://<app-name>.mybluemix.net/ or use drush:
$ drush site-install standard --db-url='pgsql://name:pass@host:port/database'

I’m using a PostgreSQL database hosted on Compose for persistent storage. In the next post, I’ll go into detail about how to configure Drupal using a Compose service and automate deployments.

Tuesday, 12 September 2017


ETS at Building Live

Under discussion were the existing and future BIM Levels.

As we move through the levels of BIM the focus begins to change.

Level 1: All about the design and construction of the building, and moving that into the digital age.

Level 2: Starts to focus on facilities management and how we can help the owners, renters, and managers of a building, introducing the idea of a “soft landing,” where the transition from construction to inhabiting is made much smoother.

Monday, 11 September 2017

Optimize equipment reliability by listening to your data

The world is more connected than ever before. Using Internet of Things (IoT) technologies, manufacturers are collecting large amounts of data from their machines but do not know how to get value from it. It’s as if their machines are speaking on mute: they have so much to say, but we simply cannot hear them.

To help deal with this massive overload of data, IBM released the new IBM Predictive Maintenance and Optimization (IBM PMO) solution. It takes large amounts of data from machines and analyzes it for patterns that can help predict equipment failure. It provides a detailed view into equipment performance. This information is used to optimize maintenance efforts, ensuring the right equipment is always available.

Do you really know the life span of your assets?

You may have a rough estimate of how long your assets and equipment will last, but how confident are you in that estimate? The real figure could be much higher or much lower – and you won’t know until it’s too late. Here are just a few of the capabilities IBM PMO enables:
  • Calculate asset health scores and predict life spans with models
  • Monitor assets and processes with real-time interactive dashboards
  • Detect asset failures and quality issues earlier
  • Explore asset performance data to learn the cause of failure
  • Provide optimized maintenance recommendations to operations
  • Customize solutions for your specific maintenance use cases

Focus on reliability

Equipment is of no use when it is broken, and equipment failing in the middle of production can cause major problems. Building on the strengths of previous predictive maintenance solutions, IBM PMO focuses on the needs of the reliability engineer to identify and manage risks that could result in failure or a halt in operations. A reliability engineer can build a model to determine remaining equipment life and improve maintenance strategy. The analysis helps them identify the state of current production equipment and spot impending failures.

Optimize performance of critical equipment

For the most critical equipment, where unplanned downtime has a major impact on production and repair costs are high, it is possible to build custom predictive models. These models prepare key data and draw on the expertise of a data scientist for additional analysis. This effort can avoid huge financial losses and minimize negative impacts by ensuring maintenance is scheduled at the optimal times.

Pre-built predictive models for similar equipment

While custom models are ideal for some organizations, for others it will not be necessary. Organizations that have critical equipment of similar type or class (e.g., generators, motors, pumps, robots)  can use the pre-built app. This is most effective when unplanned downtime impacts production and cumulative maintenance costs are significant. Use of these standard, pre-built models enables you to monitor and analyze a variety of equipment and their current maintenance schedules. This differs from custom-built models which are specific to certain equipment.

The pre-built application enables reliability engineers to obtain both high-level and detailed reports of performance and maintenance history.  It supports analysis and reporting on all equipment, classes of equipment, or filters for properties common to a set of equipment. This flexible reporting makes it faster to analyze and understand current maintenance practices and prioritize future needs.

Understand what leads to failure

The ability to collect data including failures, maintenance history, time stamps, metrics, and events is another valuable capability of this new solution. IBM PMO aligns various pieces of  data to a fixed interval so that it can examine the relationship between multiple variables collected at different points in time (see Figure 1).  It then offers recommendations to improve maintenance strategy for individual equipment or equipment classes. It also recommends actions to take based on predictive scoring and identification of factors that positively and negatively influence equipment health. This provides a detailed comparison of historical factors affecting equipment performance.

Figure 1. Comparison of data points at different time intervals.

Quickly assess maintenance needs and optimize performance

Simply put, IBM PMO allows reliability personnel to gain an understanding of all factors that affect equipment performance. This information provides a full picture for assessing past, present, and future equipment performance and needs.

Saturday, 9 September 2017

Advanced GPS for your DevOps journey

With modern GPS becoming pervasive, we often take maps for granted. It’s hard to imagine, but there was a time when maps simply didn’t exist. I find it interesting that the first maps weren’t even of the earth – they were of the stars. On a more terrestrial front, one of the oldest intact maps we have, the Babylonian World Map, dates to approximately the 6th century BCE. This map, rather unsurprisingly, places Babylon in the center and largely ignores enemy territories.

Years later, expeditions would take cartographers on their ocean voyages so they could develop maps accurately showing land masses, water boundaries and other items with a high level of precision — sometimes in stunning detail considering they didn’t have a bird’s eye view. Yet even that pales in comparison to today’s map-making capabilities.

We now have satellites navigating the earth multiple times a day, scanning the earth’s surface and providing images with resolutions fine enough to read a license plate. Maps developed by these systems not only show us the shortest path to a destination, but they can also integrate real-time data to tell us the fastest route. With years of experience watching traffic, they can even predict, with a high level of confidence, how long your commute will last on a certain day and hour in the future.

Now, imagine if you could get the same type of visibility and guidance into your application and database environment for mainframes. What if there was a tool advanced enough to eliminate thousands of hours of development time?

Saving development time in the modern world

Today’s application environment is always changing. Connecting legacy applications, like those on mainframes, with newer applications can be a daunting task when the legacy applications are not fully understood. Often, documentation is either incomplete or missing entirely for these apps. “Don’t touch it because it might break” often paralyzes organizations.

The potential solutions? You could burn a lot of developer time analyzing code and documenting it, and that’s what most teams do today. At least 40 percent of development time is wasted in this effort, in my estimation. Another idea is to search for things like redundant code to “clean up” the applications. Many turn to “state-of-the-art” search techniques to scan the environment. While sometimes fast (and somewhat enlightening), the result is closer to a mid-1400s cartographer’s drawing of a land mass than the crisp satellite images we enjoy today. This approach is often too high-level to be useful, because just a 1 percent error in the analysis can result in hundreds of false-positive identifications of redundant code.

But what if you could fully understand that environment so that you knew how a simple change in one application affected the entire ecosystem? Check out an application called Application Discovery & Delivery Intelligence (ADDI). This application is actually the seamless integration of two otherwise independent applications: Application Discovery and Application Delivery Intelligence.

Find directions to a better set of tools

Users get a much richer and more robust analytics platform with the two applications combined:

◉ Extensive high-level systems views of applications for top-down analysis.
◉ Simple drill-down detail for maximum visibility at any level.
◉ Extensive core language support for complex ecosystems.
◉ Exposure of redundant test cases and code that hasn’t been tested.
◉ Visibility into operational performance data for ongoing optimization.
◉ Intuitive dashboards that actively monitor your environment.
◉ Reporting that delivers the information you need when you need it.

ADDI offers even more – source code manager support, extensive database support, detection of performance issues early in the development cycle and continuous updates of the environment.

It’s like an advanced GPS telling you where to go and how to get there in the least amount of time. Only ADDI can deliver the insight required to enable an agile, DevOps culture that moves as fast as your business.  It provides benefits throughout the entire development cycle, spanning analysis, test, development, deployment and operations. If you need a tool that delivers this functionality and also provides a fantastic financial return, this is the tool for you. But don’t just take our word for it.

Friday, 8 September 2017

How using mobility and innovation for FSM will define your future

Are you giving your field service management (FSM) staff what they deserve? Safety, efficiency, productivity – it’s what we all want for our field service technicians: safer practices, real-time weather feeds, optimized routes, access to supply systems, automation. Mobility and analytics are at the heart of each of these aspirations. They will continue to drive significant productivity gains, improve worker effectiveness and safety, and eliminate errors by capturing data directly at the work source while integrating real-time data from the cloud. How quickly you adopt mobility and new innovations into your FSM practices will define your organization’s ability to stay ahead of the competition.

A Field Service Management (FSM) solution enables organizations to:
  • make and keep customer commitments
  • respond quickly to emergency situations
  • provide technicians easy access to the information they need
  • increase first time fix rate
  • reduce travel and waiting time, and
  • increase the number of jobs completed per day and capacity used.

Mobile and innovative technologies are fueling growth for field service management (FSM)

The field service management market is estimated to grow from $1.78 billion in 2016 to a staggering $3.61 billion by 2021. The need for a highly scalable, centralized system to manage field services and enable real-time collaboration, increased usage of mobile devices, and growing demand for improved enterprise efficiency and reduced operational costs are the predominant forces driving this growth.

Gartner predictions indicate the field service management market is experiencing rapid growth in response to technology advancements in the areas of mobility, SaaS and machine learning. Gartner predicted that field service organizations were on track to purchase as many as 53 million tablets by 2016, and that by 2020 two out of three large field service organizations will have equipped field technicians with mobile applications to generate new revenue streams, improve efficiency and increase customer satisfaction.

Organizations that stick with paper methods will fall behind

Predictions like these don’t bode well for organizations that are slow to embrace a mobility platform for their field service staff. In order to stay ahead, organizations that depend on field technicians to manage and maintain business-critical enterprise assets will need to adopt mobile technology to automate the service process and eliminate duplicate data entry.

As the march towards automation continues – with paper work order systems being replaced by intelligent, integrated, scalable solutions – field technicians need instant access to critical data, even when they lose connectivity. Mobile devices can provide field staff with a secure line of communications to the back office, plus much more.

Where are you in the continuum of adopting mobile?

To make the most of a mobility platform, it is vital that it can address different types of work – long-cycle construction (working in offline mode), scheduled maintenance, inspections (with dynamic forms and checklists), and asset audits and calibration – spanning everything from installing a simple asset like a meter to managing a complex asset like an airplane.

Exploiting the natural overlap between EAM and FSM

Asset-intensive industries face the harsh realities of operating in highly competitive markets and dealing with high value facilities and equipment where each failure is disruptive and costly. At the same time, they must also adhere to stringent occupational safety, health and environmental regulations. Maintaining optimal availability, reliability, profitability, and operational safety of plant, equipment, facilities and other assets is therefore essential for an organization’s success in their respective markets.

Enterprise Asset Management (EAM) addresses the entire lifecycle management of the physical assets of an organization in order to maximize value. It is a discipline covering areas that include the design, construction, commissioning, operations, maintenance and decommissioning or replacement of a plant, equipment, facility or some other high-value asset.

Savings benefits to be gained through EAM and FSM

EAM enables the workforce to handle long-cycle work, such as managing new construction or neighborhood design, as well as scheduled inspections and maintenance of assets. Pure-play FSM systems, on the other hand, usually address short-cycle work such as outages, customer-reported problems (e.g. a water leak), or customer-initiated work such as a new service hookup. By combining these workloads through a single workforce and a single system, organizations can gain huge savings, because they can optimize work allocation based on skills, availability, proximity and priority to achieve maximum efficiency.
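
The work-allocation idea above – matching technicians to jobs by skills, availability, proximity and priority – can be sketched as a simple scoring heuristic. This is an illustrative toy, not IBM’s actual scheduling logic; all names, weights and data structures here are hypothetical.

```python
# Illustrative sketch: rank technicians for a work order by skills,
# availability, proximity and job priority. Hypothetical model only,
# not the algorithm used by IBM Scheduler / Scheduler Plus.
from dataclasses import dataclass

@dataclass
class Technician:
    name: str
    skills: set
    available: bool
    distance_km: float  # distance from the job site

def score(tech, required_skills, priority):
    """Higher score = better match; None = ineligible."""
    if not tech.available or not required_skills <= tech.skills:
        return None
    # Closer technicians score higher; job priority scales the score.
    return priority * 100 / (1 + tech.distance_km)

def assign(techs, required_skills, priority):
    scored = [(score(t, required_skills, priority), t) for t in techs]
    eligible = [(s, t) for s, t in scored if s is not None]
    return max(eligible, key=lambda st: st[0])[1] if eligible else None

techs = [
    Technician("Ana", {"electrical"}, True, 5.0),
    Technician("Ben", {"electrical", "hydraulic"}, True, 2.0),
    Technician("Cy", {"hydraulic"}, True, 1.0),
]
best = assign(techs, {"electrical"}, priority=3)
print(best.name)  # Ben: qualified and closest
```

A real combined EAM/FSM scheduler would of course also weigh shift calendars, travel time, SLAs and crew composition, but the same skills/availability/proximity/priority trade-off sits at its core.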

Figure 1: Where EAM and FSM overlap

The total asset management approach

By combining Maximo Asset Management (IBM’s core EAM solution), IBM scheduling and dispatching solutions (Scheduler and Scheduler Plus), a mobile solution such as Maximo Anywhere, location-based services using Maximo Spatial, and a way to easily manage customer service levels and billing (Maximo Service Provider), organizations are able to address their field service management (FSM) requirements with a single system of record – IBM Maximo.

Essentially, IBM Maximo offers organizations a total asset and field service management approach – encompassing everything from planning and scheduling work, to booking customer appointments, to assigning and dispatching technicians, to tracking work progress in real time – and eliminates the need for a third-party tool. Having a single system across EAM and FSM not only helps reduce total cost of ownership; for organizations striving to achieve a single field workforce, it can also help reduce the cost and complexity of integrating multiple systems.

Figure 2: A comprehensive enterprise asset management and FSM approach.

Riding the next wave of FSM innovation with Weather Integration
The impact of weather events on organizations can be far-reaching – affecting profits, productivity and safety. In the U.S., weather has a $500B impact on the economy and is responsible for nearly 80% of all electric grid disruptions. For field service personnel, weather often forces work to be delayed or rescheduled, whether because of snow, ice, high winds or other hazards.

Having weather data integrated into your FSM solution streamlines the process, enabling organizations to better manage appointments with their end clients while making technician scheduling and planning more flexible and efficient. Here are a few real-world use cases where weather integration can help improve safety and efficiency:

1. Avoid dangerous weather conditions

For planned cell tower maintenance, high-wind and storm conditions should be avoided when possible; lightning must be strictly avoided. Having visibility into weather conditions in advance enables safer, more efficient operations.

2. Avoid cancellations

Incorporating the weather forecast into appointment booking helps avoid cancellations and reschedules, leading to higher customer satisfaction as well as lower operational costs.

3. Ensure crew safety

With real-time weather alerts, dispatchers can re-route technicians to work in a safe area and reschedule the original work to a day or time that is not affected by adverse weather. Not only does this keep crews safe, it also helps utilize them more efficiently on other high-priority work while avoiding unnecessary fuel, travel and other operational costs.
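
The three use cases above boil down to one decision rule: consult the forecast before dispatching. Here is a minimal sketch of such a weather gate; the thresholds and forecast field names are hypothetical examples, not values from any real weather API.

```python
# Illustrative weather gate for dispatching field work.
# Thresholds and forecast fields are hypothetical examples, not
# drawn from The Weather Company or any FSM product API.
MAX_SAFE_WIND_KPH = 40  # assumed safe limit, e.g. for tower climbs

def dispatch_decision(forecast):
    """Return 'dispatch', 'reroute' or 'reschedule' for a work order."""
    if forecast.get("lightning"):
        return "reschedule"          # lightning must be strictly avoided
    if forecast.get("wind_kph", 0) > MAX_SAFE_WIND_KPH:
        return "reroute"             # move the crew to safer work
    return "dispatch"

print(dispatch_decision({"wind_kph": 20, "lightning": False}))  # dispatch
print(dispatch_decision({"wind_kph": 55, "lightning": False}))  # reroute
print(dispatch_decision({"wind_kph": 10, "lightning": True}))   # reschedule
```

In a production FSM system this check would run inside the scheduler at booking time and again on real-time weather alerts, triggering the appointment and re-routing workflows described above.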

The next phase: Faster problem resolution with augmented reality and artificial intelligence

IBM is working on combining different technology advancements to help field technician staff speed-up inspections, problem determination and resolution.

It’s clear the next three to five years will be pivotal for field service management. New technology advancements like weather integration, augmented reality, artificial intelligence (AI) visual recognition, and cognitive expert advisors are already reshaping the field service management landscape. These advancements, combined with industry trends such as an aging workforce and aging infrastructure, indicate there will be no shortage of opportunities for emerging innovations to disrupt traditional enterprise systems.

How quickly and to what degree organizations integrate these new technologies into their FSM practices will certainly be a factor in which organizations emerge as leaders versus laggards.

Wednesday, 6 September 2017

The economics of large-scale data protection

When business resiliency is your business, rock solid data recovery is a must. My colleagues and I at IBM Resiliency Services design data protection and recovery solutions that run millions of backups per month for organizations of all sizes and industries. We use all the popular brands of backup software, deduplication appliances and tape systems with confidence.

You can buy insurance for most business risks, but no insurance policy can return lost data. Business data must be properly protected.

In large-scale environments, data protection conversations always include economics. There is constant pressure to improve efficiency, and clients need to know they’re getting the best price.
IBM analyzes the economic impact of technology changes because clients rely on us to deliver continuous improvement.


Because of our dual focus on quality and efficiency, my team at IBM Resiliency Services is particularly pleased with IBM Spectrum Protect v7.1.3. This release adds a new deduplication capability that rivals dedicated appliances.

Spectrum Protect v7.1.3 can manage up to 10 times more data and can ingest up to 5 times more data per day than prior releases.*

IBM Resiliency Services is now recommending data protection solutions based entirely in software, without reliance on deduplication appliances. In most environments, Spectrum Protect v7.1.3 can be a game changer that makes this design change possible.

Why is this important?

IBM Spectrum Protect is the most popular data protection platform used by IBM Resiliency Services clients, performing over 5 million backups per month. Incremental improvements add up. IBM Resiliency Services analysis projects significant operational and capital expenditure benefits:
  • Clients get an integrated software resilience solution, with fewer moving parts to manage.
  • The new container storage pool uses in-line deduplication, avoiding costly background processing (reclamation, “garbage collection” for deletion), allowing us to use very large, low-cost disk drives for the backup storage pool.
  • Software license and maintenance costs based on backend capacity can decrease as more data is deduplicated. There is no incremental cost for the new deduplication feature in Spectrum Protect.
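
To illustrate what “in-line deduplication” means in contrast to background reclamation: each chunk is fingerprinted before it is written, so duplicates never land on disk and no later cleanup pass is needed. The following is a toy sketch of that idea, not the actual Spectrum Protect container-pool implementation.

```python
# Toy in-line deduplication. Each incoming chunk is hashed before it
# is written, so duplicate chunks never reach disk and no background
# reclamation / "garbage collection" pass is required.
# Illustrative only -- not the Spectrum Protect container-pool design.
import hashlib

class ContainerPool:
    def __init__(self):
        self.store = {}          # chunk hash -> chunk bytes on "disk"
        self.logical_bytes = 0   # bytes clients sent (backend capacity)
        self.stored_bytes = 0    # bytes actually kept after dedupe

    def ingest(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        self.logical_bytes += len(chunk)
        if digest not in self.store:   # write only previously unseen chunks
            self.store[digest] = chunk
            self.stored_bytes += len(chunk)
        return digest

pool = ContainerPool()
for chunk in [b"alpha", b"beta", b"alpha", b"alpha"]:
    pool.ingest(chunk)

print(pool.logical_bytes, pool.stored_bytes)  # 19 9
```

The gap between logical and stored bytes is where the capital savings come from: the larger the share of duplicate data, the less physical disk the backup storage pool needs.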
If you manage data protection for your organization, you may get a pleasant surprise by running the numbers for software-defined deduplication.

Our experiences using IBM Spectrum Protect Dedupe Storage Containers
My team found Spectrum Protect dedupe storage containers to be simple to configure and use, especially when using the wizards in Spectrum Protect Operations Center. The new storage pools can reduce backup server maintenance requirements, so administration costs are lower. Clients using deduplication for the first time can significantly reduce backup storage capacity requirements.

Spectrum Protect deduplicated storage containers in version 7.1.3 are designed for all-disk backup environments. Tape users can continue using legacy Spectrum Protect storage pools alongside the new container pools on the same server.

IBM data protection solutions
IBM gives you confidence that your data is protected, whether you choose IBM Resiliency Services or on-premises Spectrum Protect software. IBM has decades of data protection experience, with solutions supported by thousands of experts and facilities around the globe.

IBM is making it easier to know that your data is protected, and built-in efficiency features help ensure that you are getting the most for your money.