Tuesday, 30 January 2018

IBM Releases Scale-Out Server Guide

The IT industry is seeing a strong shift in direction, towards cloud deployment models and greater insight through advanced analytics and cognitive computing. The latest innovation is being driven largely by collaboration and partnerships, particularly in open source projects, including the Linux operating system.

Thanks to these trends, the scale-out line of IBM Power Systems servers is growing, and it doesn't look like things will slow down as more partners join the OpenPOWER Foundation. With these partnerships, a wider range of Linux-only servers is entering the market, providing customers with ever greater choice. But with all these new servers, the question is: “What is the best server for my workload, and what choices do I have?”

With these questions in mind, the IBM Redbooks team has created a new deliverable to assist sellers, Business Partners, and customers on this journey. Based on the well-known and greatly respected IBM Redbooks format, but with a new approach, the team has created a positioning guide to help readers understand the advantages of each L and LC server and the major differences between them. The result is the “Power Systems L and LC server positioning guide”.

Traditional IBM Redbooks publications provide a deep technical understanding of one specific server model. The complete concept and all the options are explained, which results in a book of around 300 pages. The positioning guide takes a new approach: with a strong focus on workloads and use cases, it ensures a clear understanding of the different application areas, along with some example use cases. The server models designed for these purposes are then discussed, and relevant technical features and capabilities are highlighted. Informative spider charts visually show the available options and make the data easy to digest.


By concentrating on the benefits of the latest technologies for specific workload areas, the new positioning guide is an easily readable, concise guide to the Linux-only Power Systems range. The team has also produced supporting material, including presentation slides and handouts covering the major workload areas. The slides can be included in your own presentations, and the handouts can be left on a prospective client's desk or given to the audience at your next presentation.

Sunday, 28 January 2018

Regulators moving to address cloud disparity in utility business


Information technology is a transformative force in the utility industry today. The Internet of Things, along with advances in computing and communication technology, is producing massive volumes of new data and new methods to analyze it. Data product and service offerings are giving utilities the ability to access, analyze, share, and leverage this data in novel ways. These solutions are being delivered more quickly and securely through cloud-based platforms. As part of a broader set of issues around big data, the cloud is a growing enabler of the future of the utility industry.

Cloud enables cheaper and easier deployment, testing, and growth of new utility business models with scale and agility. Cloud services can give utilities the ability to learn from new data insights, develop more responsive products and programs, and manage infrastructure more flexibly, efficiently, and effectively. Cloud can provide a competitive advantage in an industry where business models are shifting quickly. In fact, studies estimate that cloud use could save energy and utilities companies more than $12 billion by 2020. Consider, for instance, how cloud could:

◉ quicken the pace with which utilities can adopt and deploy new applications by reducing the need for new hardware and employee training, and

◉ increase cybersecurity by working with leaders in the field, who can update security features and monitor threats on a constant basis, rather than utilities trying to add and maintain that tangential expertise in-house.

A number of regulated utilities and new market entrants are realizing this promise by aggressively using cloud technology. Recent surveys estimate about half of the utility industry is using cloud applications. These estimates put the industry far behind the economy as a whole, however, where estimates are that approximately 95% of companies are using the cloud. Policy and regulatory factors seem to be a large part of the reason that cloud adoption is slower in the utilities sector.

The regulatory rules for utility accounting can unfairly bias the market against cloud-based solutions. These rules often mean that a utility can earn a rate of return for on-premise software, which can be included in the rate base as a part of capital expenses (CapEx). Conversely, cloud solutions are treated as operating expenses (OpEx). These expenses can be cost-recovered, but not earn a rate of return. This disparity clearly slants the playing field in favor of on-premise products, despite the numerous benefits cloud can provide. For utilities to achieve maximum efficiency, they should be able to choose the technology that best suits their needs without the kind of cost distortion that outdated regulatory principles designed for a previous technological era can create.

Policymakers across the United States are starting to take action. On the federal level, Congress is considering legislation to emphasize the role of cloud technology in modern utility systems. The Energy Savings and Industrial Competitiveness Act (S. 385, H.R. 1443) would require the U.S. Department of Energy to consider the impact of cloud computing in a report to Congress on energy data systems. The Energy and Natural Resources Act of 2017 (S. 1460) echoes that language. The Leading Infrastructure for Tomorrow’s America Act (H.R. 2479) highlights cloud technology as a part of more resilient energy systems. In the states, cloud services have been treated differently than on-premise software for accounting purposes.

State regulatory bodies have been more active in addressing the issue. For example, utilities in California are urging the Public Utilities Commission to consider rule changes, and the Commission is starting to listen. After discussions on the differing accounting treatment of cloud services, the Commission decided last year to give utilities a 4% pre-tax incentive for distributed energy projects that could replace traditional generation resources. According to reports, that percentage was specifically targeted to address the cloud computing differential. Cloud applications can help utilities better monitor and manage distributed energy resources. In that state, high-profile projects involving cloud-delivered services are also being launched even with the differential treatment, allowing early-moving California utilities to learn how they might leverage cloud in advance of potential rule changes.

In New York, the Reforming the Energy Vision (REV) effort is providing a framework for change. It was organized in 2014 to spur the deployment of clean energy resources. In a recent associated order called Track Two, the New York Public Service Commission sought to incentivize the incorporation of cloud and other resources by allowing for “earnings adjustment mechanisms” based on outcomes with regard to specific performance measures. Additionally, the Commission issued last year an order suggesting CapEx and OpEx might be combined into a new category called “TotEx.” Although that has not happened, the Commission is now allowing up-front cloud service expenses to be treated as CapEx for accounting purposes.

In Illinois, exploratory policy discussions that began in 2015 are bearing new fruit. The Illinois Commerce Commission issued a Notice of Inquiry on cloud computing last year whose summary conclusion report reads, in part, that utility “ratemaking should not discriminate between different systems that provide the same function.” The report urges consideration of solutions to the problem, mentioning the possibility of new accounting categories or riders, in addition to new regulatory guidance that would level the playing field. Also, in a broader effort, the National Association of Regulatory Utility Commissioners (NARUC) issued a resolution encouraging utility regulatory bodies to address the differing treatment of on-premise and cloud software. Mr. Sheahan is on the board of that organization and has been tapped to lead a NARUC Presidential Task Force on Innovation. A host of grid modernization and expanded utility data initiatives in other states have the potential to further a wave of cloud accounting policy changes across the country.

Stakeholder gatherings are also providing a forum for discussion of regulatory remedies as well as ways that cloud services can boost utility efficiency and contribute to regulatory goals. For example, Stephen Callahan (IBM Vice President for Global Strategy & Solutions, Energy Environment, and Utilities) is moderating a conference panel this week in Washington, DC on regulatory factors affecting utilities’ use of as-a-service resources, continuing the forward momentum in this area.

The conference, GridConnext, is jointly produced by Clean Edge and The GridWise Alliance and is intended to provide an opportunity for utility stakeholders, including policymakers, regulators, investors, service providers, and end users, to “explore policies and share best practices on building a modern 21st century grid.” That discussion will feature Marissa Uchin of Oracle Utilities, Sunil Cherian of Spirae, and Steve Thiel of EY. Discussions like these can help utility regulatory stakeholders identify ways that cloud services can boost utility efficiency and contribute to regulatory goals, and may help set regulators on the path to realizing these benefits by addressing regulatory barriers to progress.

Friday, 26 January 2018

Design an Innovative Commerce Solution


Are going digital and being innovative two different things?

The other day I left our downtown Chicago office after a long brainstorming session with some of the best minds in the Digital Commerce domain. We were working on a design for a world-class user experience for one of our top customers. Collectively, we represented 100-plus years of prominent consulting and Fortune 50 experience. We made a decision to recommend a top-of-the-line e-commerce platform instead of developing a custom engine. We completed the math to justify the business case and left the room.

Later that night, I video-chatted with my daughter Isha, who recounted her final stage performance of her middle school dance elective. She explained how she and her friends decided to focus on choreographing their own dance rather than using one of the traditional routines that they had been taught. It sounded familiar, and I recognized the choice as one I had just spent the day puzzling over: a customized experience over a ready-made solution. I admired the freedom her teachers gave them and was amazed by the ability of these teenagers to apply what they learned to create a unique experience for everyone in the theatre.

This left me wondering whether my colleagues and I had given ourselves that kind of freedom to think out of the box. Were we being bold enough to go deeper in our analysis? I thought again of all the pros and cons of proposing a ready-made solution vs. a custom-made online commerce platform.

My daughter and her friends were creative in coming up with their own choreographed dance, and they had been innovative in creating really attractive costumes within their existing budget of $2 per person.

I called my colleagues immediately and started questioning every assumption we had made. Instead of coming up with an advanced digital solution, our focus now was to come up with a solution that allows our customer's customer to get the best possible commerce experience. Before I knew it, we were solving a value maximization challenge rather than focusing on cost minimization.

With all the changes in the last decade, the ideal commerce experience looks a little different now than in the past. My preferred personal shopping experience is all about reliability, simplicity, variety and customer loyalty: reliability in the scheduled delivery I get and the price I pay; simplicity in how I browse and place an order; variety not just in how many types of items are offered but also in how I can navigate and narrow down the options. Accomplishing this experience online was something I could not have imagined even a few years ago, and yet as a shopper I have always valued the same four factors.

Brick-and-mortar shops increasingly rely on digital technologies to keep the customer in mind. Every digital interaction is now captured and used for future insights by smarter commerce. The ecosystem required to deliver such an experience uses machine learning technologies and a reliable cloud solution. In short, it's a retail business model based on digital technologies.

Inspired by this thinking, the next day my colleagues and I focused on not being digital for the sake of being digital. We started focusing on defining the business problem or opportunity first. After that, we focused on how to address those using digital technologies.

Three lessons I came away with

1. Being digital and being innovative are two different things when it comes to designing your commerce solution.

2. Innovative business models are more relevant than innovative technology solutions. It is important to focus first on innovative business models.

3. Adopting a digital commerce solution can enable a futuristic business model that is customer-centric.

As for my daughter and her choreographed dance … I asked her to let me know what site or app she and her friends use to browse for costume ideas.

Friday, 19 January 2018

Transforming from traditional outsourcing to IBM's new Cognitive services

The role of IBM as a managed service provider is changing. It is not only changing, it is evolving. IBM has embraced this change and is moving with it, and with it we see that the finish line for managed services has also shifted. In the early days of managed services, clients looked to providers for a complete solution, from hosting up the stack to applications, with staff to manage the hardware, OS, middleware and often the application layers as well. With the introduction of cloud, we have seen a new era of managed services arise, with the introduction of Software as a Service. Software providers can now deliver and manage software themselves, leaving traditional managed services with a challenge: the need to transform and bring innovation to enter a new era of IT services that are relevant to the market and the client.

Shifting business and market requirements

As we see the shift in service consumption, we see the focus being placed on delivery, and more pressure on service providers to incorporate cognitive services and capabilities that can help enterprises meet the increasing business and market requirements placed on them. Enterprises are expected to demonstrate improving financial performance and work productivity and to accelerate the provisioning of business capabilities. Enterprises also face these demands from their own clients, who consume IT services in an always-on manner and assume systems are available 24×7. Managing such systems, associated tools and solutions while leveraging the ever-increasing amounts of data, and ensuring availability, can be a challenge!

Big data is a major component of this evolution 

Big data is not just a passing phase that will disappear tomorrow; it is a matter of business today. Big data drives the cognitive shift, and it touches every aspect of our business. Data volumes are exploding: more data has been created in the past two years than in the entire previous history of the human race, yet a 2012 study showed that we analyse only 0.5% of that data. Recently, in GTS Nordic, we have been at the forefront of changing how we manage services for our clients, both for our existing partners and for new partners who wish to maintain control of their own IT. Today we utilize big data and apply advanced analytics in automation and analytics solutions that monitor the managed and hosted systems, and we are improving these with IBM Watson.


A new composition of services appears

Our teams, together with our partners, have been pioneers in adopting innovative technology to change the way we deliver services, and we use the IBM Services Platform with Watson, announced in July 2017, to ensure uninterrupted operations and a decrease in the number of incidents impacting business-critical systems.


The IBM Services Platform with Watson addresses the increasing business demands and technology shifts by enhancing current human-led delivery into technology-led execution, using cognitive computing technology across the entire managed services life cycle: designing, building, integrating and running services. The cognitive capabilities of this platform combine automation and advanced analytics to continue to drive improvement in both IT and business automation, augmented and enhanced by cognitive capabilities delivered as platform consumable services. These capabilities are also being made available as individual offerings for those clients wishing to retain the management of their own IT.

The results are game changing

We are moving from traditional monitoring and reactive incident management to a predictive, technology-run state where cognitive assistants triage and investigate incoming events and behavioural changes that are not yet disrupting services, all managed through a chat mechanism to proactively prevent client incidents and increase customer satisfaction and system stability. This is the opportunity to evolve how we deliver IT operations and embrace the transformation from traditional IT services to IBM's new Cognitive services.

Thursday, 18 January 2018

Digitizing Global Trade with Maersk and IBM

A new joint venture

In January 2018, Maersk and IBM announced their intention to establish a joint venture to provide more efficient and secure methods for conducting global trade using blockchain technology. The new company aims to bring the industry together on an open global trade digitization platform that offers a suite of digital products and integration services.

The platform is currently being tested by a number of selected partners who all have interest in developing smarter processes for trade. As we incorporate learnings and continue to expand the network, a fully open platform whereby all players in the global supply chain can participate and extract value is expected to become available.

Industry challenges

The cost of global trade is estimated at $1.8 trillion annually, with potential savings of roughly 10 percent from more efficient processes. The cost and size of the world's trading ecosystems continue to grow in complexity.

The case for a better way

About the platform

The platform is about reducing global trade barriers, increasing efficiency across international supply chains, and bringing to market a trade platform for containerized shipping that connects the entire supply chain ecosystem.

The platform is being built on an open technology stack and is underpinned by blockchain technology. The two main capabilities at launch will address current visibility and documentation challenges.

A shipping information pipeline

Provides end-to-end supply chain visibility, enabling all actors involved in a global shipping transaction to securely and seamlessly exchange shipment events in real time.
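
To make the idea concrete, here is a minimal sketch of what a single shipment event shared on such a pipeline might look like. The field names are purely illustrative assumptions, not the platform's actual schema:

// A hypothetical shipment event as it might be exchanged on the pipeline.
// Every authorized actor (shipper, carrier, terminal, customs) sees the same
// record once it is committed to the shared ledger.
const shipmentEvent = {
  containerId: 'MSKU1234567',         // illustrative container number
  event: 'GATE_IN',                   // the container arrived at the terminal
  location: 'Port of Rotterdam',
  timestamp: '2018-01-18T09:30:00Z',
  submittedBy: 'terminal-operator-42' // identity of the submitting actor
};
console.log(JSON.stringify(shipmentEvent, null, 2));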

Paperless trade

Digitizes and automates paperwork filings for the import and export of goods by enabling end users to securely submit, stamp and approve documents across national and organizational boundaries.

Potential benefits using the platform

The objective of the platform is to connect and provide benefits to the supply chain ecosystem: a global network of interconnected shipping corridors linking ports and terminals, customs authorities, shipping lines, third-party logistics providers (3PLs), inland transportation, shippers and other actors.

Organizations already participating

Since the collaboration started in June 2016, multiple parties have piloted the platform, including DuPont, Dow Chemical, Tetra Pak, Port Houston, Rotterdam Port Community System Portbase, the Customs Administration of the Netherlands, and U.S. Customs and Border Protection.

A broader group of global corporations has already expressed interest in the capabilities and is exploring ways to use the new platform, including General Motors and Procter and Gamble, which aim to streamline the complex supply chains they operate, and Agility Logistics, which plans to provide improved customer services, including customs clearance brokerage.

Additional customs and government authorities, including Singapore Customs and Peruvian Customs, will explore collaborating with the platform to facilitate trade flows and enhance supply chain security. The global terminal operators APM Terminals and PSA International will use the platform to enrich port collaboration and improve terminal planning. With support from Guangdong Inspection and Quarantine Bureau by connecting to its Global Quality Traceability System for import and export goods, the platform can also link users to important trade corridors in and out of China.

Customize Bluemix with your Git servers

Introducing a new DevOps toolchain feature

You can now interact with your own GitHub Enterprise and GitLab instances from Bluemix public!

Both the GitHub and GitLab tiles feature a new server dropdown menu, giving you the freedom to work with code on GitHub, GitLab, or in your own company’s GitHub Enterprise or GitLab instances.

To get started interacting with your custom instance, follow the one-time setup instructions below. Then, whenever you load the GitHub or GitLab tile, you will see your custom instance appear in the server dropdown menu. From there, you have the same freedom to create, fork, or clone repositories. If you have existing repositories, you can link those as well.

In addition, you can interact with your repositories using a variety of Bluemix DevOps tools. For example, you can edit your code in Eclipse Orion Web IDE; build, test, and deploy your code with Delivery Pipelines; and analyze your code with DevOps Insights.


Prerequisites for custom server interaction

◈ The server must be accessible via the public Internet.
◈ You must be able to provide the root URL and a personal access token.
◈ This setup must be done per individual looking to interact with the custom instance.

One-time setup

1. First, navigate to the toolchain catalog from a new or existing toolchain.
2. Then, choose the GitHub or GitLab tool based on your instance.


3. Next, select Add a custom server from the server dropdown menu.
4. Provide a title to identify your server in the dropdown menu. It will appear in the format “Title (root URL)”.
5. Provide the root URL of your server.
6. If you already have a personal access token, proceed to step #7. Otherwise, complete the following steps depending on whether you are using GitHub or GitLab.

Instructions to retrieve your GitHub personal access token

1. On any GitHub page, click your profile icon and then click Settings.
2. On the sidebar, click Personal access tokens.
3. Click Generate new token.
4. Add a description for the token.
5. Select the repo and user checkboxes to define the access for the personal token.
6. Click Generate token.
7. Copy the token to a secure location or password management application. For security reasons, after you leave the page, you will no longer be able to see the token.

Instructions to retrieve your GitLab personal access token

1. On any GitLab page, click your profile icon and then click Settings.
2. Click Access Tokens.
3. Provide a name for the token.
4. (Optional) Choose an expiration date for the access token.
5. Select the api checkbox to define the access for the personal token.
6. Click Create personal access token.
7. Copy the token to a secure location or password management application. For security reasons, after you leave the page, you will no longer be able to see the token.

7. Provide a personal access token. Note: you will be the only user of your token.
When you've done all this, your custom server details are complete.


8. Finally, click Save custom integration.

When your custom server has been verified, the page will reload with your custom instance selected in the dropdown menu. You can now use your Git instance with Bluemix public!
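
If verification fails, one way to sanity-check the root URL and personal access token outside of Bluemix is to call the Git server's REST API directly. The following Node.js sketch assumes a hypothetical GitHub Enterprise host; for GitLab, the equivalent check is a GET to /api/v4/user with a PRIVATE-TOKEN header:

// check-token.js - verify that a root URL and personal access token work.
const https = require('https');

const host = 'github.mycompany.com'; // hypothetical GitHub Enterprise host
const token = process.env.GIT_TOKEN; // the personal access token

// GitHub Enterprise serves its REST API under /api/v3; an HTTP 200 from
// /user confirms the server is reachable and the token is valid.
https.get({
  host: host,
  path: '/api/v3/user',
  headers: { 'Authorization': 'token ' + token, 'User-Agent': 'token-check' }
}, function (res) {
  console.log(res.statusCode === 200 ? 'Token OK' : 'Failed: HTTP ' + res.statusCode);
}).on('error', function (err) {
  console.error(err.message);
});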


Wednesday, 17 January 2018

Take control of your app feature rollout and measure the effectiveness using Bluemix App Launch service

Let's say you have a popular app in a mobile app store that lets shoppers buy your merchandise. Your buyers are complaining about the complicated checkout process: they tend to forget their username and password and are unable to complete their transactions. While you explore other options to replace username/password authentication, you decide to conduct a small experiment. You have multiple segments of users – Platinum, Gold, Silver, Regular – and you have app users on both iOS and Android. The experiment you want to conduct is to pick Gold users on iOS and offer them TouchID-based checkout. Further, you know users complain about your app theme, so you decide to try two different colors for the button and measure whether there is any significant change in Touch ID feature usage. Lastly, you know the upcoming Black Friday weekend two months from now is a great opportunity to test this hypothesis, so you want to run this experiment only during that weekend.

To summarize, you would want the following:

1. Dark launch a new feature in your existing app without exposing it to anyone
2. Define and implement a new feature and create its variants
3. Create an audience that includes only Gold iOS users
4. Select a time window for the experiment and measure its effectiveness in real time during that window.

The newly launched Bluemix App Launch service will help you perform all of the above and much more. Let's see, step by step, how to accomplish these tasks in a simple and intuitive way.

Note: App Launch is currently an experimental service. To create an instance, open the Bluemix catalog, select Mobile as the category on the left pane, and click App Launch Service.

Now, you are ready to create your first App Launch experiment!

Your mental model for the App Launch service looks something like this:

1. Define a new Feature

2. Build an Audience pool

3. Create an Engagement, and within the engagement:

a. Select the feature created earlier

b. Define and customize multiple variants for this feature

c. Define and select the audience

d. Select the date/time when the engagement will be active

4. Download the SDK and keys and incorporate them within the application

5. Push the new version of the app at least a month before the Black Friday weekend, so that enough users will have downloaded the modified (dark-launched) version from the App Store.

Let us go one step at a time. Once you create an App Launch service instance, you land on the Getting Started page. Quickly skim through the sections, but don't sweat – this blog will walk you through all the steps.


Select Features & Metrics on the left-side pane. Let's get familiar with what a Feature is: a Feature in the App Launch service is a simple grouping of attributes and metrics. Start by creating a Feature with the following steps:

1. Create a feature called Touch ID Button.
2. Add a feature attribute called Button Label and set its value to Touch ID Checkout.
3. Add another feature attribute called Button Color and set its default value to #AD343E.
4. Add a feature attribute called show_button and set its value to true.
5. To track metrics for this feature, add a metric called TouchID Clicked.

With these steps, you accomplished Step 2 of your requirement; the sketch below shows the shape of what you defined.
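
A rough sketch of that feature as a plain JavaScript object (illustrative only; this is not the service's actual storage format):

// Illustrative shape of the Touch ID Button feature defined above.
const touchIdButtonFeature = {
  name: 'Touch ID Button',
  attributes: {
    'Button Label': 'Touch ID Checkout',
    'Button Color': '#AD343E', // default; engagement variants can override this
    'show_button': true
  },
  metrics: ['TouchID Clicked'] // counted each time the button is used
};
console.log(touchIdButtonFeature.attributes['Button Label']);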


Once Feature creation is done, select Audience on the left-side pane. The Audience in App Launch is how you create the segment of users you'd like to target with this feature. Follow these steps:

1. New Audience Attribute

a. Add a New Audience attribute called customer type.

b. Set customer type to be a string array.

c. In Allowed Values, enter platinum, gold, silver, regular.

2. New Audience – show this feature to gold iOS users only

a. Add a New Audience called Gold iOS users.

b. Check Platform as iOS.

c. Under Attributes, select customer type.

d. Check gold in the displayed customer type list.

e. Select Save.

In the above step, you accomplished Step 3 of your requirement.


Once a Feature is created and an Audience is defined, it's time to bring it all together in an Engagement. An Engagement in App Launch brings one or more Features and Audiences together and allows you to experiment by creating multiple variants. Follow these steps:

1. Create an Engagement called Holiday Button.
2. Select the Experiment Mode as A/B Testing.
3. Select the feature you would like to A/B test: from the dropdown, select Touch ID Button.
4. To A/B test the button color for the Touch ID Button feature:

a. For Variant 1, set the button color to #AD343E.
b. For Variant 2, set the button color to #846075.

5. Select an audience – Gold iOS users.
6. Specify the Reach % for each variant, for example, 50% each:

a. For Variant 1, set Reach to 50%.
b. For Variant 2, set Reach to 50%.

7. Specify a start date; if left empty, the engagement starts immediately:

a. Set Event type to Time, set When to Start, and specify the Date.

8. Select Create.

In the above step, you created an engagement, accomplishing Step 4 of your requirement; conceptually, it looks like the sketch below.
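
The engagement bundles the feature, the audience, the variants with their reach, and the schedule, along these lines (an illustrative sketch; the field names are assumptions, not the service's actual schema):

// Illustrative shape of the Holiday Button engagement defined above.
const holidayButtonEngagement = {
  name: 'Holiday Button',
  mode: 'A/B Testing',
  feature: 'Touch ID Button',
  audience: 'Gold iOS users',
  variants: [
    { 'Button Color': '#AD343E', reach: 0.5 }, // variant 1: half the audience
    { 'Button Color': '#846075', reach: 0.5 }  // variant 2: the other half
  ],
  start: '2018-11-23' // hypothetical start date; if empty, starts immediately
};
console.log(holidayButtonEngagement.variants.length + ' variants configured');
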
All that's left now is to download the SDK, Feature Toggle, and Metric keys and start incorporating them within your TouchID logic. Let's look at a few code snippets. The Feature Codes section on the Feature Details screen shows the list of codes to copy.


Also, ensure the state of the Feature is set appropriately. For example, if the Feature is set to Under Development, it will be unavailable to be part of an Engagement. Ensure the Feature is set to Ready so that it can be included in an Engagement.


App Launch SDKs offer a rich set of APIs to check whether a Feature is enabled and then get the values of its Feature keys, following a check-then-fetch pattern.

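As a minimal sketch of that check-then-fetch pattern (the SDK object and method names here are hypothetical stand-ins for the platform-specific App Launch APIs, stubbed so the example runs):

// Hypothetical stand-in for the App Launch SDK; in a real app the SDK
// serves these values from the currently active engagement.
const appLaunch = {
  getFeature: function (name) {
    return {
      isEnabled: function () { return true; },
      getPropertyValue: function (key) {
        return { 'Button Label': 'Touch ID Checkout',
                 'Button Color': '#AD343E' }[key];
      }
    };
  },
  sendMetric: function (name) { console.log('metric recorded: ' + name); }
};

var feature = appLaunch.getFeature('Touch ID Button');
if (feature.isEnabled()) {
  var label = feature.getPropertyValue('Button Label');
  var color = feature.getPropertyValue('Button Color');
  console.log('render checkout button "' + label + '" in ' + color);
  appLaunch.sendMetric('TouchID Clicked'); // report usage back to the service
}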

Once coding is complete, go ahead and post your app to the App Store or Play Store. As users download and use the new version of the app, they receive the dark-launched feature hidden in the code. The App Launch service SDK registers the users as they launch the app, but since the engagement is disabled until the week of Black Friday, no users will see the new TouchID button.

As intended, on the Black Friday weekend the Gold iOS users will see a new TouchID-based checkout button when they check out their items in the app. Fifty percent of those users will see one button color and the other fifty percent will see the second variant's color. As each user buys merchandise by clicking the TouchID button, the App Launch service collects these events and displays the experiment's effectiveness in real time. Once the weekend's sale is over, the feature goes dormant as the engagement is disabled.

Tuesday, 16 January 2018

Using SSH tunnels and Selenium to test web applications on a continuous delivery pipeline

Developers often need to test their web applications, and in particular to automate those tests as part of a continuous integration (CI) pipeline. One tool that helps meet this requirement is Selenium, software designed to automate browser behaviour: you program it to visit a particular web page and then perform a series of actions on that page. Most often it is leveraged to test web applications, although its functionality is not limited to that single use case. With a default configuration, however, this isn't possible, as the Selenium Server has no way of reaching an application that has been started within a CI container (see figure 1).

This article will walk you through the steps required to use SSH tunnels to allow a secure connection between our CI container and the Selenium Server (see figure 2). We will also secure the Selenium Server container itself using SSH, to prevent it from being packet-sniffed and used by unauthorized people.


Figure 1 – the set-up when using the tools out of the box. Black lines indicate firewalls.


Figure 2 – the end result of following this article. Yellow lines indicate SSH tunnels.

WebdriverIO is a Node.js framework which allows you to control a browser or web application using JavaScript. Together Selenium and WebdriverIO can be combined with an assertion library such as Chai to provide a powerful, automated test environment for things like end-to-end testing. 

Secure Selenium Server Docker container with SSH

The first step is to install an SSH server into the Selenium Standalone Server Docker image, as it is not installed by default. The Dockerfile to do this looks like the following:

# Extend the official Selenium image with an SSH server
FROM selenium/standalone-chrome:latest
RUN sudo apt-get -y update && sudo apt-get install -y openssh-server
# We have to expose port 22 ourselves for SSH communications
EXPOSE 22
USER root
# Custom entry point (see next section) sets the seluser password at runtime
COPY entry_point.sh /opt/bin/entry_point.sh
RUN chmod +x /opt/bin/entry_point.sh
ENTRYPOINT /opt/bin/entry_point.sh
USER seluser

As you can see, this uses the Selenium Standalone Chrome image as the base image, and then installs SSH using apt-get. Note that we have to expose port 22 ourselves for SSH communications.

Customising the seluser password at runtime

As another extension, we can make a change to the Selenium entry_point.sh file which allows us to specify the password for seluser when running the container. By default the password matches the username. Without this change we would have to manually connect to the container to change the password. Copy the entry_point.sh for the Selenium base image and add the following line:

echo seluser:$SELUSER_PASS | sudo chpasswd

This will allow us to pass in SELUSER_PASS as a variable to the run container command which will then be set as the password for seluser.

Building and running the container

To build our container image, called ssh-selenium-standalone-chrome, simply run docker build -t ssh-selenium-standalone-chrome . from the directory containing both our Dockerfile and the edited entry_point.sh.

To run our container behind SSH (that is, without publicly exposing port 4444 as the image does out of the box), use docker run -d -p 22:22 -e SELUSER_PASS=<password_here> ssh-selenium-standalone-chrome, where <password_here> is our desired password for seluser.

SSH tunnels

We now have a Selenium container running that is not accessible through the typical port of 4444. It is worth noting that localhost:4444 on the container in which Selenium is running will still give us access to the instance. So how are we meant to access Selenium? SSH tunnels are the answer. SSH tunnels are a mechanism that allows us to create a secure connection between a local machine (in this case our CI pipeline where our test application is running) and a remote machine (in this case our Selenium Docker container). By using SSH tunnels we can run the application we wish to test from our CI stage without exposing it outside of the container in which the stage is running.

The commands that follow should all be executed as part of the continuous integration stage within which you wish to run your Selenium tests, prior to executing the tests themselves.

A local SSH tunnel

The first SSH tunnel we require is what is known as a local SSH tunnel. This is executed from the CI stage where the application was started:

ssh -tN -oStrictHostKeyChecking=no -L 7777:localhost:4444 seluser@<selenium_container_ip_address> &

replacing <selenium_container_ip_address> with the IP address of the Docker container within which Selenium is running.

The -tN options allow us to complete the ssh command without input from the terminal, and -oStrictHostKeyChecking=no allows us to do so without checking host keys. It may be necessary to use the sshpass utility with its -e option so that the ssh command can collect the required SSH password from your CI stage's environment variables. Once this command has completed, localhost:7777 can be used from where ssh was executed to access the Selenium container.

A remote SSH tunnel

We now have communication one way from our application to Selenium, but our Selenium container also needs to be able to reach our application. For this, we need a remote SSH tunnel:

ssh -tN -oStrictHostKeyChecking=no -R 9999:localhost:8080 seluser@<selenium_container_ip_address> &

The options are required for the same reasons as with the local tunnel. The result of doing this means that the Selenium container can use localhost:9999 to access the application that is running on port 8080 where the ssh command was executed.

Pulling it all together

Having done all of the above, we can begin to write some end-to-end tests to run on our pipeline. These tests will live as part of the source code for your application and are built with the application in a prior CI stage.

Before we can write some test assertions we have to configure a WebdriverIO client to point to both our Selenium container and our application. Typically we would do this in a before block for our tests as follows:

const webdriverio = require('webdriverio');

before(() => {
  browser = webdriverio.remote({
    desiredCapabilities: { browserName: 'chrome' },
    host: 'localhost', // local end of the SSH tunnel to Selenium
    port: 7777
  });
  // open our app, reached from Selenium via the remote tunnel
  return browser.init().url('http://localhost:9999');
});

Note that the options we pass to remote point to the address of the Selenium container, reached through our local SSH tunnel. We also set the URL for our application, as seen from the Selenium container, to the address established by the remote SSH tunnel. Having configured our WebdriverIO client, we can now write and run a series of tests in the necessary it statements from a runner such as Mocha, with assertions provided by Chai.
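
From there, a complete test might look like the following sketch, assuming a Mocha runner and the Chai assertion library (the expected page title is a hypothetical example):

const { expect } = require('chai');

describe('checkout page', () => {
  it('loads with the expected title', () => {
    // browser was configured in the before block above; the application is
    // reached by Selenium through the remote tunnel at localhost:9999
    return browser.getTitle().then((title) => {
      expect(title).to.equal('My Application'); // hypothetical title
    });
  });

  after(() => browser.end()); // close the Selenium session when done
});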

Friday, 12 January 2018

App Launch on IBM Cloud Services

In an era of continuous delivery, defect fixes, enhancements and changes are delivered swiftly into production.

If the users are happy with the change, good job! If not, rolling back a feature is a risky task that involves a lot of effort, and it must happen before significant damage is done.

To avoid such instances, we must partner with our users early in the app development life cycle. Business innovation must be driven by customer experience.

As the changes go into the application:

It is essential to know:

◈ How well the end users have accepted the change
◈ How to get feedback on the app directly from the users

It would be good to try:

◈ Testing beyond the limits of the traditional test lab
◈ Running experiments against the existing features

It will be helpful if you could:

◈ Switch new features “on” and “off” with very little change in the infrastructure
◈ Turn “off” features under development and still be able to push other changes

App Launch on IBM Cloud Services will help the app owner achieve all the above-mentioned objectives and much more. App Launch enables an app owner to bring innovations to market faster and make broader decisions based on real-time feedback.


Below are a few use cases that showcase how an app owner can take advantage of App Launch:

◈ Event based change

Roll out a new user interface based on events like Christmas, New Year, Thanksgiving and so on. You can schedule when your feature is to be enabled. Seasonal app themes and features make the app feel more customized. Small changes to your app make a big difference!

◈ Special features for Special customers

Enable a new biometric login feature for Platinum users only. You can then roll out this feature to a larger audience based on how the Platinum users have accepted the change.

◈ Localized App features

Include a survey button based on geography. Roll out features based on geographical location.

◈ Create variations through A/B testing

Implement new changes in your application and run experiments with targeted end users. Gain real-time customer insights.

◈ Advertise based on proximity

Send a mobile promotion code when customers are close to the store.

◈ Personalized messages

Send personalized messages to a customer, like wishes on their birthday or anniversary.

◈ Introductory tutorials

Include introductory tutorials for new users. It is important for a user to understand the features of an app before using it.

◈ App updates

Inform users about app updates or the launch of a new feature. Users' confidence in the product increases when they know that their app is always improving.

◈ Price drop

Notify users of a price drop for an item they visited in the store catalogue but have not purchased, prompting them to act on it.

◈ Special offers

Send a special offer to users who have made transactions of more than $X in the last week.

◈ Survey / Feedback

Roll out surveys or polls to get feedback on the app. Direct feedback from customers helps improve a product.

◈ Showcase your new feature

Send notifications to identified users who have not visited a new feature / area in the application.

◈ Retain inactive users

Identify inactive users based on their user sessions. Send notifications or offers via messages to users who are not actively using the app.

◈ Get Application logs

Notify the app owner about crashes and errors in the app, with detailed logs. Crash reporting provides rapid insight into why an app fails.

◈ Metrics on how a feature is used

Get measurable data on how a feature is being used: how many users use it, the average number of times it is used per day, the categories of users who use it, and so on.

Wednesday, 10 January 2018

Invoice processing in the digital world, powered by BPM and RPA

Invoice processing is possibly one of the oldest business processes known to mankind. From barter to bitcoin, payments have come a long way. However, despite all the innovations and technology, invoice processing is still burdened with a large number of manual tasks and inefficiencies. This can result in issues such as delayed payments and incorrect payment amounts.

Typical Invoice Processing


Typical invoice processing

Here are some key pain points with invoice processing that we have heard about from our customers:

◉ Many manual tasks, mostly repetitive and mundane. These include downloading invoice attachments from emails, reading them carefully, and entering data into enterprise applications like ERP or accounting systems. Routine tasks can take a lot of time away from the team involved in invoice processing, leading to fatigue and even frustration.
◉ Shadow IT such as spreadsheets and local Access databases. Almost every organization's invoice processing is supported by a number of shadow IT artifacts, like spreadsheets that track tax deductions and payments. These supporting files and programs are a very important part of the puzzle, but they can be too complex to integrate into the approved IT solutions. As a result, these shadow tasks are left out of automation initiatives.
◉ Longer cycle times. Invoice processing today takes longer than one would expect, because of the large number of manual activities and approvals required from different roles within the organization.
◉ Service level agreement (SLA) violations. Invoice processing normally requires the review of multiple approvers, which can slow the process down. SLA violations can result in not only vendor frustration but also added costs in the form of penalties.

Digital automation of invoice processing

The key to addressing the above pain points lies in analyzing the business process in depth. An analysis of invoice processing reveals that most activities belong to one of the following categories:

1. Mundane and repetitive activities like reading attachments, maintaining spreadsheets and entering data into legacy systems
2. Human activities that include managerial approvals, decisions that involve human judgement and negotiations
3. System activities, such as creating business rules for approvals, processing of accounting entries in the system, updating vendor records and payment processing

Invoice processing deserves a solution that automates these activities across all three categories. Digital automation powered by business process management (BPM) and robotic process automation (RPA) can provide nearly everything needed for invoice processing. RPA automates the mundane and repetitive activities. System activities can be configured in BPM, which orchestrates all the activities across humans, bots and systems. Only human activities such as managerial approvals remain.

This diagram shows the transformed invoice processing workflow:

Process orchestration across humans, bots and systems

The solution stack

Here is what the solution for digital automation of invoice processing looks like:

The invoice processing BPM RPA solution

The solution consists of:

◉ IBM Business Process Manager – for orchestrating activities across bots, humans and systems in a unified process layer to provide monitoring, tracking, visibility and proactive follow-ups for manual tasks.
◉ IBM Robotic Process Automation with Automation Anywhere – for handling routine, repetitive and mundane tasks, thus reducing manual processing time and removing copy-and-paste errors.

By bringing these two digital automation technologies together, invoice processing can be transformed into a more efficient business process. As a result, companies can increase productivity and improve customer service.

Tuesday, 9 January 2018

Expanding data protection to meet the challenges of VM growth


While it’s easy to deploy VM-intensive networks, it’s not so easy to back up and restore all of the data stored inside hundreds, or thousands, of VMs throughout those networks. Data protection often requires special IT skills, and multiple data protection products – and time and cost considerations to get the job done.

Many organizations are excited about how easy it is to deploy VMs – and rightly so. Fast deployments bring business agility.

What many organizations don’t anticipate is the complexity of backups, data protection, data retrieval and data recovery. Managing VMs is more complex than one might assume. For example, IT leaders have to  worry about security, availability and the governance of data. These problems become more difficult when organizations are confronted with more demanding workloads, including mission-critical applications deployed in VMs.

Here are three questions that become top-of-mind for IT and business managers:

◉ How am I going to manage backup for all of these VMs?
◉ How am I going to protect security and governance of all of the data that is associated with all of these VMs?
◉ How am I going to manage all of this data from so many places in the network?

IBM Spectrum Protect Plus for VM data protection

IBM recently developed a new software platform designed to bring many enterprise capabilities to scale-out, VM-intensive environments supporting demanding workloads and large datasets. This approach addresses the data storage needs of an ever-wider range of customers: small and medium-sized businesses (SMBs); value-added resellers (VARs) managing data on behalf of SMB customers; managed service providers (MSPs) delivering cloud services; and large enterprises with substantial VM-based computing environments.

IBM Spectrum Protect Plus is an all-software solution that automatically backs up all VMs and application data associated with workloads running on the VMs that live inside an organization’s network. This is especially important for organizations utilizing the intensive analysis of transactional data to gain insights that drive new business.

Importantly, IBM designed this new software – built from the ground up for virtualized environments – to work in mixed environments that include both VMware vSphere and Microsoft Hyper-V virtual machines. The presence of both of these hypervisors mirrors the mixed-vendor environments of many customers’ virtualized networks.

The ability to find, retrieve and manage data throughout a virtualized infrastructure is becoming vital to many organizations, as VMs are increasingly supporting business-critical and mission-critical workloads. Rapidly growing data from applications and databases running inside those VMs is making new demands on the capacity and manageability of data resources throughout organizations. At the same time, new sources of data – such as social media for sentiment analysis and network “edge” data generated by sensors and consumer electronics – drive more data use. This creates more data-protection challenges for many organizations.

IBM Spectrum Protect Plus: How it works

IBM Spectrum Protect Plus software includes a point-and-click interface that allows first-time users to quickly select service-level agreements (SLAs) from a predefined list in a self-service model.  The Spectrum Protect Plus software can automatically index all data for backups and VMs under management. This process creates a metadata catalog that can find, store and retrieve data for all VMs identified for protection and data recovery. The metadata catalog is replicated into many copies, preventing single-point-of-failure issues that would undermine high availability.

Customers now have the option to connect IBM Spectrum Protect Plus with IBM Spectrum Protect software, which supports the long-term archiving of data across multiple storage tiers – on disk, tape, object stores and cloud data stores. This links VM block-based data to the multi-tiered data storage resources of the enterprise data center and the cloud.

Ease of use and automation

Ease of use is a key design priority for data protection software that supports wider use. Graphical user interfaces (GUIs) shorten the time needed to manage data, compared with command-line management tools. Automation and ease of use make data protection more accessible to those working in data-intensive roles, such as DevOps developers and big data analysts.

The ability to quickly access a unified view of data associated with VMs is key to ensuring SLAs for applications throughout the entire organization. To be effective, data protection solutions must protect all data – both local and remote – that is vital to their business.

Monday, 8 January 2018

Blockchain in telecom: From concept to reality

Blockchain is currently one of the most talked-about technologies. Across industries, organizations are exploring blockchain’s potential impact in their space and how they can benefit from this emerging technology. The communications service provider (CSP) industry is no exception.

The biggest questions for CSPs, however, are “Where is the bang for the buck?” and “Where and how do we get started?” The good news is the opportunity to benefit appears real. The core attributes of blockchain’s shared ledger approach help provide trust, security, transparency and control across the participating ecosystem for all points in a transaction process. This results in the potential for lower costs, faster throughput and improved experiences for all players.

We see the greatest impact for blockchain in streamlining internal processes, building blockchain-based digital services, and providing trust, security and transparency in business ecosystems, including the IoT (see Figure).


Streamlining internal processes

The modularity provided by smart contracts enables various aspects of CSPs’ operations to be streamlined, including billing, roaming, wholesale, NFV management and supply chain management. In the context of roaming, blockchain’s benefits include faster identification of visiting subscribers, prevention of fraudulent traffic and claims reduction. In addition, the elimination of clearing houses could lead to significant cost reduction.

Blockchain technology is also well suited to supply chain management, improving efficiency between CSPs, suppliers and distributors. We already see examples of blockchain projects in supply chain management emerging in other industries, such as at Walmart, Maersk and IGF.

Developing trusted digital services

CSPs can provide a variety of digital customer services built on blockchain, bringing them new revenue streams. Areas in which they should consider deployment of blockchains include digital asset transactions (micropayments for music, mobile games and the like), mobile money (subscriber-to-subscriber money transfers, international remittance) and identity-as-a-service.

New digital identity ecosystems are coming, and CSPs should be among the leaders and early adopters. Because CSPs enjoy a high level of customer trust, they are well positioned to offer such a service. The vast amount of data CSPs possess and the proliferation of smartphones put CSPs in a unique position to act as a source of identity and authentication. New revenue streams could be generated by offering identity management services to both subscribers and business partners.

Collaborating in ecosystems

For CSPs that want to become digital services enablers (DSEs) by creating and operating platforms, blockchain could become a foundational building block to handle complex transactions across multiple participants. Early stage examples include blockchains for advertisement sales and digital rights management, to name a few.

In addition, blockchain could play a role in machine-to-machine (M2M) and IoT environments, where devices connected to the internet automatically interact with each other by collecting and exchanging data. Blockchain and smart contracts could both monitor and orchestrate these interactions. Recognized as trusted parties, CSPs are well placed to accelerate this development and realize their ambitions in the IoT space.

The way forward

From our survey of 174 C-suite executives from the telecommunications industry, we found that a significant 36 percent of CSP organizations are already considering or actively engaged with blockchains. Though blockchain technology is still young and evolving, many CSP executives expressed confidence in its potential for their organization. To move toward reaping the benefits, we recommend the following first steps:

◉ Spend time with a lead partner in blockchain to understand the business models and technologies, as well as understand the early use cases, proof points and emerging solutions.

◉ Evaluate where the technology stands today, the various blockchain providers and the position on standards and regulations. Join industry groups like the Linux Foundation’s Hyperledger, given these groups can facilitate agreements on standards.

◉ Invest in ideation on potential opportunities in both the revenue growth/platform business area and the internal process streamlining area.

Friday, 5 January 2018

IoT monetization by telcos: Hype or hope?

When we talk to telcos about where they’re going to generate new revenue, almost all of them have the internet of things (IoT) high on their list. IoT is going to be huge after all, with billions of connected devices worldwide. Many telco CEOs have made bold statements about how much revenue they’re going to generate from it. But the big questions are: How will they do this? What are the business models?

Four ways for a telco to play a role in IoT

1. Network

Many operators are already active with IoT sensors, devices and network connectivity. ‘Connecting everything’ is a natural extension of their core business of ‘connecting people’.

The question, however, is how much connectivity will be realized through licensed spectrum, and how much through alternative methods like WiFi or low-power networks in unlicensed spectrum. The economics for telcos are changing dramatically as companies like Sigfox (a French wireless company) recognize that the IoT market is now big enough to justify building a low-power network in unlicensed spectrum, competing directly with the telcos.

2. Platform

Many telcos, unsatisfied with a SIM-only business, are looking to extend their machine-to-machine platforms to the platform business and to offer IoT platform capabilities via application program interfaces (APIs) to ecosystem participants and developers, who can in turn build IoT applications more quickly and efficiently.

It seems that most telcos are interested in the IoT platform business. A recent IBM study on ecosystems found that 57 percent of operators surveyed want their organization to become a platform provider. Meanwhile, companies like Vodafone and Orange are already adopting IoT platform architectures, and see orchestration, analytics and policy management as key components.
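
To illustrate the platform idea, the sketch below models a tiny device registry, telemetry ingestion and query surface of the kind a telco IoT platform might expose through its APIs. Every name here is invented; it shows the shape of the capability, not any vendor's actual interface.

```python
# Hypothetical sketch of an IoT platform surface: register devices, ingest
# telemetry with a basic policy check, and query data per ecosystem partner.
registry, telemetry = {}, []

def register_device(device_id: str, owner: str) -> None:
    registry[device_id] = {"owner": owner}            # platform keeps the device registry

def ingest(device_id: str, reading: dict) -> None:
    if device_id not in registry:
        raise KeyError("unknown device")              # policy enforcement at the API edge
    telemetry.append({"device": device_id, **reading})

def query(owner: str) -> list:
    mine = {d for d, meta in registry.items() if meta["owner"] == owner}
    return [t for t in telemetry if t["device"] in mine]   # analytics building block

register_device("meter-17", owner="utility-co")
ingest("meter-17", {"kwh": 1.42})
print(query("utility-co"))
```

Developers building on such APIs never touch SIM provisioning or network plumbing; the platform handles orchestration, analytics and policy management underneath, which is exactly why telcos see it as a move up the value chain.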

3. Applications

Some telcos are offering IoT applications and services themselves. One way they’re doing this is by buying applications from the market. Vodafone, for example, bought Cobra Automotive (now Vodafone Automotive) to accelerate its connected car strategy. Others are developing domain or industry expertise in-house to build tailor-made IoT applications.

4. Operations

Some telcos (not many, we expect) might even have the ambition to run a full operations service, managing the service level agreements (SLAs) of IoT services themselves.

The data dilemma

Despite the many paths telcos can choose, one point of focus remains consistent across them all: data security. Telcos are seen as the custodians of the network and typically enjoy a highly trusted position with their customers, in many countries exceeding the trust placed in financial institutions and governments.

Meanwhile, the growth of IoT means hackers have a bigger playing field, and protecting consumers’ data across networks and stored devices is more urgent than ever. A huge amount of personal information is at stake, and hackers can seriously disrupt society by abusing this data. Brand and reputational damage could end up being a bigger issue than financial damage, and in most cases is the cause of it.

Security must be the bedrock of IoT development and deployment. Securing multiple points of vulnerability – such as smart watches, healthcare devices or home smart devices – is critical to IoT success.

IoT security is a cooperative effort across the ecosystem, and telcos can play a central role. They’re in the best position to develop initiatives and services focused on increased trust and security, but only if they adopt the right technologies for transparent, private and secure environments, such as:

◈ blockchain – which could be a foundational building block in the IoT ecosystem;
◈ secure cloud technology – to provide optimal security in the cloud; and
◈ artificial intelligence – cognitive security systems that identify security threats and provide recommendations to stop them (sketched below).
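
As a toy illustration of the last item, the snippet below flags devices whose traffic deviates sharply from their own baseline, a very simple stand-in for what a cognitive security system does at far greater sophistication. The data and threshold are invented.

```python
# Toy anomaly detector: flag devices whose traffic deviates sharply from
# their own baseline, the simplest form of the 'AI for security' idea.
import statistics

baseline_mb = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8]    # a device's normal hourly traffic
mean, stdev = statistics.mean(baseline_mb), statistics.stdev(baseline_mb)

def looks_anomalous(observed_mb: float, z_threshold: float = 3.0) -> bool:
    return abs(observed_mb - mean) / stdev > z_threshold

print(looks_anomalous(2.4))    # False: within normal variation
print(looks_anomalous(55.0))   # True: possible compromise or data exfiltration
```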

Wednesday, 3 January 2018

Banking digitization: The RPA perspective

Banks and financial institutions are under constant pressure to improve their financial performance, reduce costs and improve returns on capital. They want to scale their processes rapidly, but the target is elusive due to the complexity and cost of back office processes. In tandem, they have to prioritize the modernization and digitization of their front office and customer-facing operations. This has led to less attention being paid to back office processes, which often involve thousands of people manually managing multiple high-volume, discrete data and customer requests. The result is that back office processes continue to rely on people and paper, which leads to rising operating expenses, increased risk of financial crime, reduced speed to market, high error rates, low productivity, flat revenues, and challenging prospects for future growth. Furthermore, the growth of new products and offerings, merger and acquisition activity, and the increasing burden of regulatory compliance have added layers of complexity to business processes, which in turn has increased operating costs.

By 2019, it is estimated that process automation will change up to 25% of the work associated with all job categories. Moreover, these changes can have a profound impact on the major concerns that banking and financial institutions face today, as highlighted below.

Increased Operating Capital: Banking organizations rely heavily on their human workforce for middle and back office functions. They can effectively achieve cost reductions and increase operating capital by automating repetitive human tasks.

New Regulatory Requirements: Compliance costs are rising, and “as a service” concepts are emerging to manage them. With automation, better overall compliance is achieved through transparency and auditability. The results are higher accuracy, lower compliance risk, and the ability to complete compliance tasks at a lower cost (illustrated in the sketch below).

New Entrants: The threat from new entrants is real, and one way to mitigate it is to act more quickly. Process automation helps offset challenges related to technical debt, enabling banks to act quickly and at lower cost. It also helps banks offer better digital engagement channels to their clients and compete with fintechs and startups.

Omni Channel Digitization: Digitization and improved customer experience are essential for banks, but existing legacy back end infrastructure continues to present challenges. Digitization can be extended to the back office through intelligent process automation.
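
On the compliance point above, the sketch below shows the kind of append-only audit trail an automated process can produce as a side effect of doing its work. The function names and records are illustrative only.

```python
# Illustrative audit trail: every automated action is written to an
# append-only log that a compliance team or regulator could replay.
import functools, json, time

AUDIT_LOG = []   # in production this would be write-once, tamper-evident storage

def audited(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        AUDIT_LOG.append(json.dumps({
            "ts": time.time(), "action": fn.__name__,
            "args": repr(args), "result": repr(result),
        }))
        return result
    return wrapper

@audited
def approve_payment(account: str, amount: float) -> str:
    return f"approved {amount:.2f} for {account}"

approve_payment("ACC-007", 1200.0)
print(AUDIT_LOG[-1])   # the compliance record the robot produced automatically
```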

With newer automation technology featuring rapid development and deployment, banks can automate their processes in multiple scenarios, including:

◉ Managing interactions between multiple systems, eliminating the manual sourcing of data and the tools used to support it (see the sketch after this list)

◉ Better managing middle and back office operations, improving process efficiency and lowering error rates for better performance

◉ Increasing speed and accuracy when dealing with large data volumes

◉ Better management of repeatable tasks, freeing up employees to focus on higher value add work

◉ Improved customer experience and engagement

◉ Laying a foundation that enables further innovation and transformation

◉ Allowing employees to concentrate on a handful of data exceptions and apply more of their decision-making capabilities
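
Here is a hedged sketch of the first scenario in the list: a software robot moving records from a legacy export into a target system and routing only the exceptions to a human queue. The system names, data and validation rule are invented for illustration.

```python
# Illustrative robot: read records exported from one system, post valid
# ones into another, and queue only the exceptions for human review.
import csv, io

LEGACY_EXPORT = io.StringIO(
    "account,amount\nACC-001,250.00\nACC-002,not-a-number\nACC-003,99.50\n"
)

def post_to_core_banking(account: str, amount: float) -> None:
    print(f"posted {amount:.2f} to {account}")        # stands in for a real API call

exceptions = []                                        # the human work queue
for row in csv.DictReader(LEGACY_EXPORT):
    try:
        post_to_core_banking(row["account"], float(row["amount"]))
    except ValueError:
        exceptions.append(row)                         # only exceptions need a person

print(f"{len(exceptions)} record(s) need human review: {exceptions}")
```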

Banks, including our clients, have been experimenting with innovative automation technology enabled by cloud and improved analytics, helping them advance on their ‘path to cognitive’. Many have seen significantly improved outcomes during their initial pilots, encouraging them to accelerate these activities and look for additional areas where they can implement similar solutions. IBM works with clients across the financial services sector, covering areas like Trade Services, Payments, Account Management, Wealth, Operations, Lending, Risk & Compliance, Finance, and Anti-Money Laundering. Through our experience in process services, we know our banking customers and can assist in evaluating potential targets and benefits of automation.

With the latest shift from desktop automation to robotics on the cloud, the capabilities that automation can assist with have now expanded to include:

◉ Operating 24×7
◉ Freeing up users’ desktop bandwidth
◉ Better meeting the increasing security needs of our clients
◉ Allowing for the reuse of objects in the setup and operation of software robots (sketched below)
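
The last capability, object reuse, is sketched below: the same login step is shared by two different software robots, each assembled from a common library of steps. All names are illustrative.

```python
# Illustrative object reuse: robots are assembled from a shared library of
# steps, so common work like logging in is built once and reused everywhere.
from typing import Callable, Dict, List

def login_step(ctx: Dict) -> None:
    ctx["session"] = f"session-for-{ctx['robot']}"     # reusable step shared by all robots

def kyc_check_step(ctx: Dict) -> None:
    print(f"{ctx['session']}: running KYC check")

def address_update_step(ctx: Dict) -> None:
    print(f"{ctx['session']}: updating customer address")

def run_robot(name: str, steps: List[Callable[[Dict], None]]) -> None:
    ctx = {"robot": name}
    for step in steps:                                  # each robot is just a list of steps
        step(ctx)

run_robot("kyc-bot", [login_step, kyc_check_step])                  # robot A
run_robot("address-bot", [login_step, address_update_step])         # robot B reuses login_step
```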

With Robotic Process Automation, banking organizations can realize fully digitized end-to-end customer journeys, including the last mile of business processes, which has traditionally been the hardest to automate. This can include adaptive learning capabilities, allowing for more intelligent, prescriptive and predictive decisions and putting banks firmly on a ‘path to cognitive’, ready to implement cognitive solutions.

IBM uses a mix of technology to deliver automation to its clients.  Our intelligent automation continuum caters to clients based on their stage of automation maturity.

[Figure: IBM’s intelligent automation continuum]

IBM has deep, proven experience in enabling process automation and developing transformational capabilities for the banking industry. Based on that experience, we believe automating back office operations significantly improves a customer’s journey and experience, leading to improved business profitability and more timely regulatory compliance, along with the speed, scalability and agility that this transformation brings.

Today’s marketplace continues to be governed by changing financial and economic trends. Banks need to identify faster, more economical, low-risk approaches so they can reduce costs and deliver exceptional customer service and experiences. Banks should not miss the opportunity to look further into the benefits of automation, creating and executing process automation transformation programs that can have maximum impact on their business and operations.