Sunday, 24 June 2018

Taking the AI training wheels off: moving from PoC to production

In helping dozens of organizations build on-premises AI initiatives, we have seen three fundamental stages organizations go through on their journey to enterprise-scale AI.

First, individual data scientists experiment on proof of concept (PoC) projects that may be promising. These PoCs often hit knowledge, data management and infrastructure performance obstacles that keep them from reaching the second stage: delivering optimized, trained models quickly enough to provide value to the organization. Moving to the third and final stage of AI adoption, where AI is integrated across multiple lines of business and requires enterprise-scale infrastructure, presents significant integration, security and support challenges.

Today IBM introduced IBM PowerAI Enterprise and an on-premises AI infrastructure reference architecture to help organizations jump-start AI and deep learning projects, and to remove the obstacles to moving from experimentation to production and ultimately to enterprise-scale AI.

On-premises AI infrastructure reference architecture


AI and deep learning are sophisticated, rapidly changing areas of data analytics. Not many people have the extensive knowledge and experience needed to implement a solution (at least not today).

To help fill this knowledge gap, IBM has built PowerAI Enterprise: easy-to-use, integrated tools to get open source AI frameworks up and running quickly. These tools use cognitive algorithms and automation to dramatically increase the productivity of data scientists throughout the AI workflow. This tested, validated and optimized AI reference architecture includes GPU-accelerated servers purpose-built for AI. There is also a scalable storage infrastructure that not only cost-effectively handles the volume of data needed for AI, but also delivers the performance needed to keep data-hungry GPUs busy all of the time.


IBM AI Infrastructure Reference Architecture

Ritu Joyti, Vice President analyst for Cloud IaaS, Enterprise Storage and Servers at IDC, noted: “IBM has one of the most comprehensive AI solution stacks that includes tools and software for all the critical personas of AI deployments, including the data scientists. Their solution helps reduce the complexity of AI deployments, helps organizations improve productivity and efficiency, lowers acquisition and support costs, and accelerates adoption of AI.”

One customer that has successfully navigated the new world of AI is Wells Fargo, which uses deep learning models to comply with a critical financial validation process. Its data scientists build, enhance and validate hundreds of models each day, and speed is critical, as is scalability, as they deal with greater amounts of data and more complicated models. As Richard Liu, Quantitative Analytics manager at Wells Fargo, said at IBM Think: “Academically, people talk about fancy algorithms. But in real life, how efficiently the models run in distributed environments is critical.” Wells Fargo uses the IBM AI Enterprise software platform for the speed and the resource scheduling and management functionality it provides. “IBM is a very good partner and we are very pleased with their solution,” added Liu.

When a large Canadian financial institution wanted to build an AI Center of Competency for 35 data scientists to help identify fraud, minimize risk, and increase customer satisfaction, they turned to IBM. By deploying the IBM Systems AI Infrastructure Reference Architecture, they now provide distributed deep learning as a service designed to enable easy-to-deploy, unique environments for each data scientist across shared resources.

Get started quickly


PowerAI Enterprise shortens the time it takes to get up and running with an AI environment that supports the data scientist from data ingest and preparation, through training and optimization, and finally to testing and inference. Included are fully compiled, ready-to-use, IBM-optimized versions of popular open source deep learning frameworks (including TensorFlow and IBM Caffe), as well as a software framework designed to support distributed deep learning and scale to hundreds or thousands of nodes. The whole solution comes with support from IBM, including support for the open source frameworks.

The IBM Systems AI Infrastructure Reference Architecture is built on IBM Power System servers and IBM Elastic Storage Server (ESS), with a software stack that includes IBM PowerAI Enterprise and IBM’s award-winning Spectrum Scale. IBM PowerAI Enterprise installs full versions of IBM PowerAI Base, IBM Spectrum Conductor and IBM Spectrum Conductor Deep Learning Impact.


IBM Spectrum Scale’s easy-to-use interface

IBM PowerAI Enterprise


IBM PowerAI Enterprise extends all of the capability we have been packing into our distribution of deep learning and machine learning frameworks, PowerAI, by adding tools which span the entire model development workflow. With these capabilities customers can develop better models more quickly, and as their requirements grow, efficiently scale and share data science infrastructure.

To shorten data preparation and transformation time, PowerAI Enterprise integrates a structured, template-based approach to building and transforming data sets. It also includes powerful model setup tools designed to eliminate the earliest “dead end” training runs. By instrumenting the training process, PowerAI Enterprise lets a data scientist see real-time feedback on the training cycle, eliminate potentially wasted time and speed time to accuracy.
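To make the idea of real-time training feedback concrete, here is a rough sketch using ordinary Keras code rather than the PowerAI Enterprise tooling itself: a small callback surfaces the loss while a toy model trains, so a dead-end run can be spotted and stopped early. Every name in it is illustrative only.

```python
# Illustrative only: ordinary Keras code, not the PowerAI Enterprise tooling.
# A small callback surfaces loss while a toy model trains, approximating the
# kind of real-time training feedback described above. All names are hypothetical.
import numpy as np
import tensorflow as tf

class LiveTrainingFeedback(tf.keras.callbacks.Callback):
    """Print the loss every few batches so a dead-end run can be spotted early."""
    def __init__(self, every_n_batches=10):
        super().__init__()
        self.every_n_batches = every_n_batches

    def on_train_batch_end(self, batch, logs=None):
        loss = (logs or {}).get("loss")
        if loss is not None and batch % self.every_n_batches == 0:
            print(f"batch {batch}: loss={loss:.4f}")

# Tiny synthetic classification problem, just to make the sketch runnable.
x = np.random.rand(1024, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, callbacks=[LiveTrainingFeedback()])
```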

Bringing these and other capabilities together accelerates development for data scientists, and the combination of automating the workflow and extending the capabilities of open source frameworks unlocks the hidden value in organizational data.

Saturday, 23 June 2018

Intellectual Property and the software-defined supply chain

Intellectual Property issues are often flagged as an area of concern for the development of the software-defined supply chain, yet the nature of these perceived issues is rarely identified clearly.


CAD/CAM, CNC and fast prototyping techniques, FPGAs and ASICs are well-established digital manufacturing techniques; however, they tend to require considerable technical expertise and substantial customization and set-up effort, and they vary a great deal from implementation to implementation. This acts as a barrier to infringement, since even if one were to acquire the digital design files for a particular product, the skills, equipment and effort needed to use them would be prohibitive. The software-defined supply chain’s emphasis on small-scale local production on the one hand, and its dependence on low-cost open source solutions on the other, will lead to a high degree of standardization in manufacturing platforms, so that it becomes feasible to reuse copied design files. Furthermore, the dispersed nature of manufacturing means that such files will be more widely disseminated and more likely to be exposed.

The core concern is therefore that it will become possible to download copied plans for a product and have the product manufactured without the original creator having any control or receiving any remuneration.

Protecting your investment in distributable designs


Digital product designs will generally be protected by copyright, which benefits from good harmonization worldwide, and may furthermore be the subject of patents, design rights, registered designs and trademarks, subject to local requirements. On this basis, the consumer and manufacturer in the scenario described above would almost certainly infringe the original creator’s IP rights and expose themselves to civil action. To this extent, therefore, the IP system provides satisfactory basic tools for protecting the different types of value that may be embodied in a digital design. The lowering of the manufacturing hurdle makes it all the more important to ensure that relevant IP protection is identified and secured in good time.

The real issues here relate not to the primary law that is applicable, but rather to how it can be enforced. The software-defined supply chain implies a multiplication of possible infringers, most of whom will be small businesses or even private individuals, with minimal liquidity and low liability for damages. The protection available under IP rights corresponds best to large-scale infringements: bringing suit for IP infringement is a long and expensive business, and in a software-defined supply chain context the cost may be out of all proportion to the recoverable damages.

The approaches developed in the electronic media field may provide a useful starting point when looking for solutions to these perceived problems.

For certain digital products it is usual to make distribution subject to contractual provisions which define how the consumer may use the product, address warranty issues and cover redress and termination matters. Shrink-wrap and click-through licensing approaches have been developed and widely adopted. Open source-style licensing, where contract acceptance is implied through use of the code, is a step further along this path which may be helpful in the context of distributable design files.

Digital Rights Management (DRM) is another set of techniques developed in the digital media field. Such techniques may well be helpful in the case of distributable design files. DRM mechanisms might be tied to hardware dongles, manufacturing machinery or activation keys to ensure that only authorized manufacturers implement the designs. This suggests the development of distributable design file formats supporting encryption and other DRM-enabling technologies. In some cases this may also call for contributions to the design of manufacturing machinery such as 3D printers, to ensure that they are able to function correctly and securely with DRM-protected distributable design files. This may well be relatively straightforward technically, given the open source approach of many such printers, but it may need careful management from a social point of view given the general hostility of the open source movement to DRM.

While the primary path to enforcing IP rights is through litigation in the courts, the distributed nature of the software-defined supply chain makes this approach overly cumbersome, since the recoverable damages will often be less than the cost of the process itself. A number of administrative and quasi-judicial measures currently available in some jurisdictions may be taken as possible models for the protection of distributable design files.

In some jurisdictions, some types of IP infringement may be subject to sanctions under criminal law. This may enable the rights holder to enlist the help of public law enforcement bodies to bring infringers to justice. Such activities may not always be within the remit of police organizations, but trading standards and customs bodies are often more familiar with actions of this kind. Generally, such bodies are most comfortable with trademark enforcement, and organizations pursuing a software-defined supply chain model may do well to pay special attention to this aspect of their IP strategy.

Some jurisdictions have developed special measures relating to IP infringements involving the internet, in particular the downloading by consumers of media files from peer-to-peer networks and the like. The details of the process and the available sanctions vary widely from jurisdiction to jurisdiction, and may involve a graduated series of warning messages, internet access restrictions and eventually a streamlined judicial process. These measures are designed around the same need for lightweight, low-cost processes, and would be equally applicable in the case of software-defined supply chain infringements.

Other IP issues inherent in the software-defined supply chain


The Software-defined supply chain model suggests a variety of uses for open source material:

◉ Open source-developed 3D printing equipment
◉ Open source firmware on products
◉ Open source distribution of design files for 3D printing

The adoption of open source materials in mission-critical product manufacturing potentially exposes an organization to certain special risks, in particular:

◉ The viral effects of certain open source licenses
◉ Difficulties in establishing provenance and/or licensing terms for code
◉ Difficulties in interpreting and complying with licensing terms

This suggests that businesses wishing to adopt the software-defined supply chain model will need a high degree of sophistication and strong processes for identifying and resolving such issues.

Handling third party design contributions


A key benefit of the software-defined supply chain model is that it becomes viable to offer many variants of a product, to better correspond to local preferences or even the desires of individual consumers. While merely offering combinations of predefined choices does not raise any special IP issues, the approach’s flexibility lends itself to third parties offering custom modifications, and even to communities of enthusiasts modifying the original designer’s works, or indeed the works of other enthusiasts. In this context the issues of control and ownership arise. Generally, any such modifications will constitute derivative works of the original design, and as such would infringe the copyright in those designs unless permitted by an applicable license agreement.

While such scenarios can be inhibited using the mechanisms described above, it may prove advantageous to foster this type of community contribution to some extent, and the license provides the means to achieve this. Indeed, this may be seen as a further advantage of the software-defined supply chain: IP rights in physical products may be exhausted by sale of the product, whereas in a license-based, software-defined product the original designer decides what rights to give. Accordingly, one approach may be to separate the design files into two or more licensing categories, with certain parts of the file frozen and other parts left open to modification. The frozen parts might be encrypted or otherwise protected by DRM-type mechanisms. The parts left open to modification, which may correspond to the external appearance of the article, may be licensed under an open source-style license, or a license permitting modification and distribution but stipulating that all modifications are ceded to the original designer. The issue of how modified designs may be used commercially will also need to be addressed.

Friday, 22 June 2018

IBM – Microservices Specialization on Coursera – a learning journey


IBM – Microservices Specialization is intended for application developers seeking to understand the benefits of microservices architecture and container-based applications. The student learns how to develop and deploy microservices applications with Kubernetes on IBM Cloud and IBM Cloud Private via a continuous release pipeline.

There are four self-study courses in this specialization; each course offers exercises followed by a badge quiz. When you complete all the courses and earn the badge for each one, you will also earn the IBM Microservices Specialization badge, which will become available soon. You can also take each course by itself if you only need skills in one area, by going directly to the course page.



In enterprise environments, the architectural style of microservices is gaining momentum. In this course, you will learn why microservices are well suited to modern cloud environments, which require short development and delivery cycles. You will learn the characteristics of microservices and compare the microservice architecture with the monolithic style, emphasizing why microservices are well suited to continuous delivery.

While microservices are more modular to develop and may look simpler, you will discover that the complexity does not go away; it shifts. An inevitable organizational complexity comes along with many small interacting pieces, and managing, monitoring, logging and updating microservices creates greater operational complexity. In this course you learn about the tools necessary to successfully deploy, manage and monitor microservice-based applications.

After taking this course, you will have a much better understanding of why microservices are so well suited to cloud environments, the DevOps environments in which microservices run and the tools to manage the complexity that microservices bring to the operational and production environment.


This course provides an introduction to Microclimate, an end-to-end development environment that lets you rapidly create, edit, and deploy applications that run in containers. Microclimate can be installed locally, or on IBM Cloud Private, where you can create a pipeline for continuous integration and delivery.

In this course, you learn how to quickly set up a development environment for working with Microclimate, and import a sample application. Using the Integrated Jenkins pipeline and Github, you also learn how to deploy a microservice application to IBM Cloud Private.


In this course, you learn how to install the Kubernetes command-line interface (CLI), and create a Kubernetes cluster on which to run applications. Hands-on tutorials show you how to deploy microservices to a Kubernetes cluster. You also learn about securing and managing a Kubernetes cluster, and how to plan your Kubernetes cluster for deployment on IBM Cloud.

The ideal candidate for this course has a basic understanding of cloud computing, a working knowledge of developing microservices, and some experience working with IBM Cloud. Experience using Docker and familiarity with YAML are also a plus.
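As a rough idea of what deploying a containerized microservice to a Kubernetes cluster can look like in practice (this sketch is not course material; the image name, namespace and labels are placeholders), the official Kubernetes Python client can create a Deployment programmatically rather than through kubectl:

```python
# Rough sketch (not course material): creating a Deployment for a containerized
# microservice with the official Kubernetes Python client. The image name,
# namespace and labels below are placeholders, not anything from the course.
from kubernetes import client, config

def deploy_hello_service():
    config.load_kube_config()  # reads ~/.kube/config, the same context kubectl uses
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="hello-service",
        image="example.registry.local/hello-service:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "hello-service"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello-service"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "hello-service"}),
            template=pod_template,
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_hello_service()
```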


IBM Cloud Private is an application platform for developing and managing on-premises, containerized applications. It includes the container orchestrator Kubernetes, a private image repository, a management console, and monitoring frameworks. In this course, you learn how to install and configure IBM Cloud Private components in your environment, and how to prepare microservices applications for deployment.

Wednesday, 20 June 2018

Is pulling your organization’s IP like pulling teeth?

Many organizations suffer from a lack of time and resources to get anything other than the supercritical accomplished, and often through superhero efforts at that. With the constant pressure to do more with less, who has time to write down a quick summary about an improvement to a customer design that shaved a week off the schedule, or a tweak to a model that generated 10% more accurate simulation results?


What’s more, who even recognizes the achievement as valuable intellectual property? With so much focus on the end result, the critical know-how responsible for getting that result goes undocumented, unprotected and unvalued, putting freedom to operate and the ability to keep IP out of the hands of competitors at serious risk.

So how can one conquer the constant IP pull battle and create a self-sustaining, IP push culture of innovation given the intense pressures? Let’s first take a look at various innovation-pulling initiatives and point out what you have probably already seen go wrong when used in isolation.

1. Incentives:


Incentives often come up as a method of getting blood out of stones. Although in many cases incentives (either intrinsic or extrinsic) may help for a short time, things quickly return to the status quo. The same responsible employees bound by their duty to disclose IP (and who know how to recognize it and what to do with it at that point) continue to do so, while the rest go back to fighting the fires that encompass their day jobs. Incentive programs certainly can be good for both morale and promoting good IP disclosure practices, but for them to become a driving force that changes the culture will require a sustained and focused effort that may be difficult for the very reasons we discussed.

2. Quotas:


How about forcing already overloaded workers to promise to submit, or have their employees submit, a certain number of disclosures as potentially valuable IP? This certainly yields the bare minimum number to fill a quota, at the last minute and of relatively low quality. The laws of statistics argue that there may still be valuable IP amongst the chaff, but how much of it was lost earlier in the year, when the pressure wasn’t focused on getting disclosures submitted?

3. Invention Miners:


Perhaps a special-ops task force that constantly mines for IP throughout the organization, like a robotic vacuum cleaner sucking ideas out of people’s heads? Probably more effective than the first two, but now you’re going to need additional requisitions, which are precious, rare and likely slated for higher priorities (according to a likely non-IP-centric executive team). Hiring a lower-cost consulting firm is an option; however, the business expertise lies within your own organization, and directing the ongoing effort would likely still require some internal resources, at least in the short term.

4. Innovation Lab:


What about a dedicated “innovation lab” where employees can rotate through on a part-time or full-time basis? We’ve seen significant success for organizations that set up a dedicated workspace that promotes creativity and collaboration. The drawback is, of course, that you are only receiving ideas from a small subset of the organization, and although the ideas may be important, forward-thinking solutions, there may be much more critical-to-the-business-today IP that is not being captured.

5. Innovation Day:


Another tack may be to host an “innovation day” at the office. A message from the executive team about the importance of IP, some encouraging insights from Inventors (guests or otherwise), a word about the submission process and some food might go a long way to inspire creativity, spread the word, help employees connect with others and fill a few of the invention coffers. Although the effects may not be long lasting it might be sufficient to host something similar once or twice a year to keep getting the message across and a few inventions in the door. The trick will be to make it sincere and respected… not hokey.

6. Enlist OC:


Outside counsel (OC) can be a valuable partner resource to help with invention mining as well. If you have good OC, they should be able to assist with invention mining efforts directly with your teams. Being part of the process gives them a clear understanding of the inventions, which typically leads to quality applications that can be filed quickly after the session. They may be expensive; however, many enjoy invention mining and may provide a discount for such services, especially if you want to try a few “pilots”. Invite them to the innovation fair while you’re at it and have them meet with some of the employees directly.

7. IP Champions:


Volunteer armies of patent or IP champions have been deployed in some organizations to instill innovation awareness, provide training and do some invention mining amongst their teams. While this can be an incredibly effective grass-roots effort, it takes very special individuals with a passion for IP to make it work. Most volunteers will quickly go back to their day jobs after a minimal amount of effort (hey, isn’t that what we all do?).

The truth is that the problem doesn’t lie with the innovation programs themselves, and it’s certainly not with the employees, but rather with getting the right mix of programs at the right time to the right people. Easier said than done, but certainly doable. Just look at IBM’s proven patent leadership and resounding culture of innovation, which stem from just the right concoction of invention-capture initiatives, carefully managed and always flexible.

Although we have yet to find the silver-bullet cure for making the process of pulling IP from the organization completely pain free, using a mix of these invention cultivation tools at various points in the R&D cycle and calendar year, coupled with careful monitoring of changes to the business and its IP needs, should help to build an internally sustainable culture of innovation that creates much more push and requires much less pull. The mix of push and pull initiatives helps ensure you’re covering all your bases. At any given moment employees are working in various stages of a project, with different teams and often on very different tasks, so what works for some projects and employees might not work as well for others. At the very least, your organization should be better equipped to capture critical IP and mitigate the damaging risks of IP loss or lost freedom to operate, without breaking the bank.

Sunday, 17 June 2018

Self-sovereign identity: Why blockchain?

One of the most common questions I get when talking to customers and analysts about the self-sovereign identity (SSI) movement is, “Why blockchain?”

This question tends to stem from the notion that data associated with a person’s identity is destined to be stored, shared and used for verification on some form of distributed ledger technology. My hope is that this article will help to debunk that notion and provide a basic foundational understanding of how distributed ledger technology is being used to solve our identity infrastructure dilemma and resolve the impacts of the internet lacking an identity layer.

Busting the myth of on-chain PII


One of the most common myths surrounding blockchain and identity is that blockchain technology provides an ideal distributed alternative to a centralized database for storing personally identifiable information (PII). There are several flavors of this perception: (a) use blockchain to store the data; (b) use a blockchain as a distributed hash table (DHT) for PII data stored off-chain.

Yes, blockchain can technically support the placement of PII on the chain, or be used to create attestations on the chain that point to off-chain PII storage. But just because technology can be applied to solve a specific problem does not mean that it is the proper tool for the job. This misconception about PII storage in the early stages of the blockchain technology adoption lifecycle is so pervasive that it recently inspired a Twitter thread dedicated to the debate on why putting hashed PII on any immutable ledger is a bad idea. From GDPR compliance, to correlation, to the cost of block read/write transactions, the debate continues.

Blockchain technology is much more than a distributed storage system. My intent herein is to help the inquisitive identity solution researcher debunk beliefs about PII storage approaches by gaining an understanding of how blockchain can be used as an infrastructure for identity attestations. My hope is that this article will offer a helpful aid to that education and awareness.

The SSI initiative is a perfect counterpunch to detrimental PII management practices. An SSI solution uses a distributed ledger to establish immutable recordings of lifecycle events for globally unique decentralized identifiers (DIDs). Consider the global domain name system (DNS) as an exemplar of a widely accepted public mapping utility. This hierarchical decentralized naming system maps domain names to the numerical IP addresses needed for locating and identifying computers, services or other connected devices with the underlying network protocols. Analogous to the DNS, an SSI solution based on DIDs is compliant with the same underpinning internet standard, universally unique identifiers (UUIDs), and provides the mapping of a unique identifier, such as a DID, to an entity — a person, organization or connected device. However, the verifiable credentials that are associated with an individual’s DID and PII are never placed on a public ledger. A verifiable credential is cryptographically shared between peers at the edges of the network. The recipient of a verifiable credential, known as a verifier, in a peer-to-peer connection would use the associated DID as a resource locator for the sender’s public verification key, so that the data in the verifiable credential can be decoded and validated.
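For readers who want a mental model of what a verifier actually resolves from the ledger, the sketch below shows the rough shape of a DID document, loosely following the W3C DID specification; the DID method (“example”), key value and service endpoint are invented for illustration.

```python
# Illustrative only: the rough shape of a DID document a verifier might resolve
# from a ledger to find the holder's public verification key. The DID method
# ("example"), key value and endpoint below are made up for this sketch.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#keys-1",
        "type": "Ed25519VerificationKey2018",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",
    }],
    "service": [{
        "id": "did:example:123456789abcdefghi#agent",
        "type": "DIDCommMessaging",
        "serviceEndpoint": "https://agent.example.com/endpoint",
    }],
}
```

Note that nothing in this structure is PII; it only exposes the identifier, a public key and an endpoint for establishing the peer-to-peer connection.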

No PII on ledger, then why blockchain?


So, what problem is blockchain solving for identity if PII is not being stored on the ledger? The short answer is that blockchain provides a transparent, immutable, reliable and auditable way to address the seamless and secure exchange of cryptographic keys. To better understand this position, let us explore some foundational concepts.

Encryption schemes


Initial cryptography solutions used a symmetrical encryption scheme, which uses a secret key that can be a number, a word or a string of random letters. Symmetrical encryption blends the secret key and the plain text of a message in an algorithm-specific manner to hide the message. If the sender and the recipient of the message have shared the secret key, then they can encrypt and decrypt messages. A drawback to this approach is the requirement to exchange the secret encryption key with all recipients involved before they can decrypt the message.
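A minimal sketch of symmetric encryption, using the Fernet recipe from the Python cryptography package, makes that shared-key requirement concrete (the message and key handling here are purely illustrative):

```python
# A minimal symmetric-encryption sketch using the `cryptography` package's Fernet
# recipe: one shared secret key both encrypts and decrypts, so every recipient
# must somehow receive that key first (the key-exchange problem described above).
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()      # must be distributed to every recipient
cipher = Fernet(shared_key)

token = cipher.encrypt(b"wire transfer approved")
print(cipher.decrypt(token))            # b'wire transfer approved'
```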

Asymmetrical encryption, or public key cryptography, is a scheme based on two keys. It addresses the shortcomings of symmetrical encryption by using one key to encrypt and another to decrypt a message. Since malicious persons know that anyone with a secret key can decrypt a message encrypted with the same key, they are motivated to obtain access to the secret key. To deter malicious attempts and improve security, asymmetrical encryption allows a public key to be made freely available to anyone who might want to send you a message. The second private key is managed in a manner so that only the owner has access. A message that is encrypted using a public key can only be decrypted using a private key, while a message encrypted using a private key can be decrypted using a public key.
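The equivalent asymmetric sketch, again with the Python cryptography package, shows the public/private split: the public key can be handed to anyone, while only the private-key holder can decrypt (the inverse direction described above corresponds to signing and verification rather than literal encryption with the private key):

```python
# A minimal asymmetric-encryption sketch with the `cryptography` package: the
# public key can be published freely, while only the private-key holder can
# decrypt. (The inverse direction described above is handled by sign/verify.)
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # safe to share with anyone

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"hello Bob", oaep)
print(private_key.decrypt(ciphertext, oaep))  # b'hello Bob'
```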

Unfortunately, asymmetric encryption introduces the problem of discovering a trusted and authentic public key. Today the most pervasive technique for public key discovery in communications based on a client-server model is the use of digital certificates. A digital certificate is a document that binds a public key to metadata about a person, organization or trusted server. The metadata contained in this digital document includes details such as an organization’s name, the organization that issued the certificate, the user’s email address and country, and the user’s public key. When using digital certificates, the parties required to communicate in a secure, encrypted manner must discover each other’s public keys by extracting them from the certificates obtained from the trusted server.
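To make that extraction step concrete, here is a small sketch that reads a PEM-encoded X.509 certificate with the Python cryptography package; the file path is a placeholder for whatever certificate a trusted server provides:

```python
# Small sketch: extracting the owner's identity metadata and public key from an
# X.509 digital certificate with the `cryptography` package. The file path is a
# placeholder for whatever certificate the trusted server handed you.
from cryptography import x509

with open("server_certificate.pem", "rb") as f:     # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject)           # who the certificate was issued to
print(cert.issuer)            # the certificate authority that signed it
print(cert.not_valid_after)   # validity window
public_key = cert.public_key()  # use this key to encrypt messages to the owner
```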

Trust chains


A trusted server, or certificate authority, uses digital certificates to provide a mechanism whereby trust can be established through a chain of known or associated endorsements. For example, Alice can be confident that the public key in Carol’s digital certificate belongs to Carol because Alice can walk the chain of certificate endorsements from trusted relationships back to a common root of trust.


Our current identity authentication scheme on the internet is based on asymmetric encryption and the use of a centralized trust model. Public key infrastructure (PKI) implements this centralized trust model by inserting reliance on a hierarchy of certificate authorities. These certificate authorities establish the authenticity of the binding between a public key and its owner via the issuance of digital certificates.

Understanding the key exchange dilemma


As the identity industry migrates beyond authentication based on the current client-server model towards a peer-to-peer relationship model based on private, encrypted connections, it is important to understand the differences between symmetric and asymmetric encryption schemes:

◈ Symmetric encryption uses a single key that needs to be shared among the people who need to receive the message.
◈ Asymmetrical encryption uses a public/private key pair to encrypt and decrypt messages.
◈ Asymmetric encryption tends to take more setup and processing time than symmetric encryption.
◈ Asymmetric encryption eliminates the need to share a symmetric key by using a pair of public-private keys.
◈ Key discovery and sharing in symmetric key encryption can be addressed using inconvenient and expensive methods:

◈ Face-to-face key exchange
◈ Reliance on a trusted third party that has a relationship with all message stakeholders

◈ Asymmetric encryption eliminates the problem of private key exchange, but introduces the issue of trusting the authenticity of a publicly available key. Nevertheless, similar methods can be used for the discovery and sharing of trusted public keys:

◈ Face-to-face key exchange
◈ Reliance on a trusted third party that has a relationship with all message stakeholders
◈ Certificates that provide digitally signed assertions that a specific key belongs to an entity

Rebooting the web of trust


What if we wanted to avoid this centralized reliance on a trust chain of certificate authorities? What if we could leverage distributed ledger technology as a transparent and immutable source for verifying and auditing the authenticity of the binding between a public key and its owner?

An alternative to the PKI-based centralized trust model, which relies exclusively on a hierarchy of certificate authorities, is a decentralized trust model. A web of trust, which relies on an individual’s social network to be the source of trust, offers one approach to this decentralized alternative. However, the emergence of distributed ledger technology has provided new life to the web of trust vision. Solutions using SSI can leverage distributed ledger as the basis for a new web of trust model that provides immutable recordings of the lifecycle events associated with the binding between a public key and its owner.

Decentralized PKI in a nutshell


As explained earlier, in a PKI-based system Alice and Bob need to establish a way to exchange and store their public keys. Conversely, in a blockchain-based web of trust model, public keys are managed on the public ledger. As participants in a global identity network, Alice and Bob create their unique DIDs, attach their public keys and write them to the public ledger. Now any person or organization that can discover these DIDs is able to acquire the associated public keys for verification purposes.
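The toy sketch below walks through that flow with plain Python; it is not any real DID method, ledger or agent implementation, and an in-memory dict stands in for the public ledger. Alice registers her DID and verification key, signs a credential-like payload, and Bob resolves her DID to verify the signature.

```python
# Toy sketch only, not any real DID method, ledger or agent implementation:
# a dict stands in for the public ledger. Alice writes her DID and verification
# key to the "ledger"; Bob resolves the DID to fetch the key and verify a signature.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

ledger = {}  # stand-in for a distributed ledger of DID -> public-key records

# Alice: create a key pair and register her DID on the "ledger".
alice_signing_key = Ed25519PrivateKey.generate()
alice_did = "did:example:alice"
ledger[alice_did] = alice_signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Alice signs a credential-like payload and sends it, with her DID, to Bob.
payload = b'{"degree": "BSc", "issued_to": "did:example:alice"}'
signature = alice_signing_key.sign(payload)

# Bob: resolve Alice's DID on the ledger and verify the signature.
verifier_key = Ed25519PublicKey.from_public_bytes(ledger[alice_did])
verifier_key.verify(signature, payload)  # raises InvalidSignature if tampered with
print("credential signature verified against the key resolved from the ledger")
```

The point of the sketch is simply that only the identifier and public key ever touch the ledger; the credential itself travels peer to peer.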


Thursday, 14 June 2018

Environments Where Blockchain Can Thrive


A Blockchain solution can flourish in business scenarios that have a high number of participants who all want to track a particular product or item. And the more complex the tracking process, the more a Blockchain application can thrive.

For example, if a product traverses a series of steps that starts with its creation and ends with its delivery into the hands of a consumer, then incorporating a Blockchain solution into this process can potentially offer many benefits. It can enhance overall information security, and it can provide substantial time and cost savings for all participants in the product’s life cycle.

Blockchain Benefits


To illustrate this further, when a purchase order is received by a manufacturer for a particular item, the product’s life cycle begins. Starting with the purchase order, the product’s manufacturer builds the product and then hands it over to a shipper. This shipper sends the item to a warehouser, who then ships it to a wholesaler. This wholesaler then utilizes another shipper to have it sent to a retailer. The retailer then stocks the item until a consumer purchases it. Having a way for all participants in these steps to view where the product originated, i.e. its provenance, and to trace all of its handling can add many benefits, including:

◈ Transparency within supply chains
◈ Immutable information that can be available to all participants
◈ More efficiency in maintaining records
◈ Organized data for auditors and regulators
◈ Reduced or eliminated administrative record keeping errors
◈ Reduced or eliminated processing paperwork

Blockchain for an International Air Services Provider


Recently, an international air services provider, dnata, successfully tested the use of Blockchain technology in its cargo operations. This achievement is a real-life example of the scenario described above.

With the help of IBM and other partners, dnata developed a logistics platform with a Blockchain infrastructure. This platform was put into effect to view supply chain transactions, starting with the purchase order of an item and ending at its delivery to a warehouse. This business use-case exemplifies where a Blockchain can thrive: an environment that has a large number of participants wanting to track products through the supply chain.

Blockchain Solution for Asset Management


Another environment where Blockchain can thrive is when a company transfers assets within a business network. When a company internally transfers a physical asset, such as a laptop or, in the case of a trucking company, a semi-trailer, from one location within its business network to another, there can be many people involved and much related paperwork to keep track of its journey. In this case, a Blockchain can establish a clear trail for the asset that has been transferred within this business network. Acting as a shared ledger, the Blockchain can allow internal company parties to view where the company’s assets have been moved, who handled them, their current state, their past state, how many times they have been transferred and even how many times they have been used – all from the same source, i.e. the shared ledger. And it can be viewed at any time, by anyone who has permission to do so.
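As a purely illustrative sketch of that shared view (plain Python, not actual blockchain or chaincode code; the asset IDs, locations and handlers are made up), an append-only list of custody-transfer records captures the kind of history every permitted party would see from the same source:

```python
# Illustrative only: plain Python, not actual blockchain or chaincode code.
# An append-only list of custody-transfer records stands in for the shared
# ledger every permitted party reads. Asset IDs and locations are made up.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Transfer:
    asset_id: str
    from_location: str
    to_location: str
    handled_by: str
    state: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

ledger = []  # append-only: history is never rewritten, only added to

def transfer_asset(asset_id, from_location, to_location, handled_by, state="in transit"):
    record = Transfer(asset_id, from_location, to_location, handled_by, state)
    ledger.append(record)
    return record

def history(asset_id):
    """Full custody trail for one asset, e.g. a laptop or a semi-trailer."""
    return [r for r in ledger if r.asset_id == asset_id]

transfer_asset("TRAILER-042", "Denver depot", "Salt Lake depot", "driver #17")
transfer_asset("TRAILER-042", "Salt Lake depot", "Boise depot", "driver #23", state="delivered")
print(len(history("TRAILER-042")), "transfers recorded for TRAILER-042")
```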

Also within the asset management process, there can be many issues including having transfer information split among many different record-keeping systems, conflicting information on transactional updates, and long wait times to resolve discrepancies. These can add to costs and subtract from efficiency. A properly executed Blockchain can be the sole source of transfer information, and reduce the number of asset discrepancies as well as the time it takes to resolve them.

Wednesday, 13 June 2018

The Race to a Truly Smart Home

The concept of a smart home has been around for a long time and yet even in 2017, it remains relatively ignored by all but the early adopters.


The number of vendors offering new smart home platforms continues to expand, whether it is new startups, consumer goods giants or the Silicon Valley elite. But as the choice widens, it becomes more difficult to choose and integrate the best equipment.

In any case, smart homes are not something the typical homeowner would give much, if any, thought to. But to paraphrase a rather hackneyed quotation attributed to Henry Ford: if he had asked his customers what they wanted, they would have asked for faster horses rather than a motor car.

The growing popularity of smart speaker devices perhaps gives us the closest vision yet of the future home. However, by themselves they are little more than mobile devices connected to a loudspeaker, and any interaction with lights, thermostats, security devices and so on requires purchasing and integrating a significant amount of hardware.

The final experience is inevitably going to be rather stilted, not to mention the effort required to install and configure it in the first place. Faster horses perhaps, but horses nonetheless.

What we really need are homes that are built smart, homes where you walk in for the first time and everything just works intuitively.

Wienerberger took a step forward in this direction with the e4 house: an Arup-designed housing concept with a preconfigured supply chain including sensor technology and Building Information Modeling (BIM) in addition to the more traditional building products.

e4 stands for four key principles:

◈ Economy – a house that is affordable yet built to last
◈ Energy – so efficient that energy bills will be just a couple of hundred pounds per year
◈ Environment – a house that minimizes its environmental impact and is responsibly sourced
◈ Emotion – a house that people will want to live in

IBM is helping Wienerberger integrate the digital components of the house and provide the smart technology to complement the e4 principles.

It is just the beginning of the journey; however, the smart e4 house can now benefit from a building health monitoring system that will help identify when equipment like the boiler is about to fail, enabling the homeowner to take action when it is most convenient.

Whilst the building is inherently energy efficient due to the materials used, IBM’s artificial intelligence system, Watson, will perform further optimization by analyzing usage patterns and reducing unnecessary energy usage.

Sensors will detect water leaks and ensure that the system can be repaired before too much damage occurs.

Access to the BIM model will enable the homeowner to obtain useful information such as when the warranty on the boiler expires, the location of live wires behind the walls that may be hit with a misplaced drill, or the type of roof tile used during construction so that an exact replacement can be found for one lost during high winds.


It will also enable the development of applications that calculate the amount of paint/tiles/carpet required to decorate a room by accessing the dimensions from the BIM model and perhaps even allow retailers or local tradesmen to make an offer to get the job done.

Perhaps most importantly, a home concierge powered by Watson will enable the homeowner to interact with the house through a natural language interface, voice activated and via a chat interface on a mobile device, whether that is switching on lights, boosting the heating or answering questions like “how can I reduce my electricity usage?” or “when is the recycling next collected?”

A truly smart home needs to solve the real problems that homeowners face in order to be adopted. It needs to become as vital to today’s consumer as electricity or the internet.

House builders now have a unique opportunity to make the running in smart homes. Smart personal devices are starting to plateau in capability but smart vehicles are accelerating rapidly. Even smart workplaces are gathering momentum. Yet the race to a truly smart home has yet to really get started.

The technology is here today, but the winner will need to integrate the technology in a way that transforms the homeowner experience for the better.

Tuesday, 12 June 2018

Three success factors in security operations


There are five trends that all industries are facing, but I’d argue no industry is feeling them more acutely than financial services:
  1. Shareholders increasingly demand higher margins, as customers increasingly expect more personal and convenient experiences.
  2. Digitization of society is accelerating, with particular stress on financial services, where new capabilities and new competitors will force banks out of their branches.
  3. The world’s data is doubling every 12 months. And financial transactions are among a fast-growing subsection of data types.
  4. Digital trust is paramount for the modern business. The expansion of channels expands the threat of money laundering, fraud and hacks, as well as the regulatory requirements for necessary protections.
  5. Artificial Intelligence is now being used by cyber criminals, meaning the sophistication of their methods is increasing, forcing banks to up their game.
The successful bank that emerges from those trends has a business model run on digital intelligence. It’s a model where we gather data, convert it into knowledge, create real-time insights from that knowledge and turn those insights into better decisions, actions and, ultimately, outcomes. The model delivers better customer experiences, creates operational efficiencies, and can lead to new revenue sources. This digitally reinvented financial institution runs on data, which is valuable and in demand, but constantly under threat.

Financial institutions are leading the charge in building security immune systems, knowing they are most threatened. They are looking for end-to-end security operations that are flexible and scalable, data-driven and applied with automated, operational accuracy. Above all, it should build trust and deliver on the promise of security and privacy—without getting in the way of customer experience.

That type of trust-building security system requires a Six Sigma-like operational rigor, but as breach after breach teaches, we must employ new tools to ensure we eliminate any variance in cyber security. In my mind, success follows from three operational goals:

1. Efficiency


Attacks will come quickly and constantly. A security system must be able to flag and defend against threats, without causing bottlenecks or burdening budgets and staff.

Automation is the only way to make operations efficient. It can flag problems and route issues to security analysts. Robotic process automation acts on set rules to sift through millions of records to catch problematic transactions. But as attacks grow increasingly sophisticated, cognitive process automation, powered by artificial intelligence (AI), is the only way to get to true efficiency. As the system flags issues to security analysts, it then assists them in making correct, comprehensive decisions. With the deluge of data analysts deal with, cognitive automation can adapt to new variables and react to unique situations in order to reduce false flags and detect new types of attacks. Traditional process automation often stops short, using only narrow AI capabilities that focus on structured security data alone, which brings us to efficacy in security operations.
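To make the contrast concrete, the toy sketch below shows the kind of fixed, rule-based sifting that robotic process automation applies; the thresholds, field names and watchlist are invented, and the point where a flagged transaction is routed to an analyst is exactly where cognitive automation would then pick up.

```python
# Toy illustration of rule-based sifting: fixed rules flag transactions for a
# security analyst. The thresholds, field names and watchlist are invented;
# this is the point where cognitive automation would assist the analyst.
SANCTIONED_COUNTRIES = {"XX", "YY"}   # placeholder watchlist
LARGE_AMOUNT = 10_000

def flag_transaction(txn):
    reasons = []
    if txn.get("amount", 0) >= LARGE_AMOUNT:
        reasons.append("large amount")
    if txn.get("country") in SANCTIONED_COUNTRIES:
        reasons.append("watchlisted country")
    if txn.get("hour", 12) < 5:
        reasons.append("unusual hour")
    return reasons  # an empty list means the rules found nothing suspicious

transactions = [
    {"id": "t1", "amount": 12_500, "country": "US", "hour": 14},
    {"id": "t2", "amount": 300, "country": "XX", "hour": 2},
]
for txn in transactions:
    reasons = flag_transaction(txn)
    if reasons:
        print(f"route {txn['id']} to an analyst: {', '.join(reasons)}")
```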

2. Efficacy


Visibility across your entire organization is essential to detect any threats and to take required protective actions. Cyber security information is high-frequency, high-volume data which is accelerating as we digitally transform all aspects of society. Much of that data exists in a vast ocean of unstructured data that has no value unless we can process and derive insight from it.

Enter artificial intelligence (AI). Current AI-powered security systems are not enough: we must evolve to a new framework that can make sense of unstructured data, identify security threats within it, and take action to protect the business.

The only security systems that are truly effective are those that combine the narrow AI of typical security systems with broad AI capable of interpreting unstructured data. This makes it possible for security systems to scour the internet, ingest and analyze unstructured cyber security information, act on a single threatening data instance the very first time it is encountered (the proverbial needle in the haystack), and remember it always. This is vitally important as we look to protect the business from the most sophisticated and never-before-seen threats.

3. Repeatability


For material improvement to security operations, the system must combine automation with both broad and narrow AI. A successful system is one that can flag the most critical threats, route them appropriately, and then learn from analyst decisions, applying that logic to improve threat identification and rules enforcement over time.

In the context of those five market trends, where your bank runs on a digital, efficient, flexible and customer-centric model, efficiency, efficacy and repeatability can only be achieved using automation powered by cognitive technologies. It not only reduces the cost of operation but also reduces the burden, friction and stress on customers.

IBM’s suite of cyber security tools brings together efficiency, efficacy and repeatability through AI and automation to enhance security operations, while building digital trust through a consistent customer experience. See what our suite of security tools can do for you.

Monday, 11 June 2018

Optimizing Content Platform and Document Capture Systems with Distillr

Organizational ECM and BPM departments are increasingly tasked with managing high volumes of content and supporting mission-critical business processes. Given the importance of these systems in operations and the interconnected nature of unstructured data in business processes with document capture systems, meeting service level agreements is critical. In order to avoid a troubleshooting fire drill, it’s necessary to identify problematic system areas before they affect end users.

To address the needs of many organizations trying to get a handle on system monitoring and optimization, we at Perficient have developed Distillr, a key enabler for content and workflow platform monitoring, where critical information spanning your complete environment is located in one place and easy to understand and analyze.


Based on open source Big Data technology and best-of-breed ECM and BPM industry expertise, Distillr delivers a streamlined set of metrics with a minimal consulting services engagement compared to other tools. Using Distillr you can rapidly identify and correct issues before they happen, direct your course for system optimization, and chart your progress.

INTRODUCING DISTILLR


Distillr provides the following capabilities:

◉ Analytics for ECM/BPM/Capture Systems: Lightning fast software provides split-second, real-time analytics around system usage and technical data
◉ Built on Big Data: Capture data from across the environment, gain insights everywhere
◉ No Programming Required: Point and click interface lets anyone create their own Visualizations and Dashboards
◉ Drill Down to What Matters: All data can be quickly explored and dissected down to individual data points

PERFICIENT CLIENT SUCCESS STORY


A Utah-based financial services company wanted to improve its ability to record and understand the performance and usage of its IBM ECM systems. The records department had difficulty monitoring usage statistics and investigating the causes of performance issues. Perficient installed and implemented two proprietary ECM optimization products: Report Data eXchange – a utility that processes data generated by the IBM ECM event log and builds detailed data tables – and Distillr, a data visualization tool that monitors statistics from user-configured data sources. Distillr displays this data via dashboards including graphs, charts, and other customizable reports.

With Distillr, this company can now capture trends in system performance and quickly find and address performance issues in near-real time.

According to the Imaging System Administrator, “Distillr is a great enhancement to our content management system. The solution allows us to quickly and comprehensively analyze massive data sets generated by the system, proactively identify potential issues, and quickly uncover information about a situation.”

ABOUT PERFICIENT


Perficient is a leading provider of digital transformation solutions and an award-winning IBM partner.  Our commitment to delivering world-class solutions and impressive content and process management expertise, coupled with a proven delivery methodology, ensures a comprehensive and innovative implementation. Our enterprise content intelligence services span the full lifecycle of content-enabled workflow.

Saturday, 9 June 2018

Architecting the future with IBM Z Technical University


We are currently at an inflection point for a massive industry shift fueled by data. Exponential growth of data is coming together with exponential growth in the capabilities to apply analytics and machine learning to data. The world is becoming digital and intelligent. Nearly all existing business processes can be augmented with artificial intelligence (AI) – the ability to learn and understand what’s hidden in the data and to automate the physical world.

It is a huge opportunity for those who already have the required data. In contrast to digital startups, which have the ideas and some venture capital, the established companies and incumbents have the data. An estimated 80 percent of the world’s critical data is private, generated and owned by corporations worldwide. This is the phase where these incumbents become disruptors. With the data, they are able to disrupt existing models, extending and replacing them with new, digital models.

Data comes in two broad forms: structured and unstructured. Structured data is considered to be of highest value. It is the data that resides in databases, denoting customer transactions or the status of company resources. Transaction performance and analysis of the resulting data are usually well established.

New sources of data (primarily unstructured data) need to be analyzed more deeply, especially if we deal with language, images, videos and sensor data from devices in the Internet of Things (IoT). The technology to help us understand unstructured data is evolving very fast. Applications can integrate and correlate structured transactional data with insights from previously unstructured data.

A new set of hybrid applications will evolve, augmenting the informational backbones of corporations. Cloud-native concepts leveraging microservices will be used for most of these digital innovations. They need to be combined with growing confidentiality and transactional consistency and integrity requirements for those digital backbones. The resulting application and infrastructure architectures can be smart, efficient and flexible, if there is a unique source of truth with unlimited scalability and unmatched security.

At IBM Z Technical University in London, starting on May 14, 2018, you can learn about this innovation journey in the Architecting the Future track. This two-day track will bring together the best technical leaders and speakers, with broad backgrounds and experience, for an exceptional agenda covering data, AI, cloud, security, blockchain, DevOps, microservices and more. You will understand why the backbone of the digital business world, IBM Z, is differentiating for an architecture of the future.

The Architecting the Future sessions are developed specifically for CTOs, chief architects and innovation managers from all kinds of organizations. They will be valuable for anyone who needs to drive data innovation and digital transformation. Please come to Architecting the Future – you will learn from and interact with IBM technical leaders, including many IBM distinguished engineers and inspiring speakers.


Friday, 8 June 2018

9 Ways IBM is Reinventing Recruiting


The talent industry is facing a major shift to reinvent itself. We are moving to a new era where companies need to work and operate with increased speed and agility, along with more efficiency, to predict solutions for problems before they occur. Upgrading the abilities of talent teams and enabling the use of new recruiting technologies require a host of new skills, capabilities, roles, and processes that are action-oriented and talent-centric. Reinventing the talent acquisition function is critical to sustaining a competitive advantage and driving maximum value for the ever-changing needs of both the business and a new era of consumer-grade candidate experiences.

What concerns me as a talent leader?

◈ High-performing employees are 800% more productive than average talent.
◈ 82% of Fortune 500 executives don’t believe their companies recruit highly talented people.
◈ In the U.S., there’s an expected talent shortage of 23 million employees by 2020.
◈ Based on the current pace of change, the gender gap will not close until 2186.

To borrow a quote from Einstein: today’s problems cannot be solved with the same thinking that created them. Many of you are going through your own HR transformation journey. At IBM, our HR function has been transforming in various phases for the past decade. We started with outsourcing, then moved to centralization, followed by optimization. Today, we are facing our biggest transformation to date: the cognitive era, powered to deliver smarter outcomes. Cognitive has allowed us to accelerate our transformation, impacting the end-to-end process of our talent lifecycle, from attracting candidates and onboarding new hires, to retaining and growing our talent.

As we look ahead to new recruiting trends for 2018, here are 9 ways IBM will #ReinventRecruiting in talent acquisition.

1. UPSKILL THE FUNCTION to differentiate with domain and organization capabilities


As we enter the cognitive and digital era, we will all work differently. 100% of roles will change in the future. To support the changing needs of businesses and candidates, talent acquisition teams must expand the role of traditional recruiters. This means recruiters must embrace 21st century recruiting skills that focus on driving business value and outcomes rather than acting as an administrative function to fill open requisitions.

2. HORIZONTALLY SOURCE to build ready-now talent pipelines


Our competitors have changed. Many of our business lines are competing against the same companies. Skills are no longer reserved for specific business units; they are viewed horizontally, to support various verticals, as distinct lines of business seek analogous skills in Agile, cognitive, and cloud. At IBM, we have moved away from requisition-based sourcing and now focus on the commonality of skills horizontally across businesses, domains, and industries.

3. WORK AGILE to increase speed and predictability


To use Agile effectively in a non-technical function like recruiting, companies must incorporate Agile methodologies into the DNA of the talent organization. An Agile talent team increases speed and predictability, prioritizes requisitions based on complexity and value, runs timed sprints with Kanban boards to manage the hiring process, and creates social contracts with hiring managers to gain commitment. Successfully integrating Agile into the talent function delivers solutions and outcomes to the business faster and with more impact than ever before.

4. CREATE A RECRUITING-FIRST CULTURE to support continuous hiring


Building a recruiting-first culture means treating every candidate like a customer and empowering every employee to become a talent ambassador. What can you do to create a recruiting-first culture? Know your talent better than you know your customers. It’s about crafting personalized, digital messages and engaging assets for employees to share on their social networks, which extends the message’s reach and ultimately drives talent engagement. This creates an environment of employees who are more engaged and connected, which leads to higher retention rates and higher-quality talent.

5. TRUST-BASED HIRING to create candidate pooling and optimize talent pipelines


In many organizations, business units competing for the same in-demand skills fight over the same talent pool without taking the needs of candidates into consideration. IBM takes the opposite approach. Our trust-based hiring model looks at preferred skill profiles and showcases them under one requisition, encompassing the demand across all business lines and filling the talent pipeline with quality and diversity. Then, subject matter experts we call “Cognitos” form a central team of interviewers to help both the business and the candidate decide on the ideal placement for the highest chance of success and contentment.

6. PROACTIVELY SOURCE to increase passive hiring and instantly match for performance success


Watson is IBM’s artificial intelligence platform, helping businesses in nearly every industry across the globe make more informed decisions. Within talent acquisition, Watson’s ability to proactively source and find applicants who match key success profiles (and who may have been missed by recruiters) has kept our talent pipeline buzzing and more diverse than ever before. Tapping into Watson’s vast network of data and predictive technologies has allowed us to recruit more inclusively and increase diversity across our talent pool.

7. COGNITIVELY ASSIST CANDIDATES to engage new prospects


A dynamic world requires dynamic tools and talent. Our cognitive-infused tool, “Watson Candidate Assist,” allows us to personalize the candidate’s experience on our career website. With traditional job-matching sites, applicants find jobs based on skills they enter. With IBM, Watson personally engages with job seekers by asking questions to learn what’s most important to them in a job and company. Think of it as a Pinterest board for pinning key attributes about a company’s work environment, culture, specialties, and more. So far, we have seen 86% of individuals engage with Watson and 35% more people apply for a job based on their interaction with Watson.

8. PERSONALIZE OFFERS to enhance the candidate experience


Every candidate is unique. There is no “one-size-fits-all” benefits package. We developed the “Personalized Offer” tool to empower candidates to customize their benefits package based on their unique needs. When early professional hires receive an offer from IBM, a text message is sent to them from Watson inviting them to personalize their package. Through their mobile device, they can customize up to three benefits: up-front cash as a signing bonus, funding for learning grants, and deferred cash for their 401(k). In 2018, we will be extending the number of benefit options available to customize as well as improving the end-to-end experience.

9. INTERVIEW WITH COGNITIVE to help hiring managers and candidates better prepare for interviews


Given the human element in the interview process, some hiring managers may display unconscious bias in the questions they ask candidates, whether they realize it or not. To reduce this predisposition during live interviews and keep the focus on the quality of the candidate, Watson continuously looks for triggers and indicators that suggest performance success, and the hiring manager is then presented with questions that focus solely on performance success.

I believe the role of the talent team is even more critical to the success of a company in this digital era, especially with the rise of artificial intelligence. According to a recent Entelo report on 2018 recruiting trends, a whopping 62% of companies plan to invest in AI technology for recruiting purposes in 2018.

To continue to grow in this function, we must learn to harness technology to develop domain expertise, build deeper candidate relationships, and become trusted advisors to the business. This is just the beginning. At IBM, we are making big bets, experimenting often, and learning every step of the way as we transform our organization. Following these nine strategies, we have seen success with our initial efforts to reinvent recruiting, and we are excited to take them to the next level.

Wednesday, 6 June 2018

Creating good vibrations with app modernization

What do sleigh bells, bongos, empty soda cans, trains and a barking dog have in common with application modernization? To explain, let’s travel back to the 1960s.

In 1966, the Beach Boys released their album Pet Sounds, which is often regarded as one of the most influential albums in the history of music.

Pet Sounds advanced the field of music production through the way the band was able to capture a wider, more complex mix of sounds into one song. It had not really been done before. The Beach Boys’ Brian Wilson, the creative genius behind Pet Sounds, accomplished this through the use of multitrack recording, which is the process of capturing different channels of sound on the same recording medium, then dividing them into separate tracks that run parallel with each other for playback.

In much the same way, developers deconstruct traditional, monolithic applications into microservices, each running in its own container.

Wilson’s audio innovation enabled the band to record on both four-track and eight-track recorders, mixing a vast array of musical instruments rarely used in rock music, such as an accordion and a ukulele, and capturing completely off-the-wall sounds such as a soda can, a bicycle bell and even a dog barking.

By applying the same principle behind microservices, creating loosely coupled services instead of one large monolithic application, Wilson, like every modern-day musician since, could record music in loosely coupled audio tracks instead of recording all of the elements of a song on a single track.

Before the multitrack recorder, all of the singers and instrumentalists had to sing and play together during a recording session that was captured on one single track. If a musician wanted to change the melody of a vocal, change the tuning of a guitar or simply experiment with the song, they would have to start from scratch and re-record it all together. This process is eerily similar to how a monolithic application operates, and it raises the same problems developers face: a lack of the flexibility needed to scale functions, make alterations and add new features. It also creates higher maintenance costs.

Multitrack recording had been around for a few years before Pet Sounds, but that groundbreaking album made the containerization of music into multiple tracks an industry standard and paved the way for new sounds from other acts, such as The Beatles and Pink Floyd. Many recording artists have since departed from the small-ensemble electric rock band format, embracing the ease of scaling new sounds into musical compositions by separating the elements of a song into multiple tracks.

The same can be said about app modernization. The idea of containerizing older, monolithic applications and breaking them down into smaller services that can take advantage of cloud services, while reducing costs and simplifying operations, is beginning to revolutionize the way enterprises support their entire application estate. There are also other approaches businesses can take to start their modernization journey that may better align with their application inventory and needs.

A few approaches include:

◈ Containerize the monolith to reduce costs and simplify operations.
◈ Expose on-premises assets with APIs, making established assets that are difficult to move to the cloud available to new applications.
◈ Refactor into microservices by breaking down monoliths into deployable components.
◈ Add new microservices to innovate incrementally and establish success early.
◈ Selectively refactor or strangle the monolith to incrementally sunset it (a minimal sketch of this pattern follows the list).
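
As a rough illustration of the strangler approach, the following sketch shows a thin routing facade: requests for routes that have already been migrated go to a new microservice, while everything else still reaches the legacy monolith. The hosts, ports and route prefixes below are hypothetical, and a real deployment would typically use an API gateway or ingress rules rather than hand-written Python, but the routing idea is the same.

```python
# Minimal "strangler" facade sketch (hypothetical hosts, ports and routes).
# Already-migrated routes are forwarded to the new microservice; everything
# else still goes to the legacy monolith, which can be retired piece by piece.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

LEGACY_MONOLITH = "http://localhost:8080"       # assumed legacy application
NEW_MICROSERVICE = "http://localhost:9090"      # assumed refactored service
STRANGLED_PREFIXES = ("/orders", "/invoices")   # routes migrated so far


class StranglerFacade(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route based on path prefix: migrated paths hit the new service.
        backend = (NEW_MICROSERVICE
                   if self.path.startswith(STRANGLED_PREFIXES)
                   else LEGACY_MONOLITH)
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
            content_type = upstream.headers.get("Content-Type", "text/plain")
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("", 8000), StranglerFacade).serve_forever()
```

As more routes move into the strangled list, less and less traffic reaches the monolith, until it can be sunset entirely.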

Much like how the Beach Boys enlightened millions of musical acts by breaking down barriers for how music was traditionally produced, IBM is empowering enterprises to accelerate agility and reduce operational costs by helping them modernize their existing environments.

It can be a serious challenge for an organization to get started on their modernization journey in a multicloud environment, which is why IBM is bringing app modernization expertise to a city near you. To learn more, register for one of these upcoming application modernization events.

Tuesday, 5 June 2018

Automating Tasks Using IBM Robotic Process Automation with Automation Anywhere

With robotic process automation (RPA), you can automate your routine tasks quickly and cost effectively. RPA bots can easily integrate with your broader automation initiatives — such as process and decision automation, or data capture initiatives — to expand the value of your automation program.

-Accelerate time to value. Create, test and deploy new automation schemes in hours, instead of days or months.
-Reduce human error. Virtually eliminate all copy-and-paste mistakes that result from swivel-chair integration (see the sketch after this list).
-Increase throughput. Complete automated tasks in seconds or minutes, around the clock, to deliver higher value for your customers.
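
To make the “swivel-chair” point concrete, here is a minimal Python sketch of the kind of repetitive data-transfer task an RPA bot typically takes over: re-keying rows from one system’s export into another system. This is a conceptual illustration only; Automation Anywhere bots are built in the product’s own bot designer, not in Python, and the file name and target API endpoint below are hypothetical.

```python
# Conceptual sketch only: Automation Anywhere bots are built in the product's
# own bot designer, not in Python. This simply illustrates the repetitive
# "swivel-chair" task RPA removes: copying rows from one system's CSV export
# into another system without manual re-keying.
import csv
import json
from urllib.request import Request, urlopen

TARGET_API = "http://localhost:8080/api/records"  # hypothetical target system


def transfer(export_file: str) -> int:
    """Post every row of a CSV export to the target system; return the count."""
    moved = 0
    with open(export_file, newline="") as f:
        for row in csv.DictReader(f):
            req = Request(
                TARGET_API,
                data=json.dumps(row).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urlopen(req) as resp:
                if resp.status in (200, 201):
                    moved += 1
    return moved


if __name__ == "__main__":
    print(f"Transferred {transfer('daily_export.csv')} records")
```

An actual bot would add the error handling, logging, and credential management that make tasks like this reliable at scale.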

Learn how to create and manage bots in the new five-day course, Automating Tasks Using IBM Robotic Process Automation with Automation Anywhere, available as WB501 (classroom) or ZB501 (self-paced).

Description


IBM Robotic Process Automation with Automation Anywhere can be used to create a digital workforce to automate repetitive tasks, maximizing your knowledge workers’ productivity by allowing them to focus on higher-value activities.

Through instructor-led presentations and hands-on lab exercises, you learn about the core features of IBM Robotic Process Automation with Automation Anywhere. You also receive intensive training in developing, deploying, running, and managing Bots. The course uses realistic scenarios and a case study to illustrate the principles and good practices for developing Bots. The lab environment for this course uses Windows Server 2012 R2 Standard.

Learning objectives


After completing this course, you should be able to:

• Provide an overview of Robotic Process Automation and its uses
• Describe the benefits of implementing an IBM Robotic Process Automation with Automation Anywhere solution
• Identify the components and features of the product
• Understand the terminology, tools, and capabilities of the product
• Describe the high-level architecture
• Understand when to use various recorders and commands
• Gain experience in developing simple Bots to automate repetitive tasks
• Describe the most common considerations when deploying and managing Bots
• Gain experience in deploying, revising, and managing Bots

Monday, 4 June 2018

Save Money with an IBM Cloud Digital Subscription

Save your company money by locking in your spending and terms up front. How? If you’re currently an IBM Cloud Lite account user, you can upgrade to an IBM Cloud subscription account.

If you’ve test driven AI, machine learning, IoT, and database capabilities through IBM Cloud Lite, and you’re ready to unlock premium services such as containers and blockchain, then a subscription account is for you.

Before today, the only way to upgrade your account to a subscription was by calling the IBM Cloud team. Now, through a ‘digital’ subscription, you can upgrade your account from your dashboard without calling anyone. Starting at $100 USD for a 12-month term, a digital subscription can help you save thousands of dollars when you upgrade.

For example, if you project your monthly spend to be between $5,000 and $9,999 with a 12-month term, you could save 12% through a subscription. Of course, your cost savings increase as your committed spend and term increase.
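
As a back-of-the-envelope check on that example, the short calculation below applies the quoted 12 percent discount to a hypothetical $7,500 monthly spend over a 12-month term; your actual tiers and rates appear in the dashboard when you configure the subscription.

```python
# Hypothetical worked example: the 12% rate is the tier quoted above for a
# $5,000-$9,999 monthly spend on a 12-month term; the spend figure is made up.
monthly_spend = 7_500   # projected monthly spend in USD
term_months = 12        # subscription term
discount = 0.12         # discount for this spend/term tier

annual_spend = monthly_spend * term_months
savings = annual_spend * discount

print(f"Annual spend:      ${annual_spend:,.0f}")   # $90,000
print(f"Estimated savings: ${savings:,.0f}")        # $10,800
```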

Upgrade in 5 quick steps


Step 1: In your dashboard, select billing on the left panel.

Step 2: Set up your subscription amount and terms.

Step 3: Review your payment schedule and charges.

Step 4: Set up your billing information.

Step 5: Finalize billing information and save money.

To recap, here are 3 reasons why you should upgrade your account:

◈ Save
Lock in your monthly spend and term commitments to save money
◈ Predict
Forecast spending by ordering more or less based on your needs
◈ Simplify
Upgrade your account with a few easy clicks

Sunday, 3 June 2018

IBM Cloud Private v2.1.0.3 Boosts Scalability and Security

IBM® just announced the release of version 2.1.0.3 of IBM Cloud Private, which provides guidance for General Data Protection Regulation (GDPR) compliance and adds new capabilities for securing, managing, and scaling your platform. Additionally, 2.1.0.3 includes support for both Microclimate and select open source runtimes.

General Data Protection Regulation


The new GDPR regulation is now in effect in the European Union. IBM has developed a dedicated web page about IBM Cloud Private platform considerations for GDPR readiness. It describes the features you can configure, and the aspects of the product’s use you should consider, to help your organization prepare.

Latest version of Kubernetes


IBM Cloud Private continues to evolve in lockstep with the community. This release includes Kubernetes version 1.10.0.

Tighter security options for administrators


We continue to tighten security on the platform and provide more options for administrators to control access to various parts of the system. The following enhancements are now available in 2.1.0.3:

● Role-Based Access Control (RBAC) for Helm repos and individual charts within a repo. You can now control which teams have access to which charts, limiting who can deploy, update, and delete your most critical applications.

● Use Service IDs and Service API Keys to better control which programs can access services running on your platform and to customize their access privileges.

● Use the IBM Cloud Private CLI to manage Kubernetes Secret passwords that secure communications to key services in the IBM Cloud Private platform. For example, you can set your own password for our built-in MongoDB service that stores authorization and authentication information. You can also set up password rules that ensure only strong passwords are used to protect your system.

● Audit logging of authentication and authorization actions on your system is now available.

● Set up end-to-end TLS encryption for your ELK stack. When enabled, all data passed between the Elasticsearch, Logstash and Kibana components is encrypted and secured with PKI-based authentication.

Certified scalability to 1000 nodes (!)


We continue to expand our scale testing and have now certified IBM Cloud Private to work with up to 1,000 nodes.

Day 2 Management & Usability


IBM Cloud Private was designed from the ground up using a microservices-based architecture, so it was natural in version 2.1.0.3 to use Helm to deploy our optional services, such as metering, monitoring, the service catalog, ISTIO, and Vulnerability Advisor. This makes future additions, removals, updates, and rollbacks of management services much easier. In this release, we start by providing the ability to enable Vulnerability Advisor post-installation.

Clients also need to change their cluster topology after installation. For quite some time, you have had the option to add or remove worker nodes in your cluster. IBM Cloud Private now also supports post-installation addition or removal of proxy, management, host group, and Vulnerability Advisor nodes by using the CLI. You can also leverage a VMware or OpenStack cloud provider to provision worker or proxy nodes from images.

Other enhancements that make managing the product easier include:

● “Launch” links in the dashboard so that you can directly open an application’s UI with one click
● More catalog filters, so you can find and launch applications faster
● Release notes for each Helm chart, including the version, what’s new, and any fixes or enhancements added.
● The internal Helm repository named local-charts can now be added to the Helm CLI as an external repository.
● The ability to use the metering service to measure usage of your own applications as well as IBM products running outside the IBM Cloud Private cluster.

Cloud Foundry Enhancements


IBM Cloud Private now provides a better way to deploy and manage Cloud Foundry. This improved Cloud Foundry now includes a new management console (technology preview), container-to-container networking, integrated monitoring, updated buildpacks, new OpenStack support, and an upgrade to Cloud Foundry version 270.29.

Microclimate and Runtimes Support


Whether you are modernizing existing applications or building new cloud-native microservices, your applications are increasingly composed of components built using multiple programming languages and frameworks. This is why IBM Cloud Private now includes support for Microclimate and open source Java, Node.js and Swift runtimes, along with select web and microservice frameworks. Microclimate enables end-to-end development that lets you rapidly create and edit Java, Node.js and Swift applications and deploy them through an automated DevOps pipeline using Jenkins. (Microclimate replaces Microservice Builder, which was available in earlier releases.) Together, Microclimate, the runtime support and IBM Cloud Private provide a complete, end-to-end solution for development and deployment on the most popular open source frameworks.

Betas and Technology Previews


Container Storage Interface (CSI) is now available as a beta.

The following features are available as Technology Previews:
  • ISTIO deployable by using Helm
  • Horizontal pod autoscaling by using custom metrics
  • Installing your cluster by using containerd as the runtime for cluster nodes