Wednesday, 31 May 2017

How to Design An Effective Work Environment

Optimal working conditions, like many other things in life, depend on individual needs, team needs, objectives, and the resources available.  Thinking there is one tool, one way, or one method to solve all problems leaves powerful opportunities for success on the table and reduces effectiveness to the lowest common denominator.

Since it usually all depends, that means it is critical we take the time to understand the…
  • individuals
  • teams
  • objectives
  • organizations
…that we are serving when we take on the challenge of designing and implementing solutions to achieve optimal working conditions.

How we think about our challenges directs our ideas and approaches for solving a problem.  When scaling is the goal, our minds fast forward to serving all people with our solutions.  While there is nothing wrong with a bold vision, it is critical we show some results, even small ones.  In order to do this, we need to think small at first.  Small assumptions, small experiments, small implementations, small impact.

Whom we decide to serve can also direct our ideas and approaches for solving a problem. One simple way to begin making this decision is by segmenting your audience.  When you segment the market, you have the opportunity to intentionally select one group of people, start small, and fully address their problems before moving on to the next or scaling that particular solution.

As you embark on designing and implementing solutions that contribute to an optimal work environment, consider applying the following process.

Segment the Audience

There are several ways you can segment your audience.  Clayton Christensen, Harvard Business School professor, suggests segmenting by the “jobs” people or teams are attempting to perform.  Ultimately, solutions help people do jobs more effectively, so focus on jobs as the primary method of segmentation.  Here are examples of some jobs people or teams try to complete in a company:
  • Learn new skills
  • Complete heads-down work (state of flow)
  • Meet with individuals (one-on-ones)
  • Meet with groups of people (meetings of 3+)
  • Schedule and complete group working sessions
  • Rest, reset, break
  • Eat
  • Network (internally & externally)
  • Obtain feedback
  • Get promoted
  • Contribute to the organization’s success
  • Understand how their work contributes to success
The list can go on and on.  When we segment our audience this way, we can begin addressing specific and clearly-defined situations that can be solved more effectively and thoroughly.

To begin this segmentation exercise, list as many “jobs” as you can, and then decide what segment you will serve first.  Within the segment you choose, start with people or teams that have this job in common and then further filter this group down to those you can access easily.  Keep the group small.  Don’t go for the kill (i.e. take on too many) on your first attempt, because if you miss, it will cost you a great deal.  Take several experimental jabs at your problem first.
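As a toy illustration of the selection step above, the filtering could be sketched like this (all team names, jobs, and the access flag are invented for the example):

```python
# Hypothetical sketch: pick a first segment by filtering on a shared
# "job" and on how easy each team is to reach, then keep the group small.

teams = [
    {"name": "Platform",  "jobs": {"learn new skills", "obtain feedback"}, "easy_access": True},
    {"name": "Sales Ops", "jobs": {"learn new skills"},                    "easy_access": False},
    {"name": "Design",    "jobs": {"learn new skills", "eat"},             "easy_access": True},
    {"name": "Finance",   "jobs": {"get promoted"},                        "easy_access": True},
]

def first_segment(teams, job, max_size=3):
    """Teams that share the chosen job, filtered to easy access, kept small."""
    candidates = [t for t in teams if job in t["jobs"] and t["easy_access"]]
    return candidates[:max_size]

segment = first_segment(teams, "learn new skills")
print([t["name"] for t in segment])  # ['Platform', 'Design']
```

The point is only to make the filtering explicit: a shared job first, then easy access, then a size cap so the first attempt stays small.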

Study the Segment Experience

With a job segment and group of people selected, go talk to people in your target groups so that you can investigate the journey they undergo to complete the job in question.  For instance, learn about everything related to how people in your organization go about learning.  Your mission here is to obsess over this problem, because then and only then will you be positioned to identify and deliver the most innovative and effective solutions.  Consider the following steps:
  • Observe how individuals and teams engage in the job you are studying. In the case of learning, you might observe people attending a company class or people at a company training event.  You could also ask a few people to complete a specific task related to finding learning opportunities and watch them look for this.  All the while, you are taking notes on their experiences, processes, successes, and pain points.
  • After learning from observation, you can begin talking to people about their experiences in learning and development; listen carefully for pain points.  In these conversations, ask mostly open-ended questions (i.e. who, what, when, where, why, and how).  When you hear a pain point, note it, and when the time is appropriate, repeat it to them to be sure you understood clearly and ask follow-up questions.
  • Ask questions about their most successful experiences in learning and development so that you can understand the good that already exists.  This will provide you with an existing foundation to build from – no sense in reinventing the wheel.
  • Ask questions about their least successful, most painful, and failed attempts at learning and development.  These questions will illuminate pain points, ineffective processes, and possible misunderstandings.  There is always the possibility that a pain point is nothing more than a misunderstanding of the current process that could be resolved with minimal effort.
  • Finally, ask them if there are any last thoughts or comments. Usually, after an effective interview, related ideas may have surfaced that would be of value to capture.

Brainstorm Solutions

Review your research.  Regroup with your team and review the problems discovered during these sessions.  Wherever possible, categorize them and identify themes.  Should you find themes within one segment, you have the opportunity to prioritize the most significant themes first.  Then, as you engage other segments, you might find similar themes across segments.  This is evidence of an opportunity to scale a solution beyond a single segment.  This scaling opportunity is not the same as scaling for larger audiences; that will come later.

Brainstorm with your team.  Begin brainstorming solutions with your team around these validated pain points.  For this activity, find a room with a whiteboard, list your validated pain-point themes, and then, using one post-it per idea, stick up as many ideas as possible next to each theme.  This does not mean one solution cannot be scaled to another theme; it is just for the sake of keeping things as organized as possible.

Brainstorm with your clients.  Repeat the same exercise with your clients.  Invite them to a session and ask them to list their ideas, one per post-it, near the appropriate theme.  By engaging the customer in the solution-building process, you will not only validate that your own ideas align with theirs, but you will also stand the best possible chance that the implemented solutions are met with support.

Select ideas for experimentation.  With several ideas listed per theme, now comes the task of selecting which ones to experiment with.  In order to reduce the list, first look for overlapping ideas and consolidate them.  Next, look for product/market fit; that is, look for the solutions that most closely fit the problem – ideas that address the problem theme, no more and no less (to within a few degrees).  Some ideas will be too much firepower for a particular problem, and others may not be enough to effectively address its scope.  Find the right fit.

Experimenting and Measuring

Design a prototype.  Now that you have a few customer-approved solutions in hand, begin designing low-cost experiments (i.e. minimum viable products) to test.  An MVP is the simplest, roughest prototype you can get away with that still delivers minimally acceptable value to the client.  In other words, this is the absolute least someone would pay for.

Select your experimental group.  Select a group of clients and work closely with them to set up and conduct the experiment.  Find your baseline data; this will often come from your studies on the segment experience plus some analysis of the data you gathered.  This is your control data.  You can also select a blind control group to measure against after the experiment – any group of people engaging in the “job” who were not part of your experiment will do, and as always, favor those you have easy access to.  Blind groups are best because they reduce the risk of participants being led in any way.  Before you conclude this step, decide on the metrics you will measure, qualitative or quantitative.  You may not be able to measure everything, so do your best to track as much of the result as possible.  This part gets easier over time, as later experiments can often reuse the same metrics; the first few will be the most challenging.
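To make the "decide what range qualifies as success" idea concrete, here is a minimal sketch. The thresholds, metric, and numbers are all invented for illustration; a real experiment would set its own bands before measuring:

```python
# Illustrative sketch only: compare an experiment group's metric against
# a control baseline and classify the outcome into decision bands.

def classify(baseline, experiment, success=0.20, discuss=0.05):
    """Relative improvement over baseline -> 'success' / 'discuss' / 'ineffective'."""
    lift = (experiment - baseline) / baseline
    if lift >= success:
        return lift, "success"
    if lift >= discuss:
        return lift, "discuss"
    return lift, "ineffective"

# e.g. average self-reported learning hours per week
baseline_avg = 2.0      # control data from the segment-experience study
experiment_avg = 2.6    # measured on the experimental group

lift, verdict = classify(baseline_avg, experiment_avg)
print(f"lift={lift:.0%} -> {verdict}")  # lift=30% -> success
```

Agreeing on the bands up front is what makes the pivot-or-persevere decision quick later.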

Measuring results.  Once your experiments are set up, begin measuring the results.  In order to make quick decisions, know what you are looking for, that is, decide what range qualifies as success, worthy of discussing, and simply ineffective.  This will allow you to make quick decisions and move on to the next steps where you either pivot (i.e. adjust your approach) or persevere (continue down the current solution path).

Pivot or Persevere

Equipped with results from your experiments, you can now review them, decide which experiments were the most impactful, and invite your clients to review your conclusions with you.  Your clients will help validate the data you captured as well as provide qualitative feedback you may not have been able to capture with metrics.  In addition, including your clients in this process will help build support for the first phase of the implementation.  As each successful implementation concludes, the team can commence subsequent implementations.  However, as the audience grows, there will be a new challenge to address – scaling to large audiences.


This is where large companies excel – and they must, because scaling is a necessity.  Equipped with valuable lessons, validation, results, case studies, success stories, and most importantly, satisfied customers, you will have the best evidence in hand to make the strongest case possible for funding the larger phases of implementation.

Do keep in mind, scaling does not just mean duplicating this effort to go from 10 satisfied clients (teams) to 100 new teams.  Scaling is a unique challenge of its own, made easier by having strong evidence and support for the particular solution you are attempting to scale.  You are now going to encounter new clients, in different geographies, with different cultures, people, languages, ways of working, businesses, etc.  These broader differences will present new challenges to your solution and the manner in which you apply them. A one-size-fits-all implementation strategy will likely not work.  It will be necessary to segment your larger roll-out audience by categories that affect implementation.  For instance, if your solution requires specific systems, start with those groups that already have access to and experience using those systems.

Essentially, when you are ready to scale, repeat this process, with scaling set as the new challenge.

Monday, 29 May 2017

The new “hype” in Sweden – Large Scale File and Object Storage

Something is changing in Sweden at the moment. Vendors are seeing a huge increase in demand for large-scale storage systems for file and object access. The requirements for these solutions also differ a lot from the standard SAN or NAS storage systems normally proposed alongside an equally standard virtual environment. So what is driving this change, beyond the normal growth of existing applications? One reason I know to be true is the changing requirements of the applications and business functions running in Swedish IT environments. These applications are moving beyond the traditional business-focused functions (CRM, ERP, etc.) to also handle data used and created for the classic buzzwords of the moment: analytics, IoT and big data. Sweden specifically is doing fun work in this area – and the facts are on the table: we are pretty tech-savvy in Sweden (Spotify, Skype, Minecraft, etc.) and we like to be on the edge of technology innovation.

This change has been talked about for a long time. I can remember many meetings over the years where, on multiple occasions, I talked about this growth – usually on slides with a graph showing how unstructured data will explode. You know the one I mean (see below).

But in real life, this explosion didn't really happen during that time. In the end, customers still needed the same type of system, without any real architectural difference. So we proposed a standard SAN or NAS system with some cool functions – every vendor of course has one of those and competed with us for the deal.

So why am I writing about this now? Well, I believe now is the time to really talk about this unstructured data growth and how the different vendors can solve the problem. We have seen so many requests for a storage solution where the sheer amount of data cannot be handled by a standard SAN or NAS system, because it does not scale enough (in performance or capacity), cannot be managed efficiently enough, or lacks the availability and reliability needed in these large configurations.

Why are we different?

From here on I will be pretty straightforward. What follows is just the start of a list of reasons why, and where, we stand out from the normal NAS products delivered by the major vendors in the market.

“Spectrum Scale – a true software defined solution by IBM”

The Basics:

  • We provide a storage solution that can export one or multiple filesystems and/or object stores over standard and native network interfaces with added advanced functions such as replication, snapshots etc…

The “why” list:

  • You create your own storage controller configuration. We can be installed on any server platform from any vendor in the market – x86, Power, and even mainframe. Yes, you guessed it: #SDS.
  • We support any form of backend storage media that can be presented as a block device. This means we have an even larger support matrix than Spectrum Virtualize (Virtualize/SVC supports 400+ storage systems). #open
  • We support the use of tape as an active tier and storage pool. With this implemented, files on tape media are still visible in the OS, just grayed out. When a file is accessed, we fetch it to faster media to match the performance needed. #coldstorage #weneedtosaveourdataforever
  • The “storage media” used can also be the cloud. We support Amazon, Swift and Softlayer as storage targets, and they can be used together with local storage as one single file system. NO GATEWAY needed. #hybridcloud
  • Summary of the first two points: put our software on whatever server you want, no matter the vendor or CPU, and at the same time store the data on any storage device, internal or external, no matter the vendor or architecture. This opens endless possibilities for building an up-to-date storage solution with the best storage media available today and tomorrow – not the best storage media of three years ago, when you bought a SAN array that aged while the vendor stopped developing the hardware further. Sound familiar? #prettyopeniwouldsay
  • Because you can build your storage solution on the IBM Power processor and server architecture, you have the option of creating a storage system that almost only mainframe users would otherwise experience. The Power CPU has, without a doubt, the best architecture for processing data, and this can be yours without buying the biggest, baddest storage system on the market (a.k.a. the DS8000). #ibmpower
  • Our solution scales further than any competitor on the market. Systems at 3-digit-PB scale are running around the world today, performing at 3-digit-GB/s speeds. These numbers are the definition of the new age of data. This will of course not be for everyone, but when you need even one or two digits of those scaling units, you will need to think hard about the type of solution you implement. #infinitescaling
  • How do we achieve this performance and scale? Spectrum Scale is a parallel file system: the intelligence is in the client, and the client spreads the load across all storage nodes in a cluster, even for individual files. In traditional scale-out NAS, by contrast, a file can really only be accessed through one node at a time by an individual client (BOTTLENECK!). The architecture also lets us scale performance independently from capacity, and vice versa. #highestperformance
  • The Spectrum Scale file system is a global file system that enables collaboration between different geographical locations, all with access to the same data, using role-based functions to control how users can cache, write, read and push data between the different locations for the best efficiency and performance. Add Aspera (a high-speed data transport protocol) to that and you can really enable a #global-high-speed-filesystem
  • We use analytics to gain insight into the data, files and objects stored within your solution, based on patterns. Those patterns can be usage, users, groups, names, extensions, metadata, capacity, performance, time, etc.  From this insight we can move, find, identify or even remove data to make your system work at the highest efficiency and lower your total cost of ownership. #cognitivedatamanagement
On top of all this: end-to-end checksums, unified file and object native interfaces, a new enhanced graphical user interface, snapshots, asynchronous and synchronous replication, backup/restore integration, policy-driven compression, encryption, Hadoop integration and much more.
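The parallel-access idea in the list above can be illustrated with a toy model. This is not Spectrum Scale's actual implementation – just a sketch of the principle that a file striped across storage nodes lets a client fetch all stripes concurrently instead of pulling everything through a single node:

```python
# Toy model of parallel file access: stripes are spread round-robin
# across "nodes", and the client reads from every node at once.

from concurrent.futures import ThreadPoolExecutor

NODES = {0: {}, 1: {}, 2: {}}  # node id -> {stripe index: bytes}

def write_striped(data: bytes, stripe_size: int = 4):
    """Round-robin the file's stripes across all nodes."""
    for i in range(0, len(data), stripe_size):
        idx = i // stripe_size
        NODES[idx % len(NODES)][idx] = data[i:i + stripe_size]

def read_striped() -> bytes:
    """The client fetches stripes from every node in parallel, then reassembles."""
    stripes = {}
    def fetch(node):
        stripes.update(node)  # one concurrent request per node
    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        pool.map(fetch, NODES.values())
    return b"".join(stripes[i] for i in sorted(stripes))

write_striped(b"parallel file systems spread load")
print(read_striped())  # b'parallel file systems spread load'
```

In a traditional scale-out NAS, `read_striped` would effectively loop over one node; here every node contributes bandwidth to the same file, which is where the independent performance scaling comes from.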

Sunday, 28 May 2017

Lighting: India’s smart IoT solution

As street lighting continues to be operated and maintained manually by local municipalities in India, power consumption and transmission losses are getting too high to ignore. While some are making the shift to LED lights to save power, automation is the surest way to real savings.

The need for automation

Different municipalities have different budgets and vendors for street lighting, raw materials and installation. Of the millions of streetlights currently installed, only a small percentage use LED lights, while others might be CFLs, metal halide or sodium vapor. Thus an automation solution must work with the current infrastructure, without needing major overhaul.

Automation considerations

Remote monitoring: A street lighting automation system must allow supervisors to view streetlight statuses from the Internet. Important data such as operational hours, energy consumption, and faulty equipment must be made available at the click of a button.

Integration with existing infrastructure: It’s not feasible to change the millions of existing streetlights to suit an automation system. Instead, it is essential for any automation system to work with the existing infrastructure.

Fail-safe nature: Automation systems must be designed to work without a continuous Internet connection, and in all weather. It is imperative that streetlights are not affected, even if the solution itself fails.

Schedule: What’s the use of an automation system that still requires human intervention? An automation system must have schedules to operate lights according to the time of day. Going a step further, the schedules must be flexible enough to account for changing sunrise and sunset timings throughout the year.
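A flexible schedule of this kind could be sketched as follows. The sunset and sunrise values here are invented placeholders for a partial monthly table; a real deployment would use astronomical data or a solar-position formula for its latitude:

```python
# Sketch of a seasonal on/off schedule: lights run from sunset to
# sunrise, and the times shift automatically with the month.

SUNSET  = {1: "17:45", 6: "19:10", 12: "17:30"}  # month -> local sunset (example values)
SUNRISE = {1: "07:10", 6: "05:40", 12: "07:20"}  # month -> local sunrise (example values)

def lights_on(month: int, hhmm: str) -> bool:
    """True if lights should be on at the given zero-padded HH:MM time."""
    # Lexicographic comparison works for zero-padded 24h "HH:MM" strings.
    daytime = SUNRISE[month] <= hhmm < SUNSET[month]
    return not daytime

print(lights_on(6, "20:00"))  # True  (after the June sunset)
print(lights_on(6, "12:00"))  # False (midday)
```

The same lookup approach extends naturally to per-day tables if month-level granularity is too coarse.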

Manual override: While the system should run without human intervention, of course the final authority to switch a streetlight on or off must rest with a human being. Under certain circumstances, it may be important for supervisors to control the lights—for example, switching off streetlights when under maintenance, or switching them on when the schedule is faulty.

Sensor integration: Automation systems would be more efficient if they could sense the intensity of surrounding light. For example, in foggy, stormy or smoggy conditions, it would be essential for the streetlights to activate, regardless of the time of day. Thus, automation systems should include sensor integration.

Wireless nature: An automated street lighting solution should avoid extra wiring, digging and re-paving of roads to enable monitoring and control. Instead, the solution must be wireless, plug and play, and low cost in nature.

Automation models and solutions using IoT

Smart lighting automation system

Automation systems for streetlights cannot have a one-size-fits-all model. Existing hardware, budgets and installation efforts must be considered before moving forward. So let’s talk about two broad categories of the automation system: phase wise control and individual light control.

Phase wise control

This solution would control streetlights based on phases. A feeder panel (switching point), along with a gateway device and possibly an energy meter, would work perfectly in this situation. The energy meter would be used to find the phase consumption, and the gateway would upload it to the Internet. The gateway would also be responsible for implementing the schedule for phase operation. While the solution controls the three phases individually, it would not be able to control lights individually. As a result, pinpointing faulty equipment wouldn’t be possible with such a solution.

On the other hand, this solution would cost less than the alternative. Streetlight modifications would only be necessary where the streetlights are not LED-based and are not going to be replaced.

Individual lighting control

This solution would control each light individually. Each streetlight would have a circuit board installed to control the light, read the consumption, and transmit all data wirelessly. In order to successfully control the light, the board must be integrated with the LED driver. For the high range of data transmission, the chip must use a long-range technology and a mesh protocol to maintain robust connectivity. A gateway device would be needed to collect data from and control the streetlights, upload the data to the Internet, and control the lights based on a schedule, if-then rules or manual control. The solution would be able to pinpoint faulty lights and track energy consumption by individual light. However, it costs more than phase wise control and should be considered when LED lights are replacing the existing lamps or when new streetlights are being installed.
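One way a gateway might combine the earlier considerations (manual override, light sensor, schedule) is a simple precedence rule. The ordering and the darkness threshold below are assumptions for illustration, not any real product's logic:

```python
# Toy decision function for a streetlight gateway.
# Precedence: manual override > ambient-light sensor > time schedule.

def light_state(manual_override, lux, schedule_on, dark_lux=20):
    """Return True if the light should be on."""
    if manual_override is not None:   # supervisor has the final authority
        return manual_override
    if lux is not None:               # sensor present and reporting
        return lux < dark_lux         # e.g. fog or storm darkness at noon
    return schedule_on                # fall back to the time schedule

print(light_state(None, 5, False))   # True  (dark due to fog, despite schedule)
print(light_state(False, 5, True))   # False (supervisor switched it off)
```

Keeping the schedule as the lowest-priority fallback also gives the fail-safe behavior described above: with no override and no sensor reading, the lights still follow their stored schedule even if the Internet connection is down.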

Optimization in the real world

Optimization of lighting

Automation systems, combined with certain policies, will help lower the operating costs of streetlights without leaving anyone in the dark.

Dimming in high traffic: Cities face the highest concentration of traffic between 5 PM and 11 PM, when streets are illuminated by both headlights and streetlights. If lights are programmed to run at 60% to 80% of their capacity at this time of day, we can save on operational costs.
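A back-of-the-envelope calculation shows the scale of the savings. The fleet size, wattage, and tariff below are invented example figures, not data from any city:

```python
# Rough sketch of the energy saved by dimming instead of running at 100%.

def nightly_savings(lights, watts, hours, dim_level, price_per_kwh):
    """Cost saved by running `lights` at `dim_level` instead of full power."""
    full_kwh = lights * watts * hours / 1000
    dimmed_kwh = full_kwh * dim_level
    return (full_kwh - dimmed_kwh) * price_per_kwh

# 10,000 LED lights at 100 W each, dimmed to 70% from 5 PM to 11 PM
saving = nightly_savings(10_000, 100, 6, 0.70, 0.10)
print(round(saving, 2))  # 180.0 (currency units per night)
```

Even with modest assumptions, a per-night figure like this compounds into meaningful annual savings across a municipality.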

Integration with light sensor: Light sensors would help to automatically switch off the streetlights during daytime, and switch them on in the evenings. Streetlights switched on during the daytime would soon become a thing of the past, saving a lot of money.

Special schedules: Individual control of lights gives us great flexibility in their operations. Schedules that would allow one in two, or one in three, to be switched off or dimmed would save energy. These schedules would usually work best post-midnight—say after 3 AM, when traffic is minimal.

Ready-to-use infrastructure: The existing light control infrastructure could be used for additional data collection. Since the existing mesh network is already connected to the Internet, the infrastructure could be used for pollution monitoring, fire alarms and Wi-Fi hotspots, amongst other things. With minimal additional cost and some planning, the lighting infrastructure could be turned into a multi-purpose smart initiative platform.

The time of streetlight automation is upon us

As smart city initiatives launch around the globe, smart streetlights should be a part of them.

Internet of Things automation systems are still in their early days, but smart streetlights will bring a number of benefits to communities and their governments. They will provide baseline data to help governments make informed policy decisions. They will help reduce energy consumption and eliminate the need for staff dedicated to streetlight operation. And, as the technology matures, there will be solutions for periphery lighting, especially college and university campuses, gated communities and townships.

The positive potential of modernizing the lighting industry is just too big to ignore.

Thursday, 18 May 2017

Ready for success with IBM Storage

Partnerships between IT leaders provide very strong benefits to clients by streamlining solution design, simplifying deployment and reducing risk. These relationships also create business synergy in which the strength of the partnership is greater than the sum of the individual partners’ capabilities. These are some of the important reasons why IBM created the Ready for IBM Storage program.

“Business partnerships are crucial for IBM to deliver comprehensive solutions to our clients,” notes Eric Herzog, VP, Product Marketing and Management – IBM Storage Systems and Software Defined Infrastructure. “These relationships also benefit our partners, while bringing many advantages to our clients.”

Most importantly, for current and prospective IBM clients, the Ready for IBM Storage program expands the scope of solutions by leveraging partnerships with leading technology vendors. The program features combined solutions of proven IBM Storage and partner offerings that have been co-designed to fit specialized client use cases.

“This simplifies everything,” states Bill Reed, Chief Technology Officer, Arizona State Land Department and a recent client of IBM and our partner, QCM Technologies, Inc. “We continue to work with the solution providers we know and trust, now with even greater confidence because we know they are validated by IBM.”

The Ready for IBM Storage program utilizes the IBM Solutions partner ecosystem to offer validated storage and software-defined solutions to users globally. For IBM’s Business Partners, it provides a framework for partnering with IBM to validate the interoperability of each partner’s solution components and IBM Storage. The partnerships are recognized by inclusion on IBM web platforms and by the provision of a “Ready for IBM Storage” mark that can be used on partner marketing materials. The objectives of the program include:
  • Identifying and bringing to market solutions that use third-party technology and create or reinforce key strategic partnerships.
  • Enabling Solution partners to self-validate software and hardware interoperability with IBM Storage products.
  • Creating a central repository of validated solutions and collateral available to clients, sellers and partners.
  • Addressing client use cases that cannot be met by IBM Storage alone.
  • Expanding the IBM Business Partner ecosystem.
  • Assisting IBMers with building an extensive partner ecosystem with comprehensive Solution partner programs.
“Ready for IBM Storage will ensure that our clients are getting solutions that are co-validated by IBM and our Solutions partners after having passed rigorous technical tests,” explains Jeff Eckard, VP Storage Solutions, IBM Systems. “This validation will give clients the assurance they require when making business-critical purchasing decisions.”

Both IBM partners and clients benefit from Ready for IBM Storage. The program helps our partners by:
  • Enhancing their market positioning through association with IBM Storage.
  • Increasing their brand awareness through inclusion in solution directories and joint marketing opportunities.
  • Accelerating sales activities by identifying a single program and point of contact to engage in partnering opportunities across IBM Storage brands.
  • Utilizing IBM Sales and Business Development resources.
The Ready for IBM Storage program reinforces IBM’s commitment to our Business Partners while delivering substantial value to our clients. This powerful new program can lead everyone to more successful solutions that address real business problems.

Thursday, 11 May 2017

Transform your Content Experience with IBM Content Navigator

You might be familiar with the Office Online Server integration with SharePoint Server or Exchange Server, but did you know that IBM Content Navigator also integrates with Office Online Server?

Microsoft Office Online Server (OOS) is an on-premises server for editing office documents in your browser. Several of my customers with Office 365 are considering taking advantage of OOS for greater control and lower network latency.

Specifically, this new Navigator integration enables users to create new documents or edit existing IBM Content Foundation documents in OOS directly from their Navigator desktop. Navigator users can create new documents with OOS using document templates. Documents are not downloaded to the local computer desktop, as new document versions are automatically saved directly to Content Foundation.  Multiple users can even collaborate by editing the same document in real time. To learn more about these detailed features, read our new solution brief.

Furthermore, this paradigm shift from desktop editing to editing in a web browser addresses several pervasive content management challenges. Here are some examples that I have encountered in my discussions with customers:

–    Call center users streamline the correspondence process with tailored Microsoft Word templates
–    Insurance auditors in remote offices can co-author Excel spreadsheets while collaborating on a conference call
–    Marketing teams can easily follow their document publishing workflow with automatic check-in and versioning of documents
–    Lifecycle governance teams reduce their risk by eliminating the need to download financial documents to the workstation

I will demonstrate the Office Online Server integration and other exciting Navigator features, like role based redaction, at our IBM Content 2017 events.

Saturday, 6 May 2017

Move to IBM Datacap on Cloud: Your Bridge to Digital Transformation

Cloud computing has become synonymous with modernization. IBM recently conducted a C-suite study and discovered that 66 percent of CIOs expect cloud computing and services to transform the way businesses operate. IBM’s managed cloud hosting solutions can help CIOs and IT leaders focus on growth initiatives by reducing the daily management burden of enterprise applications.

So, why are more and more organizations moving to the cloud? Here are my top picks for you to consider:
  1. It’s the future. Whether you like it or not, your organization will be moving to the cloud. Applications and technologies are more accessible when you move to the cloud. Researchers predict there will be more than 8.2 billion active mobile devices by 2020, and this alone will drive significant cloud adoption.
  2. It’s safer. It’s very important for those in charge of public sector data, and their suppliers, to exercise security – including data protection, data security and data jurisdiction – when using cloud services. Moving to the cloud for backup and recovery saves time, avoids capital expenditures, and leverages third-party expertise.
  3. It’s elastic. Managed service offerings are perfect for organizations with increasing or variable demands. It’s easy to scale up or down and this level of agility can give your organization using managed service offerings a big competitive advantage. After all, CIOs rank business agility as a top driver for cloud adoption.
  4. It’s cost-friendly. Cloud services can greatly reduce hardware costs when you pay as you go and reap the benefits of a subscription-based model. With convenient setup and management, it’s never been easier to take the first step to cloud adoption.
  5. It’s collaborative. Your teams can easily access, edit and share documents anytime, anywhere in the cloud. You can manage projects more effectively by making edits in real time with full visibility into the team’s collaboration.
Now that you understand the benefits of moving to the cloud, I want to share with you a new IBM managed services offering called IBM Datacap on Cloud. If you are looking to streamline the capture, recognition and classification of your business documents, then this offering is the way to go.

IBM Datacap on Cloud includes all the features and functions of the on-premises IBM Datacap solution, while avoiding the need for capital expenses, Datacap servers, help desk staff, Datacap software upgrades, support fees, and more. Organizations of all sizes can leverage IBM Datacap on Cloud to deploy applications. Simplified configurations enable deployment without the need to count, track or pay for specific users. It’s simple and easy. You can also purchase add-ons, including additional storage in one-terabyte increments (for either the dedicated environment or a non-production environment), to meet your organization’s data storage and performance needs. Separately priced optional services are available, including data migration, conversion and training, and integration with other business applications. IBM Datacap is also available for a simple, fixed amount per model, per month to support a set of users, storage and features.


Why wait? Get on the cloud and start seeing how your organization can capitalize on the agility, scalability and cost-effectiveness of the cloud. A digital approach can lower your costs, enhance customer engagement, and deliver better business outcomes.

Friday, 5 May 2017

Turn Archived Information Into Big Data and Analytical Assets

Datawatch Extends IBM Content Manager OnDemand to Help Unlock Value of Data

Many of the decision support, compliance, reporting and operational issues that businesses face are not efficiently solved with complex database query tools or data warehouses.

The answers, more often than not, lie in an organization’s transactional content. These reports, statements and correspondence – ingested and archived in IBM Content Manager OnDemand (CMOD) – create a treasure trove of valuable business information, if only you can extract the information from the page.

Monday, 1 May 2017

Increase Business Agility with IBM Case Manager and Box


I’m sure you have heard about the IBM and Box partnership that was announced last year.  Our strategic partnership with Box brings secure content collaboration to the IBM ECM portfolio, including Content Navigator, Datacap, Case Manager and StoredIQ.  After more than a year of working together, the Box and IBM partnership is stronger than ever, and we have great things planned to bring secure content collaboration and enterprise content management together to help solve some of the toughest challenges of becoming a digital business.


To thrive in this new digital economy, organizations across all industries have to produce, manage, store, and distribute business content.  The question then becomes: how can organizations activate that business content and put it to work for them?  The answer is advanced case management.  Case management gathers all the relevant content in one place so people can understand it, gain insight and take the next best action to ensure positive business outcomes.  It is all about working smarter: making teams more productive and improving business outcomes.  You use content every day to get work done, and you need an easy way to bring the right content together from different sources, like Box, at the right time in order to make the right decision.  IBM Case Manager brings together data, people and process to improve the way work gets done.

IBM Case Manager integrates with Box, allowing users to access Box content from within a case and post documents directly to Box.  Using IBM Case Manager to workflow-enable Box content can help to streamline critical processes, making you more productive and efficient.  The integration allows you to:
  • Collaborate with external parties and customers from inside the case environment
  • Boost speed and accuracy with users easily updating info or outcomes as part of case work
  • Pull content into Case Manager from a Box folder, or post documents to a Box folder directly from Case Manager
Our partnership with Box is truly changing the game in enterprise content management, providing even more capabilities for case workers who need to collaborate with internal or external parties.
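For illustration, the pull and post operations described above can be sketched against Box’s public API using Box’s open-source Python SDK (boxsdk). The helper names, folder ID and file ID below are hypothetical placeholders; this is a minimal sketch of the Box side of the exchange, not the actual IBM Case Manager integration code.

```python
# Hypothetical helpers mirroring the two integration operations above.
# Assumes Box's open-source Python SDK (boxsdk); all IDs are placeholders.

def post_case_document(box_client, folder_id, local_path):
    """Post a case document to a Box folder; returns the uploaded file object."""
    return box_client.folder(folder_id).upload(local_path)

def pull_case_document(box_client, file_id):
    """Pull a Box file's raw bytes back into the case environment."""
    return box_client.file(file_id).content()
```

With real credentials you would construct the client via boxsdk’s `Client(OAuth2(...))`; because the helpers only delegate to the client, they work with any object that exposes `folder()` and `file()`.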