Posted by: wiltjk | April 16, 2012

Frameworks Don’t “Do” IT Architecture

Simply put, frameworks don’t “do” IT architecture, architects do.

For about 18 months, I spent a week in a different country three times a month working with many of our industry's top EAs and Solution, Infrastructure, and Information architects. Many were certified in a given framework – which I would certainly promote as a worthy use of one's time and a sound investment in one's career.

So why were all these large corporations bringing me in if they already had invested so much in getting their architects certified in a given framework? Go back to the first line again. Although these enterprises had many architects certified in a framework, they were not executing well in the practice of architecture with that framework alone.

This is where something like IASA's five pillars of the Architect Core balances the execution equation: it complements investments in frameworks by polishing the skills architects need to execute effectively with whatever frameworks they've adopted.

A hammer and nail are effective tools in building something, but remember that it is a skilled carpenter who can use them to build something beautiful. It is important that we equally promote the architectural skills required to deliver sound solutions. A class is a start; an IT Architecture Apprenticeship program, however, would be an even greater investment/payback should an organization be willing to build one.

Posted by: wiltjk | April 30, 2011

Lessons Learned from the First Cloud Burst

So, Amazon's cloud services (AWS) are the first to fail – an outage largely attributed to human error during an upgrade. From an architectural perspective, what have those who rely on Amazon services learned?

Amazon has noted the many facets of what they learned from this experience and, from a cloud provider perspective, what they must do to prevent this from ever happening again.
 
This is great to know, and I'm sure all cloud service providers will likewise beef up their operations procedures to prevent this from happening on their turf.
 
However, it is important to note the following: Cloud is young. It is 1-2 years into what I call the new technological movement gestation period (a 5-year process). These things will continue to occur.

Since it is going to take time for our cloud service providers to mature to what we might call an acceptable, robust industry operations level, what have we learned from our architectural perspective to make what is available today work better for us?

  1. Know What Is a Cloud Service and What Is Not. I will suggest that 75% of what is being sold today as a “cloud” service is simply hosted infrastructure for some product. Period. I contend the tenets of cloud are (1) Redundancy, (2) Elasticity, and (3) a simple Pay-As-You-Go (PAYG) financial model. Those who tout themselves as cloud service providers, but are not, are setting themselves up for a greater fall than the AWS situation.
     
  2. State[less] Management Rules. Architecting solutions that are as stateless as possible means that you have no focal point on which your logic depends. State is the new single point of failure, as found when it is maintained in data repositories that are not redundant in real time (a minimal client-token sketch follows this list).
     
    Excuse the short tangent – but – I think the future place to store state will ultimately be in the client: devices such as phones, pads, etc.
     
  3. Elasticity is Tricky. Real Tricky. My exposure to cloud solutions has pointed out that automated elasticity is a rarity. Most of what I have seen is done by manually anticipating spike activity. Netflix uses their “N+1” redundancy model to protect themselves. Others manually configure their fabric according to anticipated activity. Continued manual experimentation will be necessary before automated means can be designed. Once better understood, automated elasticity will still take a generous number of experimental iterations because, initially, automation simply allows us to break things much faster.
     
  4. React to Poor Health Before Failure. Many are identifying that to work in the cloud, you must: Fail early. Fail often. Deal with it. If we monitor health prior to failure, we can make the necessary modifications to our service operations to mitigate failure. This involves trend-analysis algorithms that react in real time to trend changes as they begin, not after services fail (a minimal trend-monitor sketch also follows this list).
     
  5. Real Cloud Storage is in its Infancy. Using models that mimic your antiquated relational practices in the cloud may be the cause of your demise. Many new cloud storage models that deviate from the norm are surfacing, and they warrant learning a new way rather than making the cloud work like your existing internal systems. If you get burned by using your traditional storage mechanisms in a cloud service, then you are the responsible party when it fails.
     
  6. Cloud Solutions Need Operational Automation. Those AWS customers who fared well in the burst and those who fared poorly have one thing in common: their success or failure is attributed to how they manually reacted from an operations perspective. This is key, as all have stated that future success will require more automation on the operations side. This simply means that we cannot rely on the cloud service provider's operational architecture to take care of our interests. Rather, we must create our own frameworks around theirs to promote greater operational excellence and survive their inadequacies.
     
  7. Learn and Grow from this Experience. Now is not the time to bail on cloud computing; rather, it is the time to recognize that it is maturing. The lesson here, however, is to make your cloud entry strategy one of moderation. Do not throw all your service eggs into one cloud basket, as there are more maturity lessons to be learned. Better to learn them with less mission-critical solutions so that when the time comes, both the cloud service providers and your expertise will have evolved enough to create a solid architecture around those mission-critical candidate cloud solutions.
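Since point 2 is easier to see in code, here is a minimal sketch in Python (every name here is hypothetical, not any vendor's API) of keeping session state off the server entirely by handing it to the client as a signed token – any server instance can then handle the next request:

    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"rotate-me-regularly"  # hypothetical signing key; keep out of source in practice

    def make_client_token(state: dict) -> str:
        """Serialize session state and sign it so it can safely live on the client."""
        body = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def read_client_token(token: str) -> dict:
        """Verify the signature before trusting any state the client hands back."""
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("client state was tampered with")
        return json.loads(base64.urlsafe_b64decode(body))

    # No sticky sessions, no single repository holding the only copy of the conversation.
    token = make_client_token({"cart": ["sku-123"], "step": 2})
    print(read_client_token(token))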
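And for point 4, a minimal sketch (again Python, with hypothetical thresholds and simulated data) of reacting to a degrading trend rather than waiting for the failure itself – the alert fires when the recent average drifts well above the longer-run average:

    import random
    from collections import deque

    class TrendMonitor:
        """Watch a latency trend and flag degradation before outright failure."""

        def __init__(self, window: int = 60, recent: int = 10, factor: float = 1.5):
            self.samples = deque(maxlen=window)   # longer-run history
            self.recent = recent                  # size of the "right now" slice
            self.factor = factor                  # how much drift counts as trouble

        def observe(self, latency_ms: float) -> bool:
            """Record a sample; return True when the recent trend turns unhealthy."""
            self.samples.append(latency_ms)
            if len(self.samples) < self.samples.maxlen:
                return False                      # still building a baseline
            long_avg = sum(self.samples) / len(self.samples)
            recent_avg = sum(list(self.samples)[-self.recent:]) / self.recent
            return recent_avg > self.factor * long_avg

    monitor = TrendMonitor()
    for i in range(300):
        latency = 100 + random.gauss(0, 5) + (200 if i > 200 else 0)  # simulated degradation
        if monitor.observe(latency):
            print(f"trend degrading at sample {i}: mitigate before it becomes an outage")
            break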
Posted by: wiltjk | February 2, 2011

An IT Architect’s 3 Most Important Words

“As Measured By”

Sounds simple, doesn’t it? Not hard to say nor understand. However, they are also the least executed words in any IT solution today.

In terms of “as measured by”, IT has much to learn from the example set forth in the business strategy tools we use. Kaplan & Norton's Strategy Maps have a column labeled metrics; Kim & Mauborgne's Blue Ocean Strategy relies on measurable differences in the factors of its four actions framework; Carlson & Wilmot's NABC promotes a measurable Benefit from the offered strategic Approach.

The pattern here is that since “as measured by” is built into these strategy tools, we might consider building it into our IT architecture tools as well.


Business Technology Strategy

Let’s start with the IASA Business Technology Strategy pillar. When we architects analyze and model business requirements, we include justifications, reasons, and tradeoff considerations in our designs, but rarely do we clearly define success and value for the business – in distinct business measures.

If we add “as measured by” at the end of every business requirement and require our business stakeholders to define that “as measured by” in distinct business measurables, then we can design our solutions to provide the necessary evidence to show that business value is being achieved.

An example is to set a business success requirement to something specific and measurable like “a 10% increase in online sales,” and then build into the solution the metrics that can measure and provide evidence of this.
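A minimal sketch of what “building the metrics in” can look like (Python, with a hypothetical baseline and class name – the point is only that the solution counts its own evidence):

    class OnlineSalesMetric:
        """Counts completed online orders so the requirement can evidence itself."""

        def __init__(self, baseline_per_week: int):
            self.baseline = baseline_per_week
            self.orders_this_week = 0

        def record_order(self) -> None:
            self.orders_this_week += 1          # called wherever checkout completes

        def target_met(self) -> bool:
            # the stakeholder-defined "as measured by": a 10% increase over baseline
            return self.orders_this_week >= self.baseline * 1.10

    metric = OnlineSalesMetric(baseline_per_week=1000)  # hypothetical baseline
    for _ in range(1150):
        metric.record_order()
    print(metric.target_met())  # True: the solution produced its own evidence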

Quality Attributes

When examining the IASA Quality Attributes pillar, it only seems natural that we include “as measured by” in our definitions, yet, this area is probably where we fail most often.

There are Quality Attributes we blindly set without any “as measured by,” perhaps thinking their definition alone means we will achieve them. Think of the many times we state a usability QA of “a simple user interface” without any explicit definition of what that means. Leaving it up to user response after deployment may be disastrous. Rather, if I choose an “as measured by” along the lines of “as signed off in a UI focus group acceptance review,” then my QA also secures a commitment to having resources assigned.

For those times when we do include a measure for a given QA, we often get ourselves into trouble when we don't assign a meaningful metric that we can measure in the solution itself. For example, creating a performance QA like “a sub-second screen refresh” for an internet solution, when I have no control over the network delivery from my server to my client, just might be signing up for failure. Rather, if I set my “as measured by” to something along the lines of “a refresh equivalent to that of a search site,” then I have a metric that is measurable and attainable.
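That kind of QA can even be checked mechanically. Here is a minimal sketch (Python standard library only; both URLs are hypothetical stand-ins) comparing a page's average fetch time against the reference search site named in the “as measured by”:

    import time
    import urllib.request

    def average_refresh_seconds(url: str, samples: int = 5) -> float:
        """Crude proxy for perceived refresh: average wall-clock fetch time."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            urllib.request.urlopen(url, timeout=10).read()
            total += time.perf_counter() - start
        return total / samples

    baseline = average_refresh_seconds("https://www.example.com/")    # stand-in search site
    ours = average_refresh_seconds("https://app.example.com/page")    # hypothetical solution page
    assert ours <= 1.5 * baseline, "QA not met: refresh not comparable to the reference site"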

It Extends to all Pillars

BTS and QA are the most direct applications of “as measured by”; however, it can creatively be applied to the other IASA pillars of Design, Human Dynamics, and IT Environment as well. For example, an architectural design might be approved “as measured by” passing an Architecture Tradeoff Analysis Method (ATAM) or Perspectives-Based Architecture (PBA) review.

Three Simple Words – Huge Business Value

By applying “as measured by” to our architectures, we will not only clearly understand our business goals, we will also be able to better achieve them by delivering a solution that, through its operation, produces its own evidence of its business value.

Posted by: wiltjk | January 16, 2011

Phone-Wars – What’s an IT Architect to do?

Where’s the Beef?

In 1986, advertising agency Ogilvy & Mather created a hilarious commercial for Hardee's. At that time, the three national burger chains had created their own brand icons, with a heated battle amongst them creating an almost religious following based on which icon you liked the most – not necessarily which burger.

This was aptly known as Burger Wars
 

The Ogilvy & Mather commercial celebrated the irony of this by depicting a marvelous spoof war waged by McDonald's, Wendy's, and Burger King, with platoons of their respective icons (Ronald McDonald, Wendy's “Where's the beef?” Clara Peller, and Burger King's Herb the Nerd) in a WWII-like battle scene against each other.

The 60-second segment brought to light that the respective campaigns of these chains had gone well beyond the reality that their products were, after all, fast-food burgers!

If we were to carry this same theme to the current onslaught of smart-phone advertisements, it would have to be depicted more as a Star Wars battle in outer space with humans against androids…

The following is my attempt to cut through the clutter and identify the main business enablers when choosing a strategy to deliver smart-phone solutions.

An Objective Perspective – No Kool-Aid Here

Because biases often interfere with objectivity in these matters, let it be known that I currently do not own an iPhone, an Android, a BlackBerry, or a Windows Phone 7.

The only Kool-Aid I drink leaves a stain on my upper lip, not an opinion in my heart.

I will admit, however, that there is no way to fully understand all the dynamics involved, and I welcome comments on this post to instill greater clarity on the subject.

Phone-Wars is Truly Multi-Dimensional

The first thing to consider is that this topic is far from cut-and-dried. Advertising campaigns make it all simple and 2D, but it simply is not so (see Why do we only apply 2D thinking to 3D problems?).


There are many internal and external influences to smart phone platforms. Each influence can be multi-dimensional within itself as well. Let’s examine several of these many dimensions which must be considered…

Varying Business Models: Volume vs. Profitability

Focusing first on IASA's Business Technology Strategy pillar, I want to take what may be an over-simplified look at the sustainable-income business models in use. I will draw on two really great studies by Asymco (US Population by Phone Operating System and Can Android change the distribution of profit among phone vendors?) where we see that sales volume and profitability are two different beasts:

(Volume Comparison)

(Profit Comparison)

Apple: Apple obtains its profit from the hardware device sale, takes a [healthy] percentage on any apps purchased, and has residual profit coming from memberships (MobileMe) and ads (iAds). Delivering solutions on this platform will find profits mainly through iTunes sales and minimally through iAds.

Bottom line, profitability is found in a customer base willing to pay for what it gets. You don’t need to capture a majority segment of the business domain.

Android: Android devices, on the other hand, are subsidized by Google, which is actually a media company (i.e., it sells advertising like ABC, Time, etc.). Their push for advertiser-subsidized "free" apps on the Android platform is very much in line with this strategic approach (give away the OS and sell advertising as a recurring revenue stream on the devices). When an Android phone sells, both the hardware manufacturer and Google take their slice of the pie (either directly or indirectly). Delivering solutions on this platform will focus on many small profits through the Android ads system.

Bottom line, profitability is found in advertising on freely distributed device apps. Capturing more of the market is paramount for success.

Enterprise/Subscription: Windows Phone 7 and BlackBerry utilize an enterprise-license strategy where their devices, while masked as consumer devices, are really positioned with tight integration into enterprises that have a healthy investment in an underlying licensed infrastructure. Additionally, as in the case of Windows Phone 7, there may exist integration into a membership service like Xbox Live that subsidizes profitability. Delivering solutions on this platform will focus more on the platform vendor subsidizing your development costs and sales (i.e., what's in it for them to subsidize your costs?).

Bottom line, profitability is found in enterprise licenses and subscription service memberships, not consumer device sales.

Can't We All Just Get Along?

These three different business models all individually work well and can theoretically coexist in separate market spaces if you consider the market as a Blue Ocean Strategy form of non-competition (supported by the Volume Comparison above) – but the vendors don’t see it that way, do they?

Vendors attempt to obtain exclusive loyalties not only with their consumer customers, but especially with their supporting vendors and integrators (regardless of how many times they state they utilize an open-standards architecture) by enticing exclusivity in their relationships, enforcing development platform restrictions, pressuring adoption of their revenue models, etc. They do everything they can so that apps run best on their platform, if not only on their platform.


In a deep red ocean of competition, they push their products into markets outside their own business models in an attempt to increase overall market share (e.g., Apple iAds). Next, they position hardware manufacturers and/or telecoms against each other (e.g., Android on Samsung vs. HTC). Finally, there are the back-stabbing political attacks to instill fear of a competitor’s perceived weakness to promote their own agenda (e.g., remember Antenna-Gate?).

It is very easy to see how Phone-Wars has evolved.

Testosterone Metrics


 

Beware the metrics! (or, understanding the FUD) 

One interesting commonality about phone marketing is how items being sold are singled out by using some metric that may or may not have anything to do with product use or quality, but we are led to believe that it is a most important must-have capability. Here are some of the major ones in Phone-Wars:

 

  • Number of apps in their app store – what does it matter if you have 1,000 apps, 10,000 apps, or 100,000 apps if the app you really want is not to be found anyway? The reality is that the best-of-the-best apps will most likely be replicated on all device platforms.
     
  • Can you surf the internet while you're talking on the device? – Do you really need to do this? Maybe if you are trying to read/write email while driving, using the device as your GPS, and monitoring your stocks. Even better, with video calls, your connected party can actually see that you are ignoring them while surfing the web! Still, the marketing media will point out that if you cannot find a certain coffee shop while talking to your friend, you are, indeed, a failure.
     
  • Millions of Units Sold – a valuable measure for market penetration, but not necessarily analogous to unit profitability as shown in the Asymco study.
     
  • OS Upgradability – How easy is it to upgrade your phone's OS? One word of advice: don't make decisions based on futures. When you do, you most likely will be disappointed because your device may not be capable of delivering the future capability (anyone buy a notebook PC that was Vista-ready, only to find when Vista released and you upgraded that you could only run Aero Basic?). Because there are multiple points of responsibility/failure, any one of the many parties involved (hardware, OS, telecom service) may or may not support the upgrade. My 5-year-old smart-phone has never been upgraded because both the telecom and the hardware manufacturer declined to support it (i.e., two points of failure).
     
  • Network Coverage & Speed – who has the fastest and largest? These need to be weighed appropriately, and the fine print really must be examined; assumptions will quickly lead to trouble. Coverage is only important for where you and your users will be. A network can claim to be the largest in the world, but if it doesn't work where you will be, does it really matter? My phone network does not work in Europe or Asia unless I purchase a global phone, which is not even a smart-phone. The expense of using your phone out-of-country may actually be cost prohibitive, so an internet-connectable device using Skype or Google Voice may be a better approach anyway.

What is an IT Architect to do?

Well, it depends… (how all architectural questions are answered)

My overall advice is to push back against the pressures to lock into a single vendor platform. The phone device market is changing much faster than any other form of PC market. The rise and fall of hardware vendors, OS’s, and telecom services can alienate those who are less agile in their adoption, support, and solution thinking.

When building a phone app, consider what I call the Angry Birds Pattern. It is based on the Tech N' Marketing interview with Peter Vesterbacka. With only a modest-sized development team, Rovio has created a version of their highly successful game on multiple phone platforms using each platform's revenue model.

  • This means a purchased/free-lite version on Apple, ad-based on Android, and, for the enterprise platforms – if they want it badly enough, have them pay you for the development on their platform.
  • Without question, developers will have their own platform bias (Rovio developers like Apple best) – you will have to deal with this.
  • Build versions to their native platform – meaning avoid the temptation to rely on a one-framework-supports-all-platforms approach. These Philosopher's Stone frameworks only promote mediocrity, never fully delivering on their promises – so avoid the many headaches that await by not going there in the first place.
  • Instead, create an architecture of your own re-usable components (assets) where possible and branch to native platform specifics where necessary (see the sketch below).
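Here is a minimal sketch of that last point in Python (all names and the print-based "engine" are hypothetical; a real app would call each platform's SDK): the shared game logic is the reusable asset, and only monetization branches per platform:

    from abc import ABC, abstractmethod

    def play_level(level: int) -> None:
        print(f"playing level {level}")       # stand-in for the shared game engine

    class RevenueAdapter(ABC):
        """Platform-specific monetization hidden behind one interface."""
        @abstractmethod
        def on_level_complete(self, level: int) -> None: ...

    class PaidAppAdapter(RevenueAdapter):
        """For a purchased build (the Apple-style model): revenue came up front."""
        def on_level_complete(self, level: int) -> None:
            pass

    class AdSupportedAdapter(RevenueAdapter):
        """For a free, ad-funded build (the Android-style model)."""
        def on_level_complete(self, level: int) -> None:
            print("showing interstitial ad")  # stand-in for a platform ad SDK call

    def game_session(adapter: RevenueAdapter) -> None:
        # the reusable asset: identical logic everywhere; only monetization branches
        for level in range(1, 4):
            play_level(level)
            adapter.on_level_complete(level)

    game_session(PaidAppAdapter())
    game_session(AdSupportedAdapter())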

Consider the telecom service, too. Phone apps aren't reliant only on the hardware and OS. The call stack provided by the telecom vendor will have an impact on your development efforts. Further, the various telecom service vendors will often tweak their hosted OS (when they can) to provide supposed value-add features to entice adoption of a particular device on their network. It can all become very nasty, so development timelines must take this impact into account.

Enterprise Solutions have their own dynamics:

  • Should you standardize on a specific device/platform? Warning Will Robinson!
    • How can you standardize on any platform that has a life expectancy of 6-12 months? We have seen the damage done by doing this with enterprise PCs, where many are permanently locked into Windows XP. Do you really want to repeat this?
    • Compatibility from one platform version to the next may require much restructuring of your solution – plan for this, do not be a victim here.
    • Integrations to email and file shares (e.g., Dropbox, SkyDrive, etc.) make it much easier to share information from any platform with the enterprise – meaning the more flexible an enterprise can be, the better off their users will be in having the freedom to use their personal devices.
       
  • Should you build device apps or mobile format browser apps?
    • Let your business case, not your development capabilities, dictate this.
    • Bring in external expertise where you are lacking and learn from their example.
    • For many initial start-ups, mobile-formatted browser solutions often have a more immediate return on investment – just be sure to build to your lowest common denominator.

Look to the Cloud for phone-based solution architectures. Some of the most successful phone apps are built on cloud-solution architectures, and for good reason. The phone concentrates on delivering a great end-user experience while the cloud back-end does all the heavy lifting.

The best phone-cloud architectures will often have the client phone app cache major portions locally so it can remain functional when off-line.
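A minimal sketch of that cache-then-network idea (Python standard library; the URL and cache path are hypothetical) – fetch fresh data when connected, refresh the local copy, and fall back to it when offline:

    import json
    import os
    import urllib.request

    CACHE_PATH = "app_cache.json"              # hypothetical local store on the device

    def fetch_with_offline_fallback(url: str) -> dict:
        """Network-first fetch that keeps a local copy for offline use."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                data = json.load(resp)
            with open(CACHE_PATH, "w") as f:   # refresh the copy for the next flight
                json.dump(data, f)
            return data
        except OSError:                        # no connectivity: serve the cached copy
            if os.path.exists(CACHE_PATH):
                with open(CACHE_PATH) as f:
                    return json.load(f)
            raise                              # truly cold start with no network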

Using the cloud for phone apps actually is a great inroad (excuse, motivation, you pick which applies) to adopt a cloud infrastructure and create frameworks for many future applications that will support devices yet to be developed – so the more you polish your expertise in this area, the better off you will be in the long run.

Bottom Line – Who Wins in Phone-Wars?


Quite simply, the consumer wins. That is to say, if they can filter out all the smoke and noise and then take the time to choose a path that best fits their needs & desires.

If you tend to follow the crowd as a consumer, then you may be one of those individuals who changes platforms every 6-12 months, subsidizing all of us who can pick a platform and stay with it for, say, five years.

I guess it’s time my business partners and I revisit our standard corporate phone choice and actually consider upgrading!

Posted by: wiltjk | January 3, 2011

OLD IT, not SKYNET, will Kill the Enterprise

It's a new year, and now is the time for the oversell of health club memberships, optimism for a better year, and predictions. I have to say, I will only offer one prediction this year:

We will begin to see the imminent fall of large organizations, not because über-intelligent neural systems will end them, but because their dated, stagnant IT – now on life-support – will fail them, leaving them incapacitated and easily overtaken by smaller, more agile competitors.

 
Please don’t mistake me for a mainframe-hater. I have my career to thank for their existence. Upon graduating with a degree in Physics, I moved to Pasadena, CA to work on the Data Communications module of the Burroughs Medium Systems (B27/8/900-B47/8/900) Master Control Program (MCP) operating system.


It was great working with some of the most impactful technologists of our time building OS modules for what later would be known as the villain of the initial TRON movie (the MCP) and developing CANDE, the precursor to today’s user interfaces and IDEs.

But, let’s be honest here, that all should have been history by the end of the 1980’s. Right?

Well, by the end of 2010, I had met face to face with more mainframe-based 3090 systems than I was ever exposed to in the 1980's:

  • Flying home from an engagement in October, I was stranded in Amsterdam due to high winds. The airline system used to rebook my flight and select my seat – accessed from a high-tech flat-screen PC – was a 3270 terminal emulator, complete with half-duplex transactions and command-line keywords.
  • A few weeks ago in Bangalore, I went to five (5) different book stores looking for Switch: How to Change Things When Change Is Hard by Chip Heath & Dan Heath. Not finding it on the shelves, I had each store look it up on their inventory system. Again, it was a PC running a 3270 emulator with a keyword-based UI that displayed pages of results. Five different stores, five different inventory systems, all 3090-based applications!
  • Recently, I was purchasing light bulbs at a major hardware store chain, and noticed that the cash registers all ran 3270 terminal emulators. When supplying information, the tell-tale segmented underline fields were used.
  • Earlier this year, my wife and I were ordering new kitchen appliances at a major retail chain. Again, the system used yielded screen after screen of 3270 data input, ALL IN UPPER CASE!!!

I had no idea that IBM maintained such a large infrastructure presence in business!

These large organizations are all relying on inventory, POS, scheduling, delivery, and many other business systems run from hardware and operating systems that are literally on life-support.


Why life-support?
(a) The physical hardware has not been manufactured in years
(b) Few exist who can support them (most retired, many no longer around)
(c) Nobody with hopes of improving their career will ever desire to work on them

Organizations that wear these archaic systems as a badge of strategic IT efficiency (i.e., impervious to viruses) will be the ones that later make headlines like:

1024 Flights Grounded Indefinitely as Airline transfers back to Manual Bookings While New System is Built 
 

After 128 Years of Service, Another Retail Giant is Forced to Close its Doors After IT System Crashes
 

Last COBOL Developer Leaves Retirement for $125M Salary Eclipsing Tiger Woods

 

The New Old-Mainframe
I have not only seen the typical old-mainframe systems in use around the world, but also the new old-mainframe systems that are headed down the same path:


I was recently at a research facility for a leading technology firm. The technologists had the latest must-have hardware to be envied by any Geek, and yet they all were running Windows XP.

When I went down the path of “Why???”, it was quickly pointed out that they had at least one mission-critical application in their portfolio that could only run on XP. This is a primary reason enterprises avoid getting out of the rut of everything Windows XP. The next most popular excuse is that they have tens of thousands of units, making support for multiple OS versions too difficult a task to even attempt.

To this I ask:

  • Microsoft has provided the special capability to run specific applications in XP Mode through virtualization in Windows 7; why is this not an acceptable means of meeting this requirement?
  • How many resources and how much time does it take to make the latest hardware run Windows XP, counting 64-bit hardware and all the drivers required?
  • Is the weakest link that one mission-critical XP application, or is it really an IT department incapable of continuous IT improvement, such that it cannot embrace and incorporate modern technology?

When I survey all these antiquated systems that literally “run” the enterprise, I find myself realizing that in many organizations, IT is a major cost center. It is not only a financial burden to the enterprise; it also fuels a stifling of innovation and agility that may contribute to the ultimate demise of the organization.

Who really needs such a self-serving IT department?


Is your IT the real SKYNET?
(or, Is it time for new IT for a business to survive & be competitive?)
If SKYNET is smart machines taking control of your world, I sometimes wonder if IT organizations might be the same to the enterprise – but instead of an intelligent take-over, it is their limiting or dumbing down of capabilities that chokes the business rather than making it easier for the business to breathe.

If your IT department is not moving toward becoming a profit center, then yes, it may be time to replace them.

If your new projects are not developed with an upgrade/advancement plan three years after deployment, then it may be time to clean house.

If products like the emerging tablets and phones are banned from the enterprise because there is no IT support, or because they are viewed as a threat, then it may be a good time to find IT support that embraces them.

If your IT development staff is not begging to build solutions using the new devices emerging over the past couple years, then you may be at risk of stagnating the business to fit in the bounds of your IT department’s limitations.

If you think (or are told) your business is so unique that it needs to have a very specialized IT organization, then consider the following:

  • Every business considers themselves totally unique; it's what gives them their competitive advantage.
  • Every business that is unique has been able to train their IT staff enough about the business to get them to their current level of efficiency.

What-if there was a fix?
What-if the IT staff took the initiative to become savvy not only in technology, but also in business practices and the actual business domain in which they serve?

What-if they could clearly articulate the business of the business and communicate the strategy and value of a solution in strictly business terms?

What-if, due to their unique intimate knowledge of the business processes in which they have built existing systems, they were to become innovators and identify improved business strategies around these business processes – outside the normal bounds of IT?

What-if your IT did all this, could they then be considered a profit center?


Peter Till’s artwork (above) does a great job depicting what we expect of our portfolio of enterprise solutions – namely to think on behalf of our business – perhaps so that we don’t have to.

This is wrong! People solve problems; computers execute algorithms. Our IT organizations need to be in a position of performing the former and not blaming the latter when they fall short.

My work has brought me in touch with organizations that are seriously training their IT staff in the business domain, recognizing them as uniquely qualified problem solvers who can actively contribute to the business in this capacity. This enhances their ability to align the technologies that will support people-solved problems, making technology the tool, not the solution.

These organizations are looking at this investment to turn their IT into a profit center. But it goes much further than what appears on the surface. If their IT is now an internal profit center through innovations and greater efficiencies in the business, what if they sold this expertise to other organizations burdened by cost-center-based IT organizations? Now this business/technology knowledge pairing becomes a product that can be sold as a service. Now IT definitely achieves profit-center status!

Ironically, the best way to avoid the real-world SKYNET through dilapidated IT hardware and personnel is to embrace the initial motivation behind the movie entity where continual investment in, improvement of, and advancement of our IT people and systems is paramount.

Posted by: wiltjk | November 26, 2010

Frameworks are our Friend???

Trying to get some work done on a document authoring management system, I bask in the time I can save by having a wonderful framework from which I build my solution.

Or do I?

Looking at the email alerts I’m creating, I come upon this…

[Screenshot: an Outlook email alert stuck in a “Not Responding” state]

The issue here is identifying the responsible party (both in code and in development): the cause is too obscure to pin down from a simple “Not Responding” status.

When we rely on frameworks, we get much from a consistency and productivity perspective. However, we lose control of granularity and relinquish responsibility for performance, among other Quality Attributes.

How can we hold anyone accountable when there is no direct path for responsibility? Do Frameworks simply give us an excuse to not fix bad situations like the one shown above?

Time to reboot Outlook so I can get back to work…

Posted by: wiltjk | October 25, 2010

Tablet/Pad Devices – Past, Present, & Future

My Past

For me, it all began with the Etch-a-Sketch:

A portable device that channeled creativity. It came out the year I was born and has been a staple of both my kids' childhoods and my own.

From an IT perspective, it hit again when Microsoft introduced their rendition of the Tablet PC. This pen-based user experience made the PC personal once again, as it produced a more intimate interaction between human and machine.

The Compaq TC1000 slate tablet was my first experience back in 2003.

Microsoft did their homework and provided both a supportive OS with decent handwriting recognition and, over the next few years, applications like OneNote and InfoPath with tablet integration features that made application development on this platform a breeze.

Since its introduction, I have been privileged to develop tablet solutions for both the medical and technology industries with great ease. Those who know me, also know that I have scarcely used a keyboard since that time, becoming a complete pen-adopter.


Office 2007 was the peak of Tablet PC awareness, with some great product integrations including OneNote, my most-used application to date. The design of the ribbon and placement of user-interface components just worked perfectly with a pen-enabled device. InfoPath made it possible to develop pen-aware and freehand-drawing applications without code! Creating solutions around the slate tablet profile was actually exciting and proved to make the platform a contender for many industries.

In meetings, the slate tablet simply made sense. It was unintimidating and never created the barrier that a laptop does when sitting between individuals. My Motion Computing tablet became my new right arm, making travel a breeze, as the narrow space between airline seats was never a problem given its size and profile.

Having recognized the beauty of the tablet PC slate platform, I found it amusing during various engagements when the very people who complained it lacked a keyboard were the same ones who had pushed back on early PCs because, as two-finger typists frustrated with the keyboard, they said, "if it only worked like a pen and pad of paper, I could use it more…"

Convertible Tablets – the beginning of the end

Don't interpret this incorrectly. I love my Lenovo X200 tablet because it is fast, powerful, and very durable. BUT the whole concept of a convertible tablet PC is a most unfortunate compromise to accommodate a user community that has no business using tablet PCs in the first place. The weight/form factor of a convertible greatly diminishes its value: they are three times heavier, thicker, and much more bulky than their slate alternatives. Battery life is dismal at best, making it necessary to carry your power cord and endlessly hunt for one of the five rare power outlets available at most airports. The lightweight, easy-to-handle form factor of a slate is what made the tablet more personal in the first place. Unfortunately, in order to get the faster processors and larger memory configurations, a convertible is required.

The convertible tablet is simply an abomination created by manufacturers who should not have listened to their focus groups who said "if it only had a keyboard, I would buy one…" because guess what, they didn’t!

The Evolution of the UX…
When Vista came along, minor but meaningful improvements to the overall user experience ensued. During development of Vista and well into Windows 7, however, it seemed that the tablet platform was becoming an afterthought.

Vista did come out with a new Tablet Input Panel (TIP) interface that in some ways was a major improvement, but in other ways failed by removing its write-anywhere feature. Third-party Evernote filled that gap with its Ritescript ritePen product, an excellent replacement offering.

With innovation came the need for increased power and resources. At this time I learned how underpowered slate tablet PCs were kept in order to extend battery life. The system of TIP services became increasingly CPU-intensive, causing the tablet to lock up or fail under duress when the InputPersonalization service ran its learning-retention cycle.

Because of the performance issues raised by the new TIP system, I researched alternatives, including the ritePen and ShapeWriter third-party handwriting recognition products, which could be tuned to bring CPU utilization to nearly half of what the TIP required. They also opened the door to alternative means of interfacing with the tablet.

A New Paradigm for Tablet Input…
Years prior to the debut of the Tablet PC, Dr. Shumin Zhai invented Shorthand-Aided Rapid Keyboarding (SHARK) using the Alphabetically Tuned and Optimized Mobile Interface Keyboard (ATOMIK) at the IBM Almaden Research Center. I had experimented with early versions of SHARK and found it to be a serious paradigm shift from traditional input, but in a very positive way. Since its earliest renditions, SHARK has become a business of its own known as ShapeWriter and is now my preferred means of interfacing with any tablet device.

Using Actual Windows Manager, I found I could actively control the display and footprint of ShapeWriter to maintain a transparent display and automatically shrink into a rolled-up title bar when inactive to support better use of screen real estate.
The transparency aspect greatly improves the user experience as our minds tend to work better when we see larger context surrounding the area in which we are writing.

[Technical Note: ShapeWriter has since been purchased by Nuance, which has currently suspended downloads and purchases.]

A Solution for the Increasing Need for Power…
As mentioned earlier, slate tablets are often built on weaker hardware configurations to keep them light and to preserve battery life. At some point, you realize that no matter how much you like the slate tablet platform, it simply is not powerful enough for your serious work. Not wanting to give up on the slate form factor, I developed the following scenario (which I continue to use today) to extend the use of slate tablets in the enterprise where their stand-alone power is insufficient.

My approach has been to utilize the native slate tablet PC for remote/off-site information and requirements gathering, and then remotely connect via VNC/RDP to a more powerful enterprise computer for more intense activities:

This mode of utilization provides the personal user experience with all the power needed for any heavy-hitter activity. The only drawback, however, is that many of Microsoft's products disable their tablet-aware features when the application is not running on a tablet. This is unfortunate, as other third-party apps do not, providing the exact same functionality whether hosted on a tablet or hosted on a desktop/laptop and connected from a tablet (e.g., IE loses many features like hand-panning when not hosted on a tablet, whereas the Firefox and Chrome Grab and Drag add-ins always work).

My Present

This takes us to our current stage of evolution. Today's convertible tablets weigh in the 4-6 lb range and, even under Windows 7, sustain a measly 2-hour actual battery life. They are bulky and no longer offer any real advantage over a laptop.

Windows 7 and Office 2010 have become increasingly tablet-unfriendly. The Windows 7 TIP recognition has worsened since Vista, and Office 2010 Visio (a drawing application!) not only does not work well with the TIP, it doesn't even work with pen-flick gestures anymore.

My world as a tablet user was fast coming to an end until…

The Apple iPad, or as I call it, a portable Microsoft Surface.

This is not going to be a debate of one vendor over another, it’s about tablets. I have not touched an Apple product since I developed on the very first Macs, so I’m coming from a whole different perspective.

In an industry that typically sells fewer than two million tablet PCs annually (collectively – all vendors combined), Apple's iPad has started to sell over that many units per month. I would be remiss to ignore this impact on the tablet industry.

The user experience on the iPad is snappy, engaging, and very responsive. For PC users, think of it as a Silverlight UI to everything. I’m more pleased to task-switch than multi-task because it, too, is faster and less taxing on weaker hardware. Simply put, the iPad is a game changer because it gets back to the original roots from which Microsoft started. The iPad is a personal user experience in an appropriately sized 1.5 lb form factor with a battery life that exceeds 10 hours.

This means you can go an entire day without a power cord. That is a big deal. And it does it for 1/3 the cost! This from Apple: compared to my son's single MacBook Pro, I was able to purchase my daughter two Dell laptops over her five years in college and still come out $100 cheaper!

Because of its weight and battery life, I use my iPad to remote connect to my more powerful Windows 7 PC using iTap VNC & RDP and Jump Desktop apps:

This actually provides a far richer Windows tablet user experience than my previous tablets could, not only in the real portability it sustains, but also in its screen resolution. Viewing my remotely connected PC at 1400×1050 resolution scaled to the iPad's 1024×768 native screen is actually better than my laptop's 1280×800 native screen.

The iPad, however, does have its own shortcomings. The largest is that the iPad keyboard is opaque with no transparency option, and Apple has not opened their keyboard API so that ShapeWriter can be used as a replacement (doing so would be a seriously smart move for them).

But as far as devices go, I finally have one that can sustain a long travel time and go on the road for an entire day without the necessity of carrying along every form of cord for power and connectivity. I can actually go into an airport and not hunt for a bench by a power outlet! Freedom!

The iPad is not about a vendor’s operating system, per se. It is about a device that enables you to readily access those systems you will always be accessing in a manner that enhances your user experience and mobility. For me, it is the window into my preferred Windows world for enterprise productivity. The Apple Works apps will never replace my rich Office apps which I will continue to use to produce 99% of my content. The iPad device, however, is my preferred entry point to accessing those apps on my PC.

Microsoft actually is a winner here (though they may not yet recognize it). Devices used to access their systems are promoting the effectiveness of the many productivity applications they produce.

Android-based tablet devices are now emerging with Swype technology, which has some seriously promising characteristics. I look forward to these alternatives as they mature. This level of competition may be what was missing in previous tablet/pad iterations to drive accelerated growth through competitive innovation.

Our Future

The device industry is to PCs today what PCs once were to mainframes. The writing is on the wall, and the PC as we know it today has a limited life ahead of it. Those who adopt and build on the device movement will have the jump on those who propose to hold onto the PC platform indefinitely.

The cloud is my next laptop. Did you get that? The cloud is my next laptop. This means that before I fork over and purchase a new laptop to which I connect from my more portable tablet device, I will sooner lease this capability from the cloud where it can replicate and move wherever it needs to keep costs and my carbon footprint to a minimum.

I have already proven that I can do everything remotely on my existing PCs from my iPad; it's not a far move to simply do it on a cloud virtual server instead.

These offerings are beginning to emerge in the form of Desktop as a Service (DaaS) from providers such as molten.

The real future of tablet devices…
Connecting to a virtualized PC is just a step to that which is even more exciting to me, the future cloud services like Microsoft Office 365.


There is a video on their site where they state, "Office 365 delivers our products the way we intended them to be delivered." This leads me to believe that Microsoft is going to reclaim the presentation of their products, either by stricter guidance on the hardware on which they run (as they do with Windows Phone 7) or by making them accessible to any device, like Office 365 through the browser with a richer HTML 5-based user experience.

Bill Gates' dream of tablets is slowly coming to fruition – be it in a completely different fulfillment model than what he originally envisioned when he introduced the first tablet PC for the enterprise. One might look at his original idea as the right solution at the wrong time, but I would disagree. If he had never introduced his initial versions, we would never have the great devices emerging today. If Apple could have gone from the Newton to the iPad directly, they would have; they needed 7-8 years of Microsoft tablets to warm the market to that in which they now find success!

If I were to fault Microsoft for anything, it would be that it too often listens to the focus group research and sometimes not enough to the brilliant innovative minds inside its walls. Bill’s original tablet idea is what Apple is seeing great success with, not the renditions generated by customer feedback.

You cannot use an iPad app without some form of connectivity, as each relies on access to some form of cloud service, whether it be a map, recipe, or weather app. The future of tablet devices will likewise be dependent on cloud services from Google, Amazon, Microsoft, and others. The tablet device itself will become a loss leader for cloud vendors to grasp market share for the services they offer. The real solutions will be found in those cloud services and the flashy presentations they provide to consumers on whatever devices are out there, as the devices provide a tangible, physical connection to that nebulous virtual world known as the cloud.

I am genuinely stoked about this future!

Posted by: wiltjk | May 12, 2010

Why do we only apply 2D thinking to 3D problems?

You've seen it happen and you've participated in the discussions. You know the ones. Some topic of interest hits the public domain and everyone weighs in with their opinion, often second-guessing what others are saying about the given topic. Have you ever given much thought before you offer an initial opinion about a topic? If so, do you give it a 2D or a 3D level of thought?

Let’s start with a 2D Problem: I want to vote.

Consider the following diagram, where the red curve is the problem space (in 2D): I must meet the requirements; the black point, P, is the answer sought: I am qualified to vote; and the blue tangent line is the solution to get to that point: I must register to vote.

From a mathematical perspective, the solution is as simple as taking the first derivative of the red curve and setting it to 0 to arrive at the point P. Simple, 2D.
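In symbols (a small formalization of the sketch, assuming the red curve is a differentiable function y = f(x)):

$$ f'(x_P) = 0 \;\Longrightarrow\; P = \bigl(x_P,\, f(x_P)\bigr), \qquad \text{solution line: } y = f(x_P) $$

The blue tangent line through P is then simply the horizontal line of slope zero.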

2D problems with 2D solutions fit together perfectly as they are easy. That’s why we like them so much.

3D problems, however, are much more difficult. I can best illustrate with an example. Summer is almost upon us, and that means vacations and travel. The recent Gulf oil spill, combined with higher demand, means there most likely will be an increase in gas prices soon. The 2D debate begins – but in a 3D problem space…

Above is my feeble attempt at drawing a continuous surface curve in 3D, on which we will use the point P to represent the real-world 3D debate over consumer choices for cost-conscious and Eco-friendly transportation. Because most of us want to keep things as simple as possible, we jump on a given 2D bandwagon that satisfies at a more emotional level, like the following 2D derivatives taken against the 3D surface at the given point, P:

  • H – is a tangent line that represents the passionate drive that Hybrid vehicles are our salvation
  • E – is a tangent line that represents those who believe there is only resolution in the use of purely electric vehicles as they require no petrol at all
  • D – is a tangent line which represents those who know Diesel vehicles are the answer because they can operate on purely vegetable-derived fuels

Notice that each of these is a valid 2D solution (each is a line tangent to the curve at the given point) and yet none is aligned with the others; hence the debate that rages on. Mathematically, there are an infinite number of lines tangent to the given point, P, just as there are an infinite number of 2D cost-conscious and Eco-friendly transportation solutions to be passionate about (e.g., walking, cycling, buses, trains, etc.).

Wow! If there are an infinite number of 2D solutions to a 3D problem, is any one of the individual 2D solutions really any closer to solving the 3D problem? Quite possibly not! They may give us an illusion that we are solving the 3D problem and may even make us feel good for a while, but soon reality sets in and we see that the true 3D problem remains. We all drive Hybrids and we still run out of fuel; we all drive electrics and we can only travel 40 miles round trip maximum; we all drive Diesels and we no longer have vegetables to eat.

So, what does a real 3D solution look like? The answer is as simple as the math: the infinite set of lines tangent to a given point on a continuous curved surface is the definition of a plane!
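For the record, in the notation of multivariable calculus (assuming the surface is a differentiable z = f(x, y)), that plane is the tangent plane at P = (x₀, y₀, f(x₀, y₀)):

$$ z - f(x_0, y_0) = f_x(x_0, y_0)\,(x - x_0) + f_y(x_0, y_0)\,(y - y_0) $$

Every tangent line at P – H, E, and D included – lies within this single plane.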

Again, apologies for my feeble attempt at illustrating this. The fact is that when facing a real-world 3D problem, we should be seeking a real-world 3D solution, always. This means we are looking for planes, not lines.

The solution to the cost-conscious and Eco-friendly transportation problem is found in rewarding, through incentives, the use of any alternative that has a lower carbon footprint in general, not in selecting any given alternative specifically. Further, education would promote that all of these contribute to solving the problem from the same plane, and nit-picking one over another is just plain silly.

The same holds true for our IT architectural problems and solutions. We too quickly look for simple, cost-effective 2D products to apply to our 3D problem spaces, when it would be better to define a 3D solution framework within which multiple 2D product solutions can coexist.

Sometimes, the 3D solution is just too hard to find, even for the best 3D problem solvers. How do we deal with this?

Reality Check: 3D problems are at best much harder than 2D problems to solve. Even if you are capable of solving third-degree partial differential equations, it will take you exponentially longer to do so than if you did it in two dimensions. It really is that much more work!

We can’t always invest the time and money required to “do the complete math” so we often choose instead to rely on close approximations.

I can recommend that in these situations, you seek individuals who represent the best 2D solutions in the given problem space to team up and build a 3D approximation framework using the following guidelines:

  1. The individual 2D views must be vetted as valid 3D approximations
  2. Pick individuals whose 2D perspectives are more orthogonal than parallel – as this will lead to an approximation framework that is more complete, allowing for greater interpolation
  3. All individuals must be on the same plane or your approximation will be invalid
  4. Because the 3D approximation is an approximation, it will never be the complete solution, so don’t treat it like it is; stakeholders must understand this

Being on the same plane is paramount. Many will state they are on the same plane, but their own agendas will place them on completely different planes. Learn to recognize this!

As an example, look at our multi-party government’s view of national health care reform. All parties will state that because they want reasonably cost-effective and all-encompassing health care they are all on the same plane. However, because of hidden agendas where one party wishes to outshine the other, for example, there is no common plane, and efforts to produce a 3D solution approximation continue to just be a battle of 2D solutions from different planes.

I've had the opportunity to work with some of the best 3D and 4D problem solvers of our time and must say that when following the 3D approximation model, where all are coming from the same plane, I have seen 3D approximation frameworks produced that really make a difference.

What is the source of this common plane? Usually it stems from a common business goal, strategy, or shared passion for the betterment of others.

Case in point is all that is being done to promote the education and practice of IT architecture. From IASA to The Open Group to the now-defunct MCA, a common plane exists around making IT solutions better through the best holistic practices in IT architecture. There are those who consider this promotion of 3D thinking and problem-solving skills to be too much to ask of those interested in expanding their horizons around IT architecture. These organizations have never lowered the bar nor accepted 2D practices as acceptable substitutes in this 3D problem space. While many 2D-thinking critics take their stabs at this inflexibility, I can only appreciate these organizations' true understanding of the 3D problem space they are defending.

That said, I come back to several variations on my title. Why does the world insist we must solve our real-world 3D problems with 2D solutions? Why don't more of us stand up and identify how silly it is to belabor an infinite selection of 2D solutions? How long do we tolerate 2D solution seekers for 3D problems who can never be satisfied? Lastly, how do we promote 3D thinking in a world that labels those who promote it as ivory tower?

Thinking beyond 3D…

What of 1D, 4D, and beyond?

Let me start by qualifying that my response may be more conjecture than scientific fact. I trust those much smarter than I will correct my thinking with the solid facts.

Now, let me try to classify the dimensions:

1D – Acting without thinking. When someone cuts you off in traffic and you release an expletive, they usually can't hear you and your expletive doesn't correct their behavior, but it sure makes you feel good.

2D – Using logical inferences to draw a valid conclusion (like a tangent line). If the sun comes out today, then it will be warm. Note: not all valid arguments are sound.

3D – Using valid logic in pursuit of true premises to draw a sound conclusion that may not be absolute, but is possibly a framework that encompasses many (ideally, all) valid perspectives (like a tangent plane).

4D+ – Beyond logical inference: creatively discovering truths that would otherwise be obscured by logic alone. Einstein with his many theories is an example here.

Being a 2.5D thinker at best, I do not believe multidimensional thinking is linear in any way, nor do I propose a pragmatic path to get oneself there. I am simply basing my conjecture on observation of those who practice the art. Consider the non-linear curve as shown:

Stacked side by side:

This might imply two things:

  1. Multidimensional thinking has a “sweet-spot”
  2. There exists a threshold – which I refer to as the “threshold of human comprehension”

The latter opens a whole new debate. There are those who would imply anything and everything can be explained; however, due to whatever limitations exist in our complex minds, we cannot comprehend everything.

I might suggest that, individually, our personal threshold moves up and down as we learn, mature, and grow. However, I would also suggest that while our personal thresholds are relatively dynamic, there exists an absolute limit to what we as biological beings can comprehend. I firmly believe we have yet to come close to seeing that threshold, but there are times when we glimpse it and simply have to take things “on faith”.

-j

As my daughter graduates from university this Saturday, I’m bringing back my post on the skills that graduates need to be competitive…

[originally posted on 11/29/2008]

I’m bummed.

This Saturday is my semi-annual Central Michigan University College of Science & Technology Advisory Board meeting, and I’ll be returning from a business trip and have to miss it. One reason I’m bummed, beyond missing the chance to meet with some of the greatest scientific minds of our time, is that we have a really cool agenda. The focus of this posting is:

6.    Discussion – What knowledge, skills, and experiences do our graduates need to be competitive? To get those neurons firing…

Assuming that the core knowledge is in the major, what might be missing? What skills are needed for success? Statistics, data analysis, communication, comfort with international communication and travel? Finally, what experiences should our students have? Do they need internships, co-ops, summer jobs, research projects?

I didn’t have to think long at all on this one, as it comes up frequently among my colleagues (whom I hope will post their opinions as well). Here’s a quote from a recent gathering of worldwide colleagues when we were discussing new hires:

“Back in the 80’s, new-hires worked 80-90 hour weeks on their own, thinking they were making a difference, and they did. Today, new hires feel they are owed the privilege of working 32 hour weeks and lack both the passion and work ethic to make a difference.”

Discussion led to some possible reasons why:

  • Four (4) day school weeks and student/parent-centered (not education-centered) posturing of universities tend to promote a lazy research/work ethic and will often set students’ expectations as such
  • Many new hires are too often content with mediocrity. Perhaps employers need to create a reward system that capitalizes on the new “gamer” mentality that prevails in many technology new hires (e.g., each successful project results in “moving up to the next dev level with 5 virtual quality tokens”; see the sketch below)
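To make that “gamer” reward idea concrete, here is a minimal sketch; every class name, token count, and threshold in it is invented for illustration:

```python
# Hypothetical gamified reward tracker for new hires; all names and
# numbers here are invented purely for illustration.
class DevProgress:
    TOKENS_PER_LEVEL = 5   # assumed: 5 quality tokens advance one dev level

    def __init__(self) -> None:
        self.level = 1
        self.tokens = 0

    def complete_project(self, quality_tokens: int) -> None:
        """Award quality tokens for a successful project and level up."""
        self.tokens += quality_tokens
        while self.tokens >= self.TOKENS_PER_LEVEL:
            self.tokens -= self.TOKENS_PER_LEVEL
            self.level += 1

new_hire = DevProgress()
new_hire.complete_project(5)   # one successful project...
print(new_hire.level)          # ...and they've "moved up to dev level 2"
```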

My opinion:

Universities need to choose who they want to be:

  • Vocational training institutions whose metric of success is based on the quantity of job placements
  • Higher education institutions whose metric of success is based on the quality of their graduates’ ability to excel at thinking and learning

Universities can excel at one or the other, but too often fail when they attempt both.

Skills which make a difference in industry:

  • Communication (written, interpersonal, presentation)
  • Problem Solving (those that tend to come from mathematics and the pure sciences)
  • Learning to Accept Failure (gracefully) and to Learn from Failure
  • Internships and travel abroad can be most beneficial and a foreign language will take you even further
  • Learning to quickly learn and come up to speed on new concepts
  • Oral Exams to promote Thinking on your Feet will go a long way
  • If there is no passion, encourage students to instead pursue that for which they have a passion

Consider my Diagram of Success:

No matter where you set your sights (dashes), gravity (reality) takes you one step lower (solid lines).

  • If you enter higher education with the goal of becoming rich, you will never be rich, satisfied, or content; you may eventually consider your life is in the toilet.
  • If you set your passion and goals on reaching some form of perfection (the star) in your chosen profession, you will never reach that perfection (and at times, this will frustrate you), but you will always seem to have enough finances to get by, and you will find contentment and reward in your efforts. You may even have a major impact on industry.

-jw

Posted by: wiltjk | April 28, 2010

When 60% is an “A”

[originally posted on 11/29/2008]

At the recent Strategic Architecture Forum (SAF) in San Francisco, I was given the opportunity to discuss the topic of “What an Architect Needs to Know”.

My opening slides contained some statistics on what industry research organizations believe we should know.

Roger Sessions recently posted about Senate Bill S.3384 and Public Sector IT, identifying how government, through committee oversight, can “fix” this. Roger accurately identifies, “There is no possible way that S.3384 can be implemented successfully unless the government first takes steps to understand and manage IT complexity.”

A key point Roger captures here is complexity. It seems to me that many organizations prefer to adopt a philosophy around Occam’s Razor, thinking they will have greater success if they pursue simplicity and eliminate complexity. There is obviously some merit to this philosophy, except when the problem space and solution truly are complex.

The government suggests the best approach to dealing with complexity is to manage it up front and halt the development process once there is a 20-40% deviation, so the committee can evaluate the situation. The idea that we can mitigate the effects of complexity by unraveling it all up front implies, to me at least, that the problem space and solution were never complex to begin with.

In my talk, I offered a different way to look at all of this.

While pursuing my physics degree, I was soberly awakened to the approach the Central Michigan University Physics Department uses to teach the art of dealing with complex problems.

When I took a physics exam, 50% of the test covered material to which we had been exposed leading up to the exam, and 50% covered material we had never seen before. My professors were interested not only in how much of the material I had learned but, more importantly, in how I applied what I knew to what I had never seen and did not know.

What this meant was that a 60% on a test could easily be an “A”.
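As a concrete worked example (the individual scores are invented), here is the arithmetic that turns a raw 60% into a potential “A”:

```python
# Worked example (numbers invented): half the exam covers material we
# were taught, half covers material we have never seen before.
familiar_score = 0.90   # mastery of the material leading up to the exam
unseen_score = 0.30     # partial credit scraped out of the never-seen half

raw = 0.5 * familiar_score + 0.5 * unseen_score
print(f"raw exam score: {raw:.0%}")   # -> raw exam score: 60%

# Judged against how everyone else fared on the same unseen material,
# that raw 60% can sit at the top of the distribution: an "A".
```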

Is it just me, or does this closely align with the types of situations we IT Architects deal with in our problem-space/solution deliveries? We are brought in to apply our previous experiences and knowledge (often in the form of proven patterns) to problems with many unknowns. While we can’t guarantee success as outside observers measure it, we can apply our Deep Smarts to attack the problems presented, using the best-known techniques and practices to solve the unknown. Deep Smarts are the engine of any organization as well as the essential value that individuals build throughout their careers. Distinct from IQ, this type of expertise consists of practical wisdom: accumulated knowledge, know-how, and intuition gained through extensive experience.


As the unknown becomes more known, our industry responds with frameworks and even packaged software solutions. But this does not occur overnight or with the first iteration of a solution; it takes years to happen. At one time, for example, a spreadsheet was a completely new concept from a software perspective. Today, it is a commodity.

As we work with customers, we try to get a grasp of how well what they seek to accomplish with technology aligns with the business problem they seek to solve, and whether a package or framework exists that will expedite the solution. We then compare the knowns to the unknowns to suggest solution approaches that range from, on one end, more “out-of-box” but perhaps less aligned with business needs, to, on the other end, more complex adaptations that offer the flexibility to align more closely with the business strategy and promote user adoption. Of course, projected timelines and costs span equally.
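As a minimal sketch of that spectrum, and nothing more than a rough heuristic with invented thresholds, consider:

```python
# A rough, invented heuristic for the spectrum described above: as
# unknowns dominate, the suggested approach shifts from packaged
# software toward custom adaptation (with timeline and cost to match).
def suggest_approach(knowns: int, unknowns: int) -> str:
    total = knowns + unknowns
    unknown_ratio = unknowns / total if total else 1.0
    if unknown_ratio < 0.25:
        return "out-of-box package: fastest and cheapest, loosest business fit"
    if unknown_ratio < 0.60:
        return "configured framework: moderate cost, moderate alignment"
    return "custom adaptation: slowest and costliest, closest business fit"

print(suggest_approach(knowns=8, unknowns=2))   # mostly known -> package
print(suggest_approach(knowns=2, unknowns=8))   # mostly unknown -> custom
```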

If we were to consider that approaching IT solutions is more like taking a physics exam, we might conclude that the outside assessment of 60% is a more accurate representation of the problem space than of the solution process. When looking at solutions with more unknowns in this way, we see that having real “problem solvers” on the solution delivery team is as important to reliability and repeatability as the proper choice of process and methodology, and whether a framework or package applies. It is these resources that do the deep thinking required to deal with the solution’s unknowns.

We spend much time focusing on tools (processes, methodologies, frameworks, scheduling, etc.) to promote better alignment of technology with business strategy, but our outside-observer metrics just don’t seem to improve at the rate you might expect. Perhaps our focus neglects the inclusion and refinement of proper problem-solving resources alongside the technology and business-strategy components.

A formula for better success? Problem-solving resources, technology, and business strategy must all work together!

Industry often diminishes the human factor by promoting stellar tools that seek to automate and ensure quality. I would suggest that industry research has clearly identified that those alone are not enough, and that perhaps our smoking gun for the poor industry metrics is both not understanding what they really identify (a tough problem space) and the missing leg of the stable solution triad (problem-solving resources).

If organizations seek to better develop and utilize their individuals with Deep Smarts, giving them an active role in solution delivery, they might realize steeper gains in success metrics.

In a future posting, I hope to fall back to physics to contemplate approaches for problem-solvers to address problem spaces with many unknowns.

-jw
