Posted by: wiltjk | August 2, 2016

How to achieve “A” Outcomes from “B” Teams

For many years I’ve had the pleasure of sporting around in a smart fortwo cabrio micro-car. By all U.S.A. standards, it barely even ranks as a car, so let’s bypass all the traditional criticisms and focus on how a three-cylinder micro-car can survive in an SUV-biased world.

When it comes to performance, payload capacity, and comfort, I personally rank a micro-car at about 87%, 89%, and 72% of a conventional vehicle. It is surprisingly spacious inside with a cargo capacity (12 cu ft w/passenger seat folded down) close to the trunk volume of many sedans. For eight years, I have accommodated most everything anyone really needs to do with my micro-car, simply weathering the many opinions others have of it (too small, slow, unsafe).

This brings to mind a comment an IT director shared with me recently: we will never be able to afford the level of developer talent of a Netflix or Facebook; essentially, we have to work with what we have. I thought long about this comment and realized that it describes nearly all IT organizations, as they must work with talent that may possess only 85% of the abilities of the industry trail-blazers.

However, it’s not only those industry icons that are turning out stellar solutions–many organizations have some exciting and leading-edge solutions resulting from much more limited resources. Why? What makes them so capable of delivering “A” outcomes?

Rule 1: Stop Comparing

Even with all its pep, a micro-car could never hope to challenge the performance of a BMW M-series in any form (well, perhaps parking), so it’s obvious that I wouldn’t even try. The same should go for our IT teams. Should we really expect them to deliver bleeding-edge cloud-based solutions by inventing solution patterns that amaze the world?

If this is not the case, we should stop comparing those bleeding edge organizations’ teams to our own. Just because Netflix is pioneering some new cloud service approach doesn’t mean our internal IT teams should try to compete with them. Sure, there will be pattern elements to learn and apply, but we really need to reconsider projecting the expectation for our teams to mimic systems that address workloads we will never experience or apply solution strategies for consumption patterns that are wildly different from our own.

When you openly or inadvertently talk about your talent as being sub-par, the best you will ever receive from their efforts will be sub-par.

When a team already considers themselves sub-par, you will hear statements like:

  • That won’t work here – evidence that culture has overrun best practices in the advancement of solution development.
  • You’ll have to teach me because that’s not how we do it today – evidence that micromanagement has stifled the empowerment to self-educate and learn/experiment.
  • We’ve tried that n-times before and we always abandon it – evidence there’s lack of consensus on a common strategy among leaders.
  • Because that’s outside my domain, you’ll have to deliver X for me – evidence that team expectations are so controlled, anything outside their domain (i.e., promoting a stretch goal) will cause a complete shutdown.

It’s almost like telling a minor league hockey team to mimic the playbooks of a leading professional football team. Sure, there are elements of strategy that may apply, but overall, it’s going to be oil & water.

So, stop!

Rule 2: Empower Greatness

While my micro-car may never be a performance icon, it has unbelievable maneuverability and pep. I suppose because it was conceived by Mercedes in 1972 and took 25 years to come to market, there is some level of performance DNA deep in its core. Perhaps that’s why it requires premium fuel: it lets me paddle-shift around tight corners and slalom around shopping carts with great precision, and with C-class-sized disc brakes, it can literally stop in a distance so short it defies logic.

To empower greatness in any team, look to focus on the following:

  • Increase Curiosity – encourage teams to be curious about what’s going on in the industry and encourage them to experiment (and fail!) with emerging ideas and patterns. While they may not adopt or use an emerging technology directly, they will certainly be able to apply their learnings.
  • Reduce Fear of Failure – provide sufficient think-time in tactical schedules to allow teams to experiment and fail; every failure results in multiple successes later. While building to a timeline is a reality, it too often shuts down creativity, resulting in only the best that mediocrity offers. By baking failure into the lifecycle, you will greatly reduce failures post-release.
  • Eliminate Restrictions – several cloud vendors and many organizations limit the ability to learn emerging platforms and practices (e.g., access caps and spending limits). The surest way to shut down creativity is to restrict access to resources. ING Direct in Australia purposefully removed its developers’ resource restrictions and has seen tremendous innovation in its solution deliveries. The additional cost to experiment and fail fast is easily returned with faster and more robust solutions.
  • Create Incentives and Rewards around Growth – publicly recognize those teams who have delivered greatness by stepping outside the box. Offer incentives to any individual or team who self-elevates their skills. Resources that invest in themselves accomplish much more than those who don’t.

Rule 3: Stop Assuming and Raise Expectations

Because a micro-car is small, everyone assumes it is an unsafe death-trap. According to our nation’s Fatality Analysis Reporting System (FARS), nine people lost their lives in micro-car crashes (0.02%) in 2014 (the latest year recorded). Compare this to a standard pickup truck which is assumed to be very safe at 6,041 lost lives (13.50%).

Likewise, even though a team may not necessarily invent the next storage pattern, never expect anything less than spectacular from them. When expectations are set high enough (i.e., just out of reach) and paired with sufficient “think-time” to grow the necessary skills and knowledge, great accomplishments are achieved. The self-satisfaction that accompanies teams who achieve these goals serves to fuel accelerated delivery in the next round.

A common mistake is to exasperate teams by setting unattainable goals too early. Start small and land your first stretch goal completely before imposing anything further. It’s a crawl, walk, run, fly approach.

Secret Truth: There are no “B” Teams – only “B” Players

Yes, it’s cliché, but the reality is that all organizations are populated with a mix of “A” and “B” players. The differentiating factor is neither skill nor ability, rather, it is attitude and drive.

When sitting on architect review boards, after a candidate presents their greatest achievement, I ask the following question: “If you were to do this project today, would you change anything?” One type of candidate will think for a moment and then answer, “No, I do not believe there is anything I would do differently in that endeavor.” Another candidate, without hesitation, will immediately go to the whiteboard and show how everything they learned would be used to advance their architecture to levels greater than they were able to deliver the first time around. Which of these two types are “A” players and which are “B”?

How leadership guides their talent pools will determine if they are encouraging mediocre or spectacular outcomes.

If leadership sets scope based on skill & ability alone, mediocrity will prevail. However, if there is an environment of curiosity that fuels attitudes of adventure (which drive self-advancement), spectacular outcomes will be right on the horizon.

Only by investing in the right things the right way can we produce teams and cultures of innovation. This inspires the best out of our talent, empowering them to deliver those “A” outcomes you only read about in big name journals.

Final thoughts

Early one Sunday morning, a colleague and I were at a stoplight before entering a freeway. I revved my three cylinders like a pro. The light turned green and we were off! His six turbo-charged cylinders took him so far in front of me that his black sedan appeared to be as small as a period on a page. It took some time for me to catch up, but I eventually passed him as he settled into the speed limit, and I was, let’s say, slightly over. When we arrived at our destination for a meeting, I joked about how fast he was able to completely lose me at the start.

His reaction was unexpected. He said, “Well, of course I could take you from the start, but when you passed me, that little micro-car is like a bullet!” It was certainly an unexpected outcome, but if you accept it’s actually an “A” player, it shouldn’t be so surprising.

 

 

Posted by: wiltjk | May 9, 2016

3 Exec Stakeholder Interactions

First: The Instant Snapshot

The first interaction generally addresses the current issue (fire) at hand. What’s been happening in the last week, days or hours. It’s an instant snap in time of current-state.

Stakeholders at this time are more interested in, “what have you done for me, today?”

Best response comes from the sage advice of Metrics Reporting, Inc.: listen until your ears bleed. Be honest with yourself. At this stage, there simply is not sufficient knowledge and direction for you to offer anything actionable that will drive results. So listen.

Second: The Composed Portrait

The second interaction is more composed. Immediate needs have already been communicated, so this is the time for understanding the current plan, strategy, and blockers that are in play for the next time period (generally, the next quarter).

Stakeholders at this time are more interested in, “how will you help me reach these goals?”

Best response is to repeat what you’ve heard to get validation that you genuinely understand (active-listening). Avoid immediate response to the question on goals until you’ve executed sufficient “think-time” on the matter. Instead, give a date when you can supply what and how you can help achieve their goals.

Third: The Masterpiece

The third interaction is a composite of aligning your goals response to the master vision and strategy. This is about long-term strategy achieved through deliverables focused on intermediate goals along the roadmap.

Stakeholders at this time are interested in, “how do we effectively drive transformation to achieve our vision?”

Best response comes from synthesizing vision into actionable outcomes along a roadmap that progresses the organization to a higher state of maturity (active delivery).

The ultimate goal in these interactions is to move from “what can you do for me” to “what can we accomplish together”.

Posted by: wiltjk | April 29, 2014

Less Search, More Find

Author’s Apology

The idea came while on holiday with my wife; I wrote it on our plane trip home so I could submit it to an internal "Think-Week" call, but unfortunately I missed the deadline for submission by 30 minutes. So, the blogosphere can now vet out my idea 🙂

Respectfully submitted to the world,
-jim

How to Interpret Lassie

When Lassie comes running up to the farm barking, Timmy’s adoptive parents, Paul and Ruth Martin, could begin crunching all of the data about the nearby wilderness and create a massive search party to meticulously canvass the entire county (big-data approach). Instead, they use less data and concentrate more on context, behavior, and emotion to determine Timmy is most likely in some form of trouble down at the abandoned mine. [Incidentally, of all his mishaps, Timmy was never stuck in a well.]

Context ranges from local observation (coal dust on Lassie’s coat) to greater breadth (Timmy’s questions about the mine at breakfast).

Behavior comes from parents knowing their child: the specific direction, "Timmy, I don’t want you playing near the abandoned mine, it’s not safe," is too often an open invitation to a 7-year-old.

Emotion shows its evidence in Timmy’s concern for the baby raccoons he mentions he saw near the abandoned mine after their mother was killed by a wolf (disobedience under good intentions).

Only four to five data points under the umbrella of context, behavior, and emotion are required to inform the Martins, with the greatest accuracy, that Lassie’s presence and barking (more contextual data) mean they need to get over to that abandoned mine, pronto.

The above illustration forms the basis of this thesis: modern computing systems need to stop out-thinking themselves by searching meticulously through mountains of data to present smaller, prioritized mountains of options a consumer must further digest. Instead, at the base-class level, they must factor in cues from contextual interpretation, behavioral observation, and emotional interpolation, working together to inform computational behavior.

This thesis is not about putting humanity into computational systems, rather it’s about computational devices and services reading the humanity around them to simply do what is being asked of them.

Big Data is Dead

Perhaps a better way to phrase this section is, "I most likely will be dead before we learn how to analyze big-data to gain insight in real-time."

There is no question there is value in the pursuit of insight from the many data sources we create and interact with. What’s missing, unfortunately, are actual outcomes from any insights!

For years I’ve checked the option to send data anonymously to Microsoft and other application providers under the "promise" of product improvement. I am still awaiting improvements from the many insights I’ve provided through my usage and, even more, the specific comments I provide when overly frustrated with computational behaviors.

I can only assume that the millions of data sources providing a continuous stream of "insight-data" back to application providers simply overwhelms any hope of gaining actual meaningful insight.

On the surface, it seems as though our computational systems are spending more and more time searching for what they will do next and less time finding what the user/consumer really wants.

From a public search engine to an enterprise portal, simply extracting context, behavior, and emotion from subsequent search strings can build greater insight into the desired results than any neural network or AI algorithm. Another illustration:

[Figure: successive search strings handled by a conventional versus a CBE-aware search engine]

To a conventional search engine, each successive search string is atomic and independently requires the same excessive computational load because it treats each request equally. The application simply is doing what it’s asked and has no concept of computational failure.

A Context/Behavior/Emotion (CBE) aware search engine, however, would quickly realize it’s fast attaining a state of computational failure due to its acute observation of the humanity in successive search strings. Less data, more accurate interpretation. Now adjust the computational loads accordingly.
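To make the idea concrete, here is a minimal sketch (in Python) of how a CBE-aware session wrapper might treat near-duplicate successive queries as a frustration signal and broaden its interpretation instead of re-running the same literal search at full load. The class name, the `backend` callable, and the `mode` parameter are assumptions for illustration only, not any real engine’s API.

```python
from difflib import SequenceMatcher

class CBESearchSession:
    """Illustrative sketch: track successive queries in one session and treat
    near-duplicate rephrasings as a sign the user is not finding what they want."""

    def __init__(self, frustration_threshold=0.7):
        self.history = []                          # prior queries = context
        self.frustration_threshold = frustration_threshold

    def frustration_score(self, query):
        # Behavior/emotion proxy: similarity of this query to the last few.
        if not self.history:
            return 0.0
        return max(SequenceMatcher(None, query.lower(), prior.lower()).ratio()
                   for prior in self.history[-3:])

    def search(self, query, backend):
        score = self.frustration_score(query)
        self.history.append(query)
        if score >= self.frustration_threshold:
            # Approaching "computational failure": broaden the interpretation
            # rather than re-running the same literal search.
            return backend(query, mode="broaden", context=self.history)
        return backend(query, mode="literal", context=self.history)
```

The point of the sketch is only that a few cheap signals drawn from the query history can change computational behavior; the heavy lifting of producing results stays in whatever backend already exists.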

Context – thankS for Nothing!

Multiple levels of context can be utilized together for greater CBE insight realization. In the Lassie story, coal dust on Lassie’s coat was an immediate, localized cue. The conversation at breakfast was an additional cue.

Let’s map out a more concrete example using the Touch Keyboard and Handwriting Panel (tabtip.exe) that I’m using to write this. Below are two messages with concentric circles illustrating the multiple levels of context from which the software [should] draw:

[Figures: two handwritten messages, each ringed by concentric circles showing the levels of context]

The word "thanks" was interpreted with advanced handwriting analysis, stored data, and corrective history, yet it still comes out wrong. Let’s see how contextual insight increases the accuracy for "thankS":

[Figure: contextual layers resolving the word "thankS"]

This is obviously an oversimplification for illustrative purposes; however, the point clearly remains: context eliminates the search of isolated data points and allows software to find the desired result in very few iterations.
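As a toy sketch of the same point, recognizer candidates can be re-ranked by how many context layers contain them, with the most local layers weighted highest. The candidate list, the layers, and the weights below are invented for the example and stand in for whatever a real recognizer would supply.

```python
def rank_candidates(candidates, context_layers):
    """Toy re-ranking: inner (most local) context layers carry the most weight;
    a candidate's score is the sum of the weights of the layers containing it."""
    weighted = list(zip(context_layers, range(len(context_layers), 0, -1)))
    def score(word):
        return sum(w for layer, w in weighted if word in layer)
    return sorted(candidates, key=score, reverse=True)

# Hypothetical data: handwriting candidates plus local-to-broad context layers.
candidates = ["Thanks", "thankS", "thanks"]
context_layers = [
    {"thankS"},              # immediate: corrections made earlier in this note
    {"thankS", "emphasis"},  # document: wording used in surrounding sentences
    {"thanks", "thankS"},    # history: how this user usually signs off
]
print(rank_candidates(candidates, context_layers))  # ['thankS', 'thanks', 'Thanks']
```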

Behavior and Emotion

A small vocabulary is all that’s required to categorize computational observation. The observation itself is less complicated than it sounds if you factor in and utilize the many sensors at our immediate disposal. Yes, if there’s a user-facing camera and microphone, the base CBE class should be watching and listening to report the state of behavior and emotion.

My Xbox One uses facial recognition to log me in. When it fails to recognize me (often, unfortunately), it apologizes and says it will try harder next time. Nice touch, putting humanity into the user experience; however, it is still not getting any better at recognizing me. By the time I grab the remote control and log in manually, the sensors in the controller reporting hostility and my facial expression and language reporting frustration should clearly be interpreted as an "epic fail".


A study of "The Platinum Rule" provides a most comprehensive guidance in the interpretation of behavior and emotion.  This can than be used to create the most effective computational behavioral response to contextual inputs.

Behavior and emotion paired with context provide the most direct and profound insight into a user/consumer’s immediate needs, to which a direct response from software can evoke the "find" elation most sought.

It’s not about "Her"

A recent movie exploring the connection between human emotion and artificial intelligence, based on device sensorial receptivity paired with compute-anywhere technology, has raised awareness of how near or far we are from bringing humanity into software. It and systems like Apple’s Siri strive to bring an emotional connection to everyday computing by mimicking human-like behavior in their responses.

This is not what CBE strives to do.

CBE is more focused on reading the user/consumer directly and responding with a focused computational (non-human) behavior.

Call to Action

Now is the time to break some development cycles away from traditional big-data, cloud-calculated insights and begin experimenting with a CBE framework. A CBE framework can give simple applications unprecedented insight into human behaviors through both passive and active (sensorial) observation.

These simple apps, today, may come in the form of CBE-based word suggestions on your phone, where how hard you press influences which words are offered.

This then grows into audio cues influencing word selection, and later facial recognition directing an appropriate form of response.

Placing the basics of these observations into a base-class CBE framework means applications need only make a call to ascertain the three values of CBE for which it can respond appropriately.
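A minimal sketch of what that base-class call surface might look like follows; the names, fields, and sensor list are assumptions, and real sensor fusion is deliberately stubbed out.

```python
from dataclasses import dataclass

@dataclass
class CBEReading:
    context: dict        # e.g., location, recent activity, prior corrections
    behavior: str        # e.g., "retrying", "hesitating", "abandoning"
    emotion: str         # e.g., "neutral", "frustrated", "pleased"

class CBEBase:
    """Sketch of the proposed base class: an application asks for one reading
    and decides how to respond; sensor integration is deliberately stubbed."""

    def __init__(self, sensors=None):
        self.sensors = sensors or []   # camera, microphone, touch pressure, ...

    def read(self) -> CBEReading:
        # A real implementation would fuse passive and active sensor input;
        # this placeholder always reports a neutral state.
        return CBEReading(context={}, behavior="idle", emotion="neutral")

# An application only needs the three values:
reading = CBEBase().read()
if reading.emotion == "frustrated" and reading.behavior == "retrying":
    pass  # e.g., skip the apology and fall back to manual login immediately
```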

The final question is not how to make technology search for answers on your behalf, but rather how we teach technology to actually listen to and observe you, to find where it is you want to be.

References:

[Wikipedia 16 February 2014] Lassie (1954 TV series), http://en.wikipedia.org/wiki/Lassie_(1954_TV_series)

[english-at-home.com 2013] English words that describe behaviour, http://www.english-at-home.com/vocabulary/words-that-describe-behaviour/

[english-at-home.com 2013] English words that describe emotion, http://www.english-at-home.com/vocabulary/english-word-for-emotions/

[Mars Cyrillo, CI&T February 16, 2014] The world we see in the movie Her isn’t far off, http://venturebeat.com/2014/02/16/the-world-we-see-in-the-movie-her-isnt-far-off/

[Ilya Gelfenbeyn, Speaktoit Feb. 15, 2014] After Her: Why our love of technology will remain unrequited, http://gigaom.com/2014/02/15/after-her-why-our-love-of-technology-will-remain-unrequited/

[Alessandra, Tony and O’Connor, Michael J. Dec 14, 2008] The Platinum Rule: Discover the Four Basic Business Personalities and How They Can Lead You to Success

Posted by: wiltjk | November 25, 2013

Post Modern IT

I have lived through several IT transformations over my career:

Punched-card – Mainframe
… to Teletype – Mainframe
… to Terminal – Mainframe/Mini
… to PC – Mainframe/Mini
… to PC – LAN
… to PC – Client/Server
… to Browser – Static Web
… to Browser – Server-side Web
… to Device – Restful Web
… to Device – Cloud

All this has driven us to Jeffrey Hammond’s Modern Application, which blends device, cloud, and big data into one user experience and has sparked our current move to Modern IT, where apps rule and the cloud does all our heavy lifting.

Success is measured by the size of your app store and your social following.

This model/phase should easily fuel the consuming economy for the next few years and as corporate enterprises onboard, even longer.

But… what’s next? What happens after we are flooded with more apps & services than we can consume? After the current cash-cow feeding on the “modern” movement wanes, where will that leave IT?

I propose that those long-range planners amongst us consider the reality that we will soon need to deal with the Post Modern IT era of our technological evolution.

Once we’ve completed establishing a stable, structured, and rationalized devices & services oriented ecosystem, our constituents will become bored and react with chaos, flux, and an unsatisfied need for change. The moment we believe we have once again gained control over our destiny, we will upset that cart seeking a post-modern solution to our further need for individuality and expression beyond what the best commodity computing can offer.

“Pimping our ride” with surface decoration will soon fall short, and post-modern models will be needed to pull a despondent user base from Modern IT apathy into a new era of Post Modern IT, where their individual interests, needs, and expression can be accommodated, built upon the great Modern IT infrastructure we are building today.

Are you one of our long-range thinkers? What will you see as a focus in our Post Modern IT era?

Posted by: wiltjk | November 4, 2013

Change Enablement

I am a true fan of Kotter and even ADKAR change management methodologies. When you know what change must occur, it can be effectively managed using the principles of these methodologies, which essentially manage the human behaviors necessary to make the desired change happen.

What about when you do not exactly know what that change should be? What about when ideas and capabilities are emerging from the unknown?

This is where we need to leave the stability of a given methodology which effectively drives known, needed change and enter a new area of individual empowerment to drive what I call Change Enablement.

Change Enablement is based on the idea of promoting Innovation driven by committed individuals.

  • Individual(s) are empowered to lead and drive their sometimes uncharted idea in the way they believe it needs to be run.
  • These same individuals are held both responsible and accountable for their innovative change for the duration of that change – whether or not they remain in a role related to this effort.
  • A timeframe for the change is established, and at the end of that timeframe a judgment is made to continue if sufficient forward motion has been achieved, or to abandon the effort if not.
  • A tried and abandoned effort will be treated as successful when lessons learned are able to promote greater education in the specific domain space of the change.

The value of Change Enablement as an innovation stimulus differs from traditional change management in:

  • Only innovative behaviors by individuals with great commitment are empowered – leading to higher probability for success (reduced risk in choosing which innovation to back).
  • Innovation is promoted even when the risk of failure hovers, as education through failure is far more valuable than education by rote.
  • Innovative change is a key differentiator as modern businesses compete in the fast moving world of devices, services, and profit from their offerings.

Posted by: wiltjk | October 3, 2013

Why Architect Certification is so Important

I was recently talking to a fellow architect whom I truly admire, Bruce Joseph. Bruce was sharing how, over his career, fellow architects at all the companies he has worked in and with have shown varying degrees of collaborative skill. In most situations, architect-to-architect collaborations fall somewhere on the scale from dismal to outright abusive.

Like Bruce, I have seen both those collaborations that work and those which fail miserably. My inclination is to examine which Human Dynamics skills promote greater collaboration, but Bruce got me thinking…

Those situations in which I see the greatest success are not due to some magic communication skill. Rather, they are often among architects who operate at an elevated level of maturity: when they encounter differing opinions and even differing knowledge, they work together to attain the best outcome. Let me be clear: this is rarely an outcome based on compromise; it is based on achieving the business goals at hand.

As I run through the inventory of positive outcomes I’ve experienced, the one thing I repeatedly see is that the architects who behave well generally are in their role with some form of Professional or Mastery architect certification. This could be Open Group, IASA, MCA, etc.

These certifications tend to recognize architectural skills that execute across a common taxonomy with a high level of repeatability of which one attribute or discipline is Human Dynamics.

When architects execute collaboratively, they achieve greater outcomes in far less time. They can’t do this by being “taught” the skill; it is a skill they build over time. While there will always be exceptions, architect certification is a “tell” that the potential and probability are very high your architects will work together to attain the greatest outcome.

Posted by: wiltjk | June 18, 2013

Is Your Cloud "Clean" or "Dirty"?

As we are ramping up to do everything in the cloud, I wonder sometimes how we all are going about it.

First is what I might suggest to be the "clean" approach: SaaS and PaaS provide a forward-thinking approach to tackle our business problems from a new, greenfield perspective. They require us to think of our approach using a mindset that is bound by the best intent of the whole concept of what cloud computing actually is (fabric, elasticity, pay-as-you-go/use).

 
This reminds me of a bright, sunny, snow-covered landscape in Michigan in January or February which is brilliant and breathtaking.

Next we have what I might suggest to be the "dirty" approach: IaaS, which stems from taking how we operate today on physical infrastructure and simply moving it directly to cloud-hosted virtualized infrastructure. Essentially, it is all about taking your existing mess on-premises and making it your mess in the cloud. We tend to like this approach because it is quick and, well, dirty – meaning we don’t have to think about it, because thinking is either a taxing or expensive process we should avoid.

 
This reminds me of all the salt-dirt-snow mix in Michigan in March which makes you yearn for Spring to sweep in and wipe out this horrible mess of a landscape.

I wonder if the same will be true of IaaS in the short time I expect it to take for SaaS and PaaS to become mainstream. If this prediction does come to fruition, I might suggest that we focus our efforts on things like Office 365 and Azure over the currently popular IaaS mainstream push we see today, because Spring for our cloud solutions may be just around the corner.

I was recently discussing the modern application with my good friend/colleague, J.D. Meier, and the recent Forrester definition really rings solid: “Modern applications are composed of systems of systems; you shouldn’t separate your mobile strategy from your cloud strategy, or your big data strategy” – Jeffrey Hammond, Forrester Research.
 
The article highlights that the modern application will consist of so much more than just a mobile user interface/experience; that the actual cloud and big data components behind it are going to be equally (sometimes, more) important.

In his work, Mr. Hammond cites six (6) tenets of the modern application:

  1. Omni-channel – modern applications are designed to work across all devices (natively)
  2. Elastic – successful modern applications are designed to spin-up or spin down as needed
  3. API-oriented – modern applications compose and expose APIs everywhere
  4. Responsive – modern applications are built to deal with the realities of a public network topology that is increasingly out of IT’s control
  5. Organic – modern applications tend to evolve more like a biological organism than a big bang product release
  6. Contextual – modern applications increase the use of the contextual data at their disposal, from sensor data, to machine-to-machine (M2M) data, to complex events

It is this last tenet that I think warrants consideration of a fourth piece of the puzzle. If the modern application is composed of devices, cloud, and big data, I propose we add one more component: Transient Data in Motion (TDiM). This is meaningful, short-lived data that carries the highest value to the experience. Yes, it may come from sensors, and it may eventually be stored in big data repositories (but doesn’t need to be). The key is that it is truly meaningful for only a short time and then, essentially, can be discarded.

In short, consider TDiM the repository and resource for the "situational awareness" of an application.
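One simple way to model TDiM is a store in which every record carries a time-to-live and is discarded, not archived, when it expires. The sketch below is illustrative only; the class, keys, and shuttle example are invented names, not an existing API.

```python
import time

class TransientStore:
    """Minimal TDiM sketch: every value expires after its time-to-live and is
    simply discarded, rather than being persisted to a big data repository."""

    def __init__(self):
        self._items = {}   # key -> (value, expiry timestamp)

    def put(self, key, value, ttl_seconds):
        self._items[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        value, expiry = self._items.get(key, (None, 0.0))
        if time.time() >= expiry:
            self._items.pop(key, None)   # stale: gone for good
            return None
        return value

# Hypothetical shuttle example: a position is only meaningful for about a minute.
store = TransientStore()
store.put("shuttle-42/position", (47.64, -122.13), ttl_seconds=60)
print(store.get("shuttle-42/position"))   # the tuple while fresh, None once stale
```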

Perhaps some real-world examples would help:

  • Waze is one of my favorites because it is the quintessential social navigation app. It adds active information from other Waze users to enrich your experience, providing simple information like the average speed on the road in both directions, where a traffic jam resides, and law enforcement locations identified through active and passive contextual means. All this information is very meaningful for a short duration, and there is no need to save it once it becomes stale.
     
  • At work, we have an amazing shuttle app that allows us, from our phones, to request a shuttle from one building on campus to another. The app "knows" your current location and you choose your destination. In the cloud, it selects the right shuttle to pick you up and then displays this on your phone with a picture of the shuttle, its number, and a live map of it coming to you. Once you get on, all of the above is no longer needed and goes away.
     
  • Glympse, like Waze, is another great example of TDiM at work.

The important point, here, is that transient data in motion is becoming a very important part of our modern application ecosystem and we can now begin to look for more and more ways to utilize this temporal information to enhance and enrich our solutions.

Obviously, in the personal app arena, it has taken off. However, in the enterprise setting, it can be used equally well when providing simple, relative information about your local environment to enhance consumption of your service offerings.

J.D. offered up the example of a coffee shop chain which wants to use a mobile app to communicate the number of customers vs. capacity at each of its stores so customers who want a private meeting over coffee can pick a store which is not overly busy.

I will be curious how others will look to use TDiM – please comment if this is in your future!

I have been evangelizing public cloud for several years now, trying to mitigate the fear, uncertainty, and doubt (FUD) that those threatened by cloud throw out.

In my talk, Cloud Computing-Who wins-loses-How to survive, I suggest we push the threshold for cloud concepts which means for sovereignty issues, we might mitigate all this corporate FUD by encrypting data prior to cloud storage – keeping keys in countries where this is an issue.

Behaviorally, enterprises have been content delaying this discussion with the assumption that a straight “data must reside in the country where laws dictate” response is all that is needed.

As with any disruptive concept, consumerization often leads the way into addressing the seas of uncertainty faster than the enterprise. The popularity of personal cloud storage services such as Dropbox, SkyDrive, iCloud, and G-Drive is now returning the discussion to who owns the data and who has rights to access it.

I suppose my thoughts on this are no different than before. If you encrypt your data before putting it into cloud storage, does it really matter?
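As a rough sketch of what "encrypt before you upload" can look like, here is a few lines of Python using the widely available cryptography package's Fernet recipe; the file name and the upload call are placeholders, and key management (storing the key in-country) is left to whatever process you already trust.

```python
from cryptography.fernet import Fernet   # pip install cryptography

# The key stays under your control (for sovereignty, stored in-country);
# only ciphertext ever reaches the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("quarterly-results.xlsx", "rb") as f:   # hypothetical file
    ciphertext = cipher.encrypt(f.read())

# upload_to_cloud("quarterly-results.xlsx.enc", ciphertext)  # your existing storage API

# Later, anywhere you hold the key:
plaintext = cipher.decrypt(ciphertext)
```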

For those who are certain breaking encryption is a drop-in-the-bucket, please be the first to decrypt this file for a free lunch at Applebee’s on me…

I look forward to the day forward-thinking providers will automatically do this encryption when information is stored in their cloud service against a customer provided key!

Posted by: wiltjk | April 18, 2012

Windows 8 Editions, Explained

The recently announced Windows 8 offerings are actually some well-thought scenarios. I will share my understanding of how these relate to consumers and businesses.

Home User on Desktop/Laptop/Device – Windows 8 makes sense, as most hardware providers will offer an Office bundle option. This, however, is definitely a poor choice for the consumer who wants connectivity to their corporate environment.

Enterprise-Ready Desktop/Laptop/Premium Device – Windows 8 Pro makes sense, as it is full-featured for the necessary enterprise hardening. Office will be licensed by the corporation, so no bundling is needed.

Consumer Devices Brought to Work – Windows 8 RT is the newcomer, facilitating a new and growing breed of devices. Naturally, storage will be limited, but consumption is the assumption for these devices. Bundled Office capabilities will provide necessary access to the many items created on the other platforms, but just as with SharePoint Web Apps, this form of access is not for mainstream workloads. Bundling these makes so much sense, as this is a great disruption to iOS and Android devices, where interfacing with Office documents is dismal at best.

Connecting to the corporate network, but not the domain as in the Pro edition, also makes sense as these devices should have access to every capability inside that they do outside, but no more. This is where RDP (Client) will play a big part in that it allows access to line-of-business and domain apps in a more secure manner.

Others may argue finer points in the details, but on the surface, these three editions make it easy for consumers and businesses to decide on their Windows 8 journey accordingly.
