

21 Mar

Deployment Blocker No. 1: Silo your talent

Here are the silos that organizations commonly have, silos which can deeply hurt the speed and success of a deployment:

Active Directory Administrators, who often (but not always) control the

Group Policy Administrators, who rarely talk to the

Desktop Administrators, who might participate in planning or more likely outsource image work to

Image Developers, who usually do not consult with the

Help Desk Technicians, who are painfully aware of but often cannot control the behavior of

Local Administrators, who are just trying to get the real work of the organization moving along

And all of the above hate the Security team.

Actually, there are good reasons for many of these silos.  Local Administrators out in field offices frequently regale me with war stories proving that if they didn't operate as independently as possible, everything would go down.  I am sympathetic to their plight, and wary of wading in and changing things too quickly on that front.

However, leaving the situation as is won’t improve matters and is a major blocker to any deployment.
These silos are simply not conducive to rapid and effective deployments.  They are as un-cloud-like as you can get, structurally hostile to collaboration and enterprise services.  The only thing that alleviates the  problem is when personal relationships exist between the silos.

Security teams who are earnestly trying to do their job are often the most divisive of all, despite the fact that they have the best of intentions and the most to lose should the environment get compromised.  In my experience the problem is usually one of two (and sometimes both): either the Security team has all the responsibility and none of the control (purportedly they have control, but not truly), or they have a really hard time keeping track of what the other groups are doing to these systems, because the tools are complicated and spread out.

As a result security teams often become marginalized and scared, or control freaks deploying intrusive scanning systems on every corner of the network.  The best security teams strike a balance and work hard to communicate with the other groups—but this is a rare and beautiful thing.

The Better Way:  I’ve learned over time that it is absolutely essential to get operations, security, and support staff in the same room as early as possible to hash out all the decisions in one pass prior to building the master image.  Often that is the first time they have really sat down with one another.  Not enough, but it’s a start.

Getting a meeting like that is challenge enough, but beyond it, think about how to re-organize these silos so the right communications continue to pass back and forth between people.  It has been different for each customer, since any personal relationships that already exist between the groups are usually where we have to begin this process of change.  There is plenty of turf protection and constant alarm over territory that might be lost, and everyone has to get through that for the greater good.

I often tell customers that wherever they end up with their re-organization of IT staff, make sure there are checks and balances in the end state.  A natural tension should exist between operations and security—they have to balance each other out.  Help desk ought to have enough power to call out problems and force Operations to deal with the root source of the problem—but rarely does.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

21 Mar

Deployment Blocker No. 2: Think of deployment as a one-off

The deployment of the new operating system is going to happen just once, right? So don’t plan for the longer term, beyond the immediate problem of how to roll out this major new system.

The penchant to treat deployment as a one-time event, instead of establishing a culture that welcomes regular deployments and cultivating a healthy, willing pilot group, is a huge mistake on many levels:

  • Your culture will see deployments as one-time events to suffer through once every 3-5 years, and will resist the next one all the more
  • Your staff and users will be out of practice the next time you have to deploy a major change
  • Your image(s) will get exceedingly stale and out of date within 2-3 months
  • The moment the image is laid down, it will start changing, and you are going to lose sight of the baseline you sought to establish if you don’t work to gain some basic control over when and how it changes.
  • Your user population will resist moving applications into the cloud, where they don’t directly install and control their configuration and updates.

Your baseline cannot be permanently set in concrete, or you will increase chances for:

  1. Attacks from the outside, which steadily weaken security:  hackers are infinitely creative, and you have to continually evolve to respond suitably to those attacks
  2. Attacks from the inside, derived from a need for flexibility:  Users need to feel you will adjust when necessary to accommodate their needs—otherwise they will revolt, and there is nothing worse than an uncooperative user population

The Better Way:  Embrace the inevitable evolution of your desktop configurations, and do what you can to lightly control it within a suitable range.  Start out by establishing regular drops of a new image with the latest updates embedded; often quarterly will work.  This gives everyone a goal to focus upon that is neither too static nor too dynamic.  With each quarterly image, which sets your new enterprise standard for the desktop (the new line in the sand), push out a delta package.  That package implements the same updates and configuration changes on your existing desktops, which still have the older image, to bring them in line with the new baseline.
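To make the idea concrete, here is a minimal sketch of the baseline-plus-delta bookkeeping described above.  Everything in it is hypothetical—the marker file, package names, and version scheme are illustrations only—and a real implementation would live inside whatever deployment tooling you already use, not in a standalone script.

```python
# Minimal sketch of the quarterly "waterline" bookkeeping described above.
# Everything here is hypothetical: the marker file path, the package names,
# and the version scheme are illustrations, not a real tool or vendor API.
from pathlib import Path

# The enterprise-wide baseline for this quarter, plus the delta packages that
# bring each older baseline up to it.
CURRENT_BASELINE = "2012-Q1"
DELTA_PACKAGES = {
    "2011-Q3": ["delta-2011Q3-to-2011Q4.pkg", "delta-2011Q4-to-2012Q1.pkg"],
    "2011-Q4": ["delta-2011Q4-to-2012Q1.pkg"],
    "2012-Q1": [],  # already at the waterline, nothing to do
}

# Hypothetical marker file the image drops on each desktop to record its baseline.
MARKER = Path(r"C:\ProgramData\Contoso\baseline.txt")


def deltas_needed() -> list:
    """Return the delta packages this desktop needs to reach the current baseline."""
    installed = MARKER.read_text().strip() if MARKER.exists() else "unknown"
    if installed not in DELTA_PACKAGES:
        # Unknown or pre-managed image: flag it for a full wipe-and-load instead.
        return ["full-reimage-required"]
    return DELTA_PACKAGES[installed]


if __name__ == "__main__":
    print("Target baseline:", CURRENT_BASELINE)
    print("Apply:", deltas_needed() or "nothing, already at the current baseline")
```

The point is simply that every desktop maps to a known baseline, and the path back to the current waterline is always a short, well-defined list of packages rather than a mystery.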

In short, set up a process that allows you to control your foundational baseline, a “waterline” you can flood across the enterprise.  This marks what you have decided is (for the moment) the ideal desktop base configuration, so when you test an application upon that baseline and it works, you know it has a high chance of succeeding in the production network.  Try wherever you can to change the culture to treat deployment mechanisms as an integral part of operations, a daily habit, much as is done today with public appstores.  Stand up small “point man” pilot groups (no more than 3-4 per group): actively cultivate and support these so you can constantly test updates in smaller, more controlled user communities.  Give them tender loving care, with a dedicated Help Desk staff to hear their complaints, and true responsiveness to problems they discover.  You do this because there is no reasonable way to emulate real life in the laboratory, and no better way to raise confidence, than to prove it works in the actual users’ production environments.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

20 Jan

Deployment Blocker No. 3: Fiddle with System Permissions

Some customers want to play around with system file permissions and lock their computers and servers down even further, often using guidance that is decades old, built for another operating system.  As mentioned in the earlier post, Deployment Blocker No. 5: Separate partitions for applications and/or data, unless it is specifically recommended by the manufacturer of the operating system, changing default system permissions is generally not a good idea.  Usually these types of lockdowns emerge from an overzealous security team, but we’ve seen plenty of Operations folks get into this bad habit as well.

There are a number of other problems with getting creative with file permissions beyond what the manufacturer of the operating system agrees is supported and necessary.  For Windows, consult with Microsoft Premier, the support arm of Microsoft’s Services groups.  Many of its engineers are Tier III, tied at the hip to the Microsoft Product Groups.  Premier has on some occasions told my customers they need to wipe and load all their desktops if they are to get support; otherwise the only thing Premier can do to resolve reported problems is “best effort”.  That can be painful for everyone.

Here are just a few of the issues which deployment consultants and I have seen directly impact a deployment once permissions were changed too aggressively:

  1. Group Policy fails to apply.  One customer had added file permissions that affected the area into which Group Policy applied the user’s local settings.  We’ve also seen cases where the volume of file permission changes is so great that logon times become unacceptably long.
  2. Applications fail.  An application works on the system without the customer’s creative file permissions, but fails on the locked down system.  In some cases, I’ve seen the customer’s field personnel throw their hands up in frustration and change the critical folder Program Files to give Everyone Full Control.  Whoops. Now it’s less secure than it was out of the box.
  3. Major services fail to start.  We’ve had a wealth of problems with servers, among others, where services were shut down by the new permissions or operated poorly.  Among the problems we’ve seen:  replication that chokes, and web services that won’t start.

The Better Way:  Don’t do it.  The kinds of headaches these changes cause are simply not worth the effort.  Ask your team what their justifications are for making such changes.  Consider whether your organization is willing to toss out the years of testing Microsoft has done in its application labs with thousands of COTS applications, testing done to make sure the out-of-the-box security permissions and settings do not cause application compatibility issues.  Keep in mind the current security model was developed after a great deal of feedback from security standards bearers who get early looks at the operating system.  They have already worked, and continue to work, to identify potential attack vectors so Microsoft can close them off, and second-guessing them is likely to open more holes than it plugs.
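If the security team still wants visibility into file permissions, a safer alternative is to document the defaults rather than change them.  Below is a minimal sketch that snapshots existing ACLs with the built-in icacls /save option so they can be reviewed and diffed over time; the folder list and output location are illustrative assumptions, not a recommendation from any vendor.

```python
# Minimal sketch: rather than editing default ACLs, snapshot them so the
# security team can review (and later diff) what is actually on disk.
# The folder list and output directory are illustrative; /save, /t, and /c
# are standard options of the built-in Windows icacls tool.
import subprocess
from datetime import date
from pathlib import Path

TARGETS = [r"C:\Program Files", r"C:\Windows\System32"]  # folders to document
OUTPUT_DIR = Path(r"C:\Temp\acl-snapshots")              # hypothetical location
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for folder in TARGETS:
    name = folder.rstrip("\\").split("\\")[-1].replace(" ", "_")
    out_file = OUTPUT_DIR / f"{name}-{date.today()}.txt"
    # /save writes the existing DACLs to a file, /t recurses, /c continues on errors.
    subprocess.run(
        ["icacls", folder, "/save", str(out_file), "/t", "/c"],
        check=False,  # some system paths will deny access; that is expected
    )
    print(f"Saved ACL snapshot of {folder} to {out_file}")
```

A snapshot like this gives the security team something concrete to audit, without touching the permissions Microsoft shipped and tested.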

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

2 Jan

Deployment Blocker No. 4: Promise to capture all data everywhere

Like the promise to bring all applications over to the new environment, this is a promise that cannot be kept, especially if the users have been running unmanaged, which means they are storing their data—um, where? And does anybody  really know which data is important to keep in these unmanaged or even the managed environments? Not bloody likely.

There is no data transfer tool in the world that can read people’s minds.  And if there was, you probably wouldn’t really want to know what was in there.

Accordingly, there are two methods that data transfer tools use:

Location specific:  Instruct the data transfer tool to go to a specific location, and back up all files there.  For instance, Microsoft’s User State Migration Tool by default will automatically capture all profiles under c:\Users, or C:\Documents and Settings, and (not surprisingly) is excellent at capturing all the preferences of Microsoft products, including Microsoft Office.

Vacuum cleaner:  based on a list of file extensions you give the data transfer tool, it hunts through the entire disk, or disks, to find files with those extensions no matter where they exist, gathers them up, and dumps them into a new target directory.

The problem with the location specific approach is that in unmanaged environments, where each individual user has the ability to save anywhere they want, you don’t really know which locations to direct the data transfer towards.

The vacuum cleaner approach is effective, but you have to know which file extensions you need to capture—and if you don’t really know much about the applications users are running, you aren’t going to be able to guess what those file extensions are (see the mind reader comment above).  The vacuum cleaner approach can also produce an exceedingly messy result on the other end.  The target directory on the new system can be organized with the same folder structure the original file was discovered in on the old system, and this is helpful, but it is still rather confusing to many users.
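For illustration, here is a minimal sketch of the vacuum cleaner approach, preserving the original folder structure in the staging area as described above.  The extension list, source disk, and staging path are hypothetical, and a real migration would rely on a dedicated tool such as USMT rather than a hand-rolled script.

```python
# Minimal sketch of the "vacuum cleaner" approach: sweep a disk for files
# matching a list of extensions and copy them into a staging area, preserving
# the original folder structure so users can recognize where files came from.
# The extension list, source disk, and staging path are hypothetical.
import shutil
from pathlib import Path

SOURCE_ROOT = Path("C:/")                                # disk to sweep
TARGET_ROOT = Path("D:/MigrationStaging")                # staging area on another drive
EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".trm"}  # what you *think* matters

for path in SOURCE_ROOT.rglob("*"):
    try:
        if not (path.is_file() and path.suffix.lower() in EXTENSIONS):
            continue
        # Recreate the original layout under the staging area.
        destination = TARGET_ROOT / path.relative_to(SOURCE_ROOT)
        destination.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, destination)
    except OSError:
        # Locked or inaccessible files are skipped; a real tool would log these.
        pass
```

Even in this toy form you can see the weakness: everything hinges on that extension list, which is exactly the knowledge an unmanaged environment does not have.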

Worst of all, regardless of which method is used (or both), application preferences may get lost if you don’t capture them too.  It is not always blazingly obvious where those files or registry entries are, and it can take quite a bit of research to track them down.

Then there are certain customers that do have legal or contractual requirements to maintain data for extended periods of time.  But even in those situations, I have occasionally discovered that the regulations are a lot less onerous than the lawyers’ interpretation.  People who are unfamiliar with the tools think it makes sense to gather up everything “just in case”, not realizing the huge hit to time and storage this will cause.

The point is: please don’t promise the moon here, because users will inevitably be disappointed somewhere along the line.  Until you get to a managed environment, you cannot in good conscience guarantee capture of all the data.

A Better Way:  Set expectations with the users up front, and push as much of the data backup and cleanup responsibility toward them.  Only they know where their important data is, so work with them to help them put it where the data transfer tool will capture it.  Also, after identifying all the applications that will be moving to the new environment, research where those applications store personal preferences and what their data extensions are (e.g. *.trm).  If an application is poorly written and stores the user’s preferences in the system or Program Files directories rather than in the user’s own profile, you’ll have to dig that fact out application by application.  Finally, use file copy technology for the image, so you can move data in place on the local partition as you wipe and load to the new system, rather than being forced to move it over the network to a remote location.  I’ll write more on this shortly in another post; it is a huge time saver and limits the risks of massive data transfers.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

11 Dec

Deployment Blocker No. 5: Separate partitions for applications and/or data

Some of my customers swear by the value of having separate logical partitions on their desktops.  Typically, they want to establish two partitions, one for the system files, and one for the data.  Some customers go so far as to separate applications and program files from system files, upon a third partition.

I’m not convinced there is any value in this, and rather doubtful there ever was.  The argument for partitioning rests mainly on two ideas:

1)   Security will be improved because you can protect the valued assets (data or executables) more easily:  one protects the partition with additional file permissions

2)   Deployment will be easier because you just wipe and load the system partition, leaving data in place.

Back in Windows NT days, there was indeed a slight security advantage.  The Everyone security context included the Anonymous SID, and this was potentially dangerous.  However, starting with XP SP2, this issue was eliminated.  With Vista and Windows 7, file permissions tightened up to the point where it is probably a lot more secure to keep everything on the same partition.  The main problem is that when people start shifting things around in order to partition, someone has to start playing around with file permissions.  This is where the human element leads to inevitable slip-ups.

And what about the argument that multiple partitions simplify deployment?  This one always floored me, because in my experience it greatly complicates deployments.  There are a host of issues, but I’ll summarize the two top assumptions behind this argument, and why I don’t think they hold much water:

  1. Application and data information can be neatly sliced and diced into their respective partitions.  I wish this were true, but current standards don’t force developers to be so careful.  Unfortunately, it takes a lot of research with each application to determine precisely where the application information, preferences, and key files are stored.  Data keeps popping up on the system drive, hard coded by those legacy and even newer applications which barely acknowledge Windows profiles exist.  Your packagers will be spending a lot of time on all the little details that this partition decision kicks up, time that would be better spent upon developing packages more quickly and securely.
  2. The system partition can be wiped and a new operating system loaded, and everything just works.  No, sorry.  Again, I wish this were true.  All the applications need to be re-loaded and re-tuned because the load information is in the registry, not just in the Program Files folder.  Profiles have to be primed to point to the new data location.  Oh, and your users had better be running with User, not Administrator, rights, because otherwise you’re going to have no idea where the data and applications are really stored.

Additional partitions come with a lot of hidden support costs and do not guarantee a clear security boost.  They add a lot more work for packagers and support personnel, and they lead you down the path of an exceedingly custom-crafted solution for the enterprise, which is always an invitation for trouble.

I’ve also observed multiple partitions leading to unexpected incompatibilities with updates, with the installation of new applications, and even with key components of Windows, including remote access.  This led to unnecessary fire drills for already overworked packaging teams, who had to jigger applications that would have worked right out of the gate had the partitioning NOT been added to the mix.  Try hard to resist the temptation to partition.

The better way:  Once you get to managed desktops, you can start identifying important data, and work with your users to extract and move it into a private or public cloud.  Use application virtualization to distribute your applications wherever possible, to decouple the applications from the system and stream them from a centrally controlled point.  Voilà:  partitioning the smart way.  Just understand it takes some time to move any traditionally organized enterprise to that model.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

6 Dec

A Quick Note about Delayed Posts

Apologies to those who started out with me on this, and who may have noticed the curious fact that the flow of posts was suddenly interrupted for over two months.  I have a family medical emergency I am trying to attend to these days.  Thanks for being patient.  I’ll continue to post whenever I can out here.  More to come!

6 Dec

Deployment Blocker No. 6: Stay in denial–pretend to run managed

I cannot count the times I have sat in a kick-off meeting while the IT staff tells me they run a managed environment, yet cannot say with any certainty how many desktop or even server applications are out there.  They have no packaging team or process.  The inventory lists I am handed are stale.

By the end of these meetings these customers admit they have no true control of large portions of their network.  In fact, fiefdoms are multiplying out there.  I’ve had customers tell me their responsibility stops at the wall jack, or the internal firewalls they felt obliged to throw up to protect their network, and yet with a straight face declare they are running well (enough) managed systems.  Or that the desktops don’t really matter because their servers are secure.

I don’t buy it.  If the end points that consume and/or contain the sensitive data are not secure, you are not secure.

It particularly alarms me when the security team at these customers calmly informs me it has never had a successful attack.  Such confidence is not helpful in a security team.  I prefer security personnel who are nervous, with a haunted look in their eyes, sleep-deprived people who bite their nails.

If you see zero successful attacks in an unmanaged environment, you clearly aren’t monitoring closely enough, and the hackers already own you.    My favorite quote came from a member of a security team who made this observation about his environment:  “I can’t track attacks if I don’t know what is normal out there, and nothing is normal out there right now”.  Tell me about it.

Why does pretending to be managed act as a blocker to deployments? Because a false sense of security arises in this kind of environment, and planning for the unpredictable parts of the environment is inevitably shortchanged.  Once these customers actually get out in the field and try to deploy, they hit all sorts of unexpected problems.  On the security side, it is impossible to determine whether events are just another breakdown in the unmanaged environment (was that traffic spike a Denial of Service attack?) or a serious threat to the enterprise, a sign of real trouble.

The better way:  Best to admit where you stand, and deal with it directly.  Identify the reasons why users are revolting and why they insist on running with local administrator rights.  Often these users have legitimate grievances or concerns, such as XYZ critical application doesn’t run well when logged in as User.  Address those problems, remove all their objections.  Work to move from unmanaged to more fully and more consistently managed systems, wherever you can.  Don’t get too draconian too fast, just take it step by step, group by group, and constantly monitor to make sure newly established standards stay firm.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

31 Oct

Deployment Blocker No. 7: Continue to run unmanaged systems

By unmanaged, I mean a system in which users have the right to load any application they want on their local desktop.

If you have an enterprise that is more than a few dozen systems, or you have any important data to protect, the best business decision you could make is to move to a managed environment.  It saves  money and it saves time.

But it isn’t cool to control…or is it?

Think of it this way:  if you had a fleet of company cars, and handed them out to your employees to use, would you permit them to start changing the color, seats, air bags and brakes, or cut off the roof to get a nice cool convertible?  Or perhaps come back into the company car pool having converted the original car to a low rider?

If not, then ask yourself what you are doing with the company computers.  Why be so casual about setting standards for what software runs in your enterprise?

We’ve seen customers who struggled for years to deploy.  My observation is that when customers started to make their way towards a managed environment, they abruptly accelerated their adoption of new technologies.  These customers, who had always lagged behind by years, were now deploying within six months of release to manufacturing.  Their users were clamoring for the next upgrade.  Moreover, the miracle was that many of these were our largest customers, who had to move to managed out of sheer necessity.  We’re not talking about a paltry few thousand machines, but hundreds of thousands of machines.

Moving to managed desktops can and must be done if you are serious at all about using Information Technology as a true tool.  Managed desktops change the game, allowing you to prove to management that IT isn’t just an equipment distributor, but much, much more.  Managed desktops can establish standards to reform the way business is done.   You can begin to study, respond to, and then help employees work in a smarter and more integrated way.  You can’t do that if the systems are changing beyond recognition, the minute they hit your Ethernet.

The better way:  Use the move to the new operating system as an opportunity to clean house and set new standards.  Do not give users local administrator rights unless absolutely necessary (I will blog later on who legitimately passes that bar).  Make sure you provide users what they need to do their jobs.  Implement as much self-service as possible to allow them to feel that within the boundaries you set, they can still install any application they need at will.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

25 Oct

Deployment Blocker No. 8: Brew your own deployment tools

The decision has been made that nothing out there can do your  deployment just the way you want it.  Your staff starts to construct a set of custom scripts.  It is fabulously successful: it does exactly what you want, and executives are terribly impressed with your custom tool.

Well, sort of, because the developer is still working on the new version which is going to solve a little problem the installers are having, and oh, there was an update to that component that broke the script, so they are working on that…

Stop right there.  Please.

Ten years ago, I might have agreed to this approach, after looking over the operation and making sure the team had the rigor and organization to keep the scripts up to date over time.  I have written a lot of custom scripts and routines to speed deployments over the years (and picked up a bunch from others, such as The Deployment Guys or the Microsoft Script Center).  There is a certain pride of ownership.

But times have changed, and I now concentrate on keeping custom scripts to a minimum.  Deployment tools for Windows are extremely mature now, and I’ve yet to see a good  excuse to walk away from what is now freely available, in order to stubbornly brew your own.  Don’t get me wrong:  some coding is always necessary to get the deployment streamlined and to smooth out rough corners.  But the framework should not be built from scratch.  If there is a tool that does 90% of what you need, start with that and add the other 10%.  This is one case where it pays to resist your team’s more excessive creative impulses, or you will be on your own during the deployment, unable to compare notes with the rest of the world.

Why wouldn’t it be a good idea to have something that is custom molded to your needs, when you’ve got the talent to do it? Some reasons:

  • Custom-crafted code is hard to maintain, and difficult to take into the next generation.
  • Your talent can get hit by a bus tomorrow or more likely, leave for a better job.  Do you have full documentation? Can a brand new developer walk in and pick it up tomorrow?
  • Who is going to support this code? The help desk will have to be somewhat familiar with the installation routines when things break down.  And they will break down.  Never trust a developer who says there are no bugs in their code.
  • Who is going to test and QA this code as it evolves? It is extremely bad practice to have the developer do his or her own testing.

The better way:  Use the latest tools that are available and keep abreast of the upgrades to those tools.  Give the manufacturer of the deployment tool noisy feedback when you notice missing or clunky features.  Train your staff to use these approved tools to their fullest extent.  Be wary of anything that is leading down a path that is not being hiked by anyone else.  Push your staff to research the Internet heavily for existing scripts that have a track record, and utilize those wherever you can, to counter the “not made here” syndrome.  Invest in a packaging team instead of developers.  It is a much better use of your money.

This is part of a ten part series of blogs “Top Ten Deployment Blockers”

20 Oct

Deployment Blocker No. 9: Carry all applications over into the new environment

Tempting as it may be to promise users that they can hang on to every single one of their old, familiar applications, there are three problems with this offer.  The moment you announce you will carry everything over from the old environment into the new environment, the following will happen:

1) High priority applications, the ones that really drive your business, get precisely the same weight as lower priority applications.

2) Many applications which are simply not appropriate, and never were appropriate, will get dragged into the new environment, despite severe security issues or obvious duplication of functionality.

3) You’ve tipped off stakeholders that you don’t really believe in change.

Application decisions and testing are where the rubber meets the road.  Many environments begin the project by telling me they think (key word: think) they have thousands if not tens of thousands of applications running on their desktops.  I’ve had customers draw up elaborate project plans showing it will take them several years to test each application, one by one, on Windows 7, and then throw their hands up in desperation.  We’ve done scans of systems at several of these large customers (50,000+ desktops) and discovered as many as 80,000+ unique executables installed on the desktops.  None of this is unusual.

But here is what I noticed once we dug into what was going on:  the vast majority of our customers, once they really sit down and comb through a recent, accurate software inventory, almost immediately drop between 60% and 80% of the applications from their list.  They do this by first eliminating executables that are unique to Windows XP, replacing them with equivalent built-in Windows 7 features or Windows 7 compatible applications.  Then they use four common sense rules of engagement:

  1. Replace older versions with a single (preferably the latest) version of each application
  2. Agree on a minimal seat count below which the business will not invest in testing and supporting an application.   For instance, the business will not support anything installed on less than ten desktops, or fifty, or one hundred.
  3. Eliminate duplicate functionality, e.g. multiple versions of Antivirus or compression tools, or utilities whose capabilities are already integrated into Windows 7 (examples: zip compression, CD burning)—here is a chance to establish a standard for the enterprise
  4. Demand installation media.  Require application stakeholders, those who want the application, to produce legitimate licensed installation media

The first two rules alone frequently eliminate over half of my customers’ applications, with few to zero objections from the various stakeholders.  The third is really a matter of whether the customer wishes to establish some sort of enterprise standard for the most commonly utilized applications—some do, but some shy away from that despite clear cost savings.  I’m big on standardization wherever it is possible, because I’ve seen such enormous savings result from it.

Do not demand the fourth step (gathering installation source media) before doing 1 through 3—cull the list before making people go to the effort of finding the CDs, or you will simply annoy them.

Our experiences show that asking for installation media really drops the numbers.  I have noticed that some of the most vocal application stakeholders drift away quietly when asked for the installation media. Suddenly it isn’t so important after all.

If an application is important, the customer ought to have the installation media.  For those who continue to insist that they must have a missing application on the new OS even when they cannot find the installation media, it is probably a true need.  Examine the reasons, and if it is agreed it is a priority, have a cache of funds ready to help people buy or upgrade to the latest compatible version.

A better way:  Start with a fresh inventory, agree on basic rules and policies for software purchases going forward (e.g. it must pass certain certifications, such as the Windows Logo program), and apply those rules rigorously.  Scrub the list with the basic common sense rules listed above, and start controlling what new applications come into your environment in the future, if only in self-defense.  Share the list frequently with the application stakeholders so they can see how it is shaping up and can protest if they see something missing.  Keep things open and transparent as to why something got dropped—never delete items from the list, just filter them out and clearly explain the reasons for each filter.  Don’t allow the application stakeholders to hijack the process, making it an exercise in personal preferences.
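To show how mechanical the first two rules of engagement can be, here is a minimal sketch that collapses an inventory to the latest version of each product and filters out low-seat-count items while recording the reason for each filter, rather than deleting anything.  The field names, sample rows, and seat-count threshold are hypothetical.

```python
# Minimal sketch of scrubbing an application inventory with the first two
# rules of engagement above: keep only the latest version of each product and
# drop anything below an agreed seat count, recording the reason for each
# filtered item. Field names, sample rows, and the threshold are hypothetical.

MIN_SEATS = 10  # the business will not support anything on fewer desktops

inventory = [  # rows as they might come out of a software inventory scan
    {"name": "WinZip", "version": "9.0", "seats": 420},
    {"name": "WinZip", "version": "14.5", "seats": 1800},
    {"name": "Obscure CAD Viewer", "version": "2.1", "seats": 3},
    {"name": "Acrobat Reader", "version": "10.0", "seats": 35000},
]


def version_key(version):
    # Naive numeric comparison; real version strings can be messier than this.
    return tuple(int(part) for part in version.split(".") if part.isdigit())


# Rule 1: collapse multiple versions of the same product to the latest one.
latest = {}
for app in inventory:
    current = latest.get(app["name"])
    if current is None or version_key(app["version"]) > version_key(current["version"]):
        latest[app["name"]] = app

# Rule 2: drop anything under the agreed seat count, but keep the reasons so
# stakeholders can see why an item was filtered out rather than deleted.
kept, filtered = [], []
for app in latest.values():
    if app["seats"] < MIN_SEATS:
        filtered.append((app["name"], "only %d seats, below %d" % (app["seats"], MIN_SEATS)))
    else:
        kept.append(app)

print("Keep:", [a["name"] for a in kept])
print("Filtered, with reasons:", filtered)
```

Publishing both lists, with reasons attached, is what keeps the process transparent and keeps stakeholders from feeling their applications simply vanished.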

This is part of a ten part series of blogs “Top Ten Deployment Blockers”
