Resolving Amazon Echo registration error 10:2:5:60:1

January 29, 2017

When trying to register a new “Amazon Echo” on my home network, I received the following error:

There was an error registering your device.  Visit Help for troubleshooting tips.  Error 10:2:5:60:1

I followed all of the usual steps (reset the device, reset the router, uninstall/reinstall the mobile app, etc.), but nothing worked. I also set up a second Wi-Fi network to see if that was the issue.

I should point out that I already had an Amazon Echo Dot that was previously set up, still working on the same WiFi network, and had no issues registering. I took that offline as well to rule it out.

I then removed all other WiFi devices from the network.  It still didn’t work.

I called Amazon support. They were great, but it still didn’t work. So they sent me a new one, and I received it in two days. They’re awesome.

What wasn’t awesome was that the new device had the SAME ERROR!  So now I know it IS the network.

Here’s what I found: with all other devices off the network, I checked the logs (Advanced > Logs) on my cable modem (Netgear C6300, firmware V2.01.14) and saw that it was reporting a DoS attack coming from the Echo. Below is the entry I saw:

[DoS attack] Port Scan PROTO:UDP SPT:50395 DPT:123

Note that destination port 123 is NTP, so the Echo was most likely just trying to sync its clock during setup, and the router was misreading that burst of UDP traffic as a port scan. The firewall didn’t have any restrictions on which devices could join the network, nor was it blocking outbound traffic, so this was weird. I found a setting to stop the router from treating this traffic as an attack. I crossed my fingers and gave it a try.
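
If you want to confirm that UDP port 123 (NTP) traffic really is being dropped before you touch any router settings, any Windows PC on the network can send a few NTP queries of its own. This is just a quick sanity check I’d suggest, assuming a public time server such as time.nist.gov is reachable from your region:

      w32tm /stripchart /computer:time.nist.gov /samples:3 /dataonly

If the router is eating NTP, those samples come back as timeout errors instead of time offsets. With that confirmed, on to the fix.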

First, log in to your router (typically http://192.168.0.1/), then click on the “Advanced” tab. Click “Setup” in the left menu, then click on “WAN Setup”. Check the box for “Disable Port Scan and DoS Protection”, then click “Apply”.

Once your router has restarted, go through the normal Amazon Echo setup process. Your Echo should now register and work. After it’s working, go back into your Netgear settings and uncheck the box we enabled earlier. The Echo should still work AND you’re once again protected by Port Scan and DoS Protection.

I hope this helps you in your situation.

Categories: Amazon Alexa, IoT, Technology

To Zune or to Spotify… That’s the question

July 26, 2011

When I heard that Spotify was coming to the United States I thought, “What’s the big deal?” With that question in hand, I went to the site to get notified when the service was available. I was lucky enough to get approved for early access only a few days later. After using the free version of the service, I quickly saw how it could be a game changer.

First off, I’ll point out that I’ve been using the Microsoft Zune service since it came out. Full disclosure: I worked for Microsoft at the time Zune launched. Regardless, it’s a quality product.

To keep this post short, I’ll get to the basics.

Zune Pass: http://zune.net

  1. User Interface is easy and beautiful.
  2. Great integration with Windows Phone (I have a test phone and it works like a charm), Xbox 360, PC, and Zune HD (a cool but limited device).
  3. Each month I get to keep 10 songs as MP3s; everything else is protected (WMA format), but as long as I keep paying the subscription I can play as much as I want.

Spotify: http://www.spotify.com

  1. Free version allows for (free) streaming of music.  (Awesome)
  2. Paid versions allow access via your mobile device. I have an iPhone and it’s pretty nice. It also works on other mobile platforms such as Windows Phone and Android.
  3. Streaming is super fast.

Winner: To be determined, but right now I’m starting to think I’ll go the way of Spotify unless Zune makes a few changes. If Spotify can get away with a free tier, why can’t Zune? Plus I like being able to keep 10 songs as MP3s each month.

Categories: Mobile, Technology

Great day at the beach

August 1, 2010

Today was a great day at the beach. I think we spent about 4 or 5 hours there enjoying the waves, the sunshine, and hanging out with some of our old neighbors who are still very close friends. Today I learned how to skimboard. I wasn’t that great at it, but it was my first day and loads of fun.

You know, as the kids get older, I find it more fun every day doing things outside. I find that they generally want us parents around, and doing cool stuff is always a plus. Take these video stills from some of the footage we shot today.

They loved getting involved and making something cool. They’re the main stars of my planned summertime movie. I’m getting great stuff from them. Sometimes I wish we had the camera rolling all the time, because their best stuff comes in unplanned moments. Come to think of it, so does their worst.

Categories: Beach, Family

Conrad Blogging 3.0

July 30, 2010

While this is the same blog URL (http://agramont.net) that I’ve been blogging from for a while, I’m taking a new direction in how and where I blog. In the past my blog has been more about professional topics and content. While that was a great way to get out interesting information about the products and/or services I worked on, it didn’t give me a good place to share other types of information. While I did use spaces like Facebook to share very personal content, that content could never really be shared with the awesomeness of the Internet world. Plus, my previous blogging was done on a shared hosting service (http://ASPnix.com) and I had to both deploy and maintain the blogging software (Community Server), which took too much of my time. So now I’ve moved over to WordPress.

So where is all of that previous content? Well, for now, it’s nowhere. But I do have plans to take the most popular content and stick it in this site’s new archive space.

So here are my new rules for content:

  1. Post all corporate/business content on my company’s blog site.  It’s really the right place for that information.  Plus, if I move on, I don’t have to maintain that content or service.
  2. Post all really personal information on Facebook.  Facebook for now, although I’m starting to fall out of love with them because of their weak and inflexible security model.  They clearly haven’t learned from other big players such as Microsoft and Google.  But I haven’t found a better replacement.
  3. Picture or Video for all posts.  Having a blog post with just text is getting pretty old.  It’s more fun when there is a picture or video to help tell the “Story” and give some visual stimulation.  Thanks to the Neistat Brothers for the inspiration (more on them later).
  4. No topic is off limits.  It’s my blog and I can blast if I want to (within reason).  I’ve been holding back for a long time many of my views on politics, products, and other stupidness.  I’d like this to be my truly personal space to document that.  Is it a risk?  Sure, but sometimes you gotta have your voice heard….right?
  5. Listen, Think, and Share.  I’ve found that some of the best blogs listen to their readers and have an open and honest dialog.  I’d like to try that out, but first I’ll need readers.  Like you! (right?)

OK, enough of that…. Let’s have fun now!

Here’s a super short video of my lovely wife Pam “striking out”

Categories: News

Virtual Desktops and Federal Desktop Core Configuration (FDCC)

July 31, 2009

A few years ago the United States Federal Government, specifically the U.S. Office of Management and Budget (OMB), created a PC standard for the entire government to follow. It provided over 300 settings for Windows XP and Windows Vista in order to create a standard configuration for all computers. This is what is now known as the Federal Desktop Core Configuration (FDCC). There are a ton of resources on the Internet, mostly on .gov sites, that provide guidance on what these settings are and how to audit them using publicly available tools.

As with any IT department, defining the policy is one major leap. But to some degree, that’s the easy part. Now you must deploy that configuration and ensure it stays enforced, not to mention audited and reported on. With the U.S. Government, having a mandate from the OMB is pretty powerful, which makes this problem space even more critical.

The FDCC is a perfect fit for Virtual Desktops from a deployment and management perspective. Virtual Desktops are all about OS and application standardization and consistency. Think of having a pool of available OS instances, just waiting for a user to log in from a remote device, which could be a hardened thin client or a legacy PC. All of those OS instances are based on a “Master Image” that has been fully configured with the FDCC policies. When a user logs in, all of their applications are delivered via “Application Virtualization” (e.g. Microsoft App-V or Citrix XenApp), which stays abstracted from the underlying “Master Image”, thus keeping the desktop within FDCC standards. All of the user’s data and application data is stored on a centralized store (e.g. a SAN), which again keeps the “Master Image” clean of user data and provides additional benefits for the user and IT (e.g. daily backups of all user data).
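
As a rough sketch of what baking the policy into the “Master Image” might look like (the file paths below are hypothetical; the actual FDCC settings are published by NIST as Group Policy objects and security templates), the built-in secedit utility can apply a security template and later audit the machine against it:

      REM Apply a security template to the master image (paths are assumptions)
      secedit /configure /db C:\Windows\security\fdcc.sdb /cfg C:\Templates\fdcc.inf /log C:\Logs\fdcc-apply.log

      REM Audit: compare the running configuration against the same database
      secedit /analyze /db C:\Windows\security\fdcc.sdb /log C:\Logs\fdcc-audit.log

In a real deployment you’d drive this through Group Policy rather than one machine at a time, but the idea is the same: configure the master once, then audit for drift.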

So what about those users who go on the road? Well, this is where Virtual Desktop is still in play. Using Microsoft MED-V or Citrix XenDesktop, a user can still take their FDCC-approved image and applications on the road with them. The bonus of Virtual Desktop deployments is that the same process and image-based deployments can be done directly on a physical machine as well. You just take that master image, the settings, and even the application virtualization, and deploy it directly on a laptop. Something like Microsoft System Center Configuration Manager plus the Microsoft Deployment Toolkit (a solution accelerator) delivers this type of deployment scenario for both virtual and physical targets.

Just like any Virtual Desktop deployment, it’s not like server virtualization! Managing the deployment and operations for a Virtual Desktop Infrastructure (VDI) is extremely different and requires lots of up-front planning. Not that server virtualization doesn’t, but when you consider the number of different users actually logging onto those Virtual Desktops, there are lots of end-user scenarios you have to think through. Even with the guidance of the OMB for FDCC (see, here comes the acronym soup), you may still define additional policies for given user roles, which could include access to applications via a variety of delivery models (e.g. web applications, application virtualization, etc.).

Categories: Technology, Virtualization

Hosted Virtual Machines with XenApp

July 30, 2009

Today on one of Citrix’s blogs, they announced an upcoming technology called “Hosted Virtual Machines” (HVM). As if the virtualization soup of technologies wasn’t big enough already! Still, this does solve an interesting problem. Without much more information on the subject, here is my take.

Short Version: You want to host an application on a managed VM, but Terminal Services won’t work for a number of reasons. With HVM, you use a client OS such as Windows XP to run the application, but the presentation of it (just like with Terminal Services) is sent to the user.

Long Version: It seems that “virtualization” is getting attached to more and more new technologies, but at the end of the day it’s about access to applications (and that includes the OS and other applications). Let’s put aside the delivery of an OS for now and focus just on the application. There are a number of ways to provide a user with access to an application.

  1. Traditional – This is where you get a CD, or copy files from a file share, and install the application locally.
  2. Terminal Services – Based on a single OS instance, such as Windows Server 2008, that allows multiple users to log on at one time, each with their own “space”/desktop.  The display of that OS, or sometimes just a given application, is presented to the user.  Everything runs on the server but is shown to the user on their computer.
  3. Application Virtualization – There are a few flavors of this.  The simplest view is that it’s about delivery.  The application is “preinstalled” and “captured” on a given OS (you do a traditional install, but all files, registry settings, shortcuts, etc. are captured) and then deployed to any number of users.  So one “install” is then executed on any number of computers.  The application will run on the local computer, BUT it’s not installed there.  No files, registry entries, or shortcuts are anywhere to be found on your computer, yet it still works locally.  That’s the virtual part.  Again, it’s all about deployment.

The big issue here is the ability to still provide “Terminal Services”-like delivery of applications while overcoming some of the issues that Terminal Services (TS) has. What kind of TS issues? Well, TS is still a server OS. It doesn’t have many of the client components (e.g. those in Windows 7) that some applications require. TS is also multi-user based, and there are some applications that don’t work there either.

So why can’t Application Virtualization (e.g. Microsoft App-V or Citrix XenApp) work? First off, there are certain applications developed by custom software development shops, built for a given customer and a given OS/application mix. There are other applications certified by an Independent Software Vendor (ISV) with specific requirements. And then there are organizations like government, health care, and more that need to ensure that certain applications and data behave in a given way. For all of these scenarios, an IT shop may want to provide an application to their users but refrain from deploying it locally; it won’t work via TS, and Application Virtualization won’t fulfill their requirements either.

So this upcoming Citrix XenApp solution opens some very interesting doors. I don’t think it will be part of mass adoption, but it will break down certain barriers.

This leads me to think of other solutions, such as Microsoft MED-V (part of MDOP) and MokaFive, that provide this kind of host-based virtualization, although with HVM, Citrix also allows it to be hosted on a server. I guess I’ll have to wait for a Citrix demo and trial to learn more.

BTW, I wonder how this will impact hosters looking to get into the application delivery model. Since this does require another client OS, Citrix rightfully notes that you’ll need the Microsoft VECD license. Too bad VECD is not on the Microsoft SPLA list. Bummer.

Categories: Uncategorized

Microsoft VDI Suites Licensing

July 14, 2009

I just read the announcement that Microsoft put out about their new licensing model for Virtual Desktop Infrastructure (VDI):

http://blogs.technet.com/virtualization/archive/2009/07/13/Microsoft_1920_s-new-VDI-licensing_3A00_-VDI-Suites.aspx

If you’ve ever had to figure out Microsoft licensing for any type of business use, you’ll know how complex (and frustrating) it can be. There are plenty of good reasons why it’s so complicated. The VDI scenario makes licensing even harder, since you’re not just talking about end devices anymore (e.g. your laptop); you’re also dealing with many virtual components (e.g. virtual applications deployed on a virtual desktop, deployed on a virtual server, accessed from a thin client).

So based on the new announcement, when it comes to doing an all Microsoft VDI solution, what licensing components do you need to keep in mind?

  1. Microsoft Virtual Desktop Infrastructure Standard Suite (VDIS) – This is the “platform” license.  It covers all of the licenses you need to run a complete Microsoft VDI solution.  It spans the virtual OS (Hyper-V), management (System Center + MDOP), and the server CAL (Remote Desktop).  (The Premium Suite includes additional rights for session-based Remote Desktop, formerly named Terminal Services.)
  2. Microsoft Windows Virtual Enterprise Centralized Desktop (VECD) – This is the actual client OS license.  It’s licensed per device (e.g. the thin client that you connect from) and allows you to run up to 4 OS instances from that device (which can be spread across any number of servers).  Even if you’re NOT using Microsoft for your hypervisor and/or management (e.g. you’re using VMware or Citrix), you MUST still purchase this license.

So what does this mean from a cost perspective? Both of the licenses above are priced per device (e.g. thin client or “legacy” PC), per year; a quick worked example follows the list.

  1. VDIS Standard: $21.00 (US) per year
  2. VECD for SA: $23.00 (US) per year – This is if your device runs a Windows client OS that ALREADY has Software Assurance on it.
  3. VECD: $110.00 (US) per year – This is for a traditional thin client.
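
To make the math concrete (a made-up example, not official pricing guidance): a hypothetical 100-seat deployment on traditional thin clients would run 100 × ($21 + $110) = $13,100 per year, while the same 100 seats on SA-covered Windows PCs would be 100 × ($21 + $23) = $4,400 per year.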

So why does Microsoft do a VECD license in the first place? If you look at the license of the Windows client OS (like I’m sure we all do), you’ll notice that the license is perpetual. So where you install it, it must stay. Not only that, you need an OS license for each OS instance you use. With VECD, you don’t have that same headache. The IT department can deploy any number of combinations of Windows XP, Windows Vista, and Windows 7 for specific roles, tasks, training, or whatever. It doesn’t need to track the total number of virtual OS instances for licensing, as the OS license is tracked by the number of end devices using a given image. Now this doesn’t mean that an IT department will deploy thousands of images (what a headache); there are better ways to use “golden” images and to dynamically deploy new Virtual Machines to a “pool” of available clients (future post!!). But this does free up the IT department to provide OS instances and applications on demand for customers, because VECD covers them to do so! Again, this is a MUST for ALL VDI DEPLOYMENTS, no matter what vendor you use for Virtual Desktop.

I think the license change from Microsoft will make it MUCH easier for customers to budget for Virtual Desktop using the Microsoft platform.

Categories: Uncategorized

ISV Guidelines for Hosted Microsoft Dynamics CRM 4.0 Part 1

February 16, 2009

The intent of this series of posts is to provide basic guidelines for Service Providers looking to offer CRM as a target platform for ISVs that want to deploy their applications on the Internet under a Software as a Service (SaaS) model. It’s also a guide for ISVs to understand that the design decisions they make during development will have a profound impact on their hosting options with regard to deployment architectures and pricing.

  • Part 1: Introduction
  • Part 2: CRM as an Application or Platform
  • Part 3: Shared or Virtualization Deployment & Licensing
  • Part 4: Provisioning & Control Panels
  • Part 5: Making the Leap

Introduction

With the release of Microsoft Dynamics CRM 4.0 (MSCRM4), Microsoft has provided not only a great CRM application, but also a business application platform. Many software vendors and consulting organizations have already leveraged MSCRM4 in the traditional deployment, where the application or solution is installed locally on a customer’s server. This scenario is typically called “On-Premise”. While this deployment model works great for some customers, many business departments are looking to gain access to applications that improve their business, but without the hassle and cost of deployment and operations within their IT department. It’s not that an IT department can’t handle new applications, but it takes time, money, and knowledge to add a new application into the business. Business applications that are hosted on the Internet and accessible via a traditional browser are known as “Software as a Service” (SaaS). The business world is all abuzz about SaaS and its potential to deliver rich applications to departments, on demand, with a monthly fee, and without upfront deployment or hardware costs. Sounds great, right? Well, for some scenarios it is pretty great, but there are a number of reasons why it might not be so hot (e.g. security, Internet outages, performance, end-user training, etc.).

Here are some examples of why the SaaS deployment model is so interesting for many customers:

  1. Trial – Your software may be great and the value high, but how will the customer know if they can’t try it in all its glory?  Sure, they could download the software and use it, but now that requires hardware, time, and the knowledge to get it installed and configured.  With the SaaS model, they can get access to your application instantly!  Even if they are interested in an on-premise deployment, they can at least get a feel for it right away, which will help with their buying decision.
  2. Temporary Usage – Some customers may decide that they do want the on-premise version.  This could be for any number of reasons of their own, or because your on-premise version has more capabilities (e.g. integration with a VoIP solution, devices, etc.) than the SaaS version.  In this scenario, the customer goes beyond the online trial and wants to continue using the product.  Let’s say it’s going to take six months for the customer’s IT team to purchase, deploy, and operationalize [killing the English language] an environment for the on-premise version.  Until then, the customer uses the SaaS version.  This gives the customer flexibility in their deployment and instant access to an application that will improve their business, and it increases your sales and revenue.
  3. Migrations – I’m sure you’ve seen a number of customers that would LOVE to move to a new version of a software application they’ve been using, but the time and cost to upgrade hardware, migrate the data, and learn the new platform is just too much for them.  This is another great scenario where SaaS meets the business needs of the customer, removes the strain on their IT department, and increases revenue for you (the SaaS vendor).
  4. SaaS Everything – There is a growing trend for many organizations to outsource more and more of their applications.  Well, that’s what the industry says, at least.  For those businesses, you need to have SaaS as a delivery option or you may lose their business.

There are a number of Service Providers out there today offering hosted solutions for Microsoft Exchange Server (for consumer and business mail), Windows Server (for web hosting with Internet Information Services, which is part of Windows Server), SQL Server (for databases), and SharePoint Services (for document and information collaboration). MSCRM4 is a natural extension for Service Providers to offer as well. While there is much competition in the space of CRM systems on the Internet, including the current leader Salesforce.com, MSCRM4 is easily configurable, extensible, and leverages the Microsoft .NET Framework, which will enable the army of Microsoft developers hooked on their Microsoft Visual Studio development environment to build rich business applications.

When developing software, the sky is the limit! Especially when developing on the Microsoft platform and technologies. But you must be careful to follow some basic guidelines to ensure your application can be hosted as a SaaS application and meet your target business objectives. There is much to consider, and I hope you find the rest of this series helpful.

Note: If there are specific areas you’d like me to cover in future posts, please post a comment below.

Categories: Cloud, CRM

My first book: How to Cheat at IIS 7 Server Administration

April 16, 2007

I was fortunate enough to be asked to contribute to a new book focused on IIS 7.0. I thought this would be a great way to dive deep into the product, which would in turn help me sell the platform as Windows Server 2008 came out (although the book focuses on Windows Vista). The final name of the book is “How to Cheat at IIS 7 Server Administration”. The target audience is the IT generalist looking to quickly learn IIS 7 and perform the standard operational and support functions. This was my first time contributing to a book, and it was quite the effort. It took up many nights and weekends, plus I had a two-week deadline (that was rough). I may contribute again in the future, but now I’ll have a better idea of what I’m getting into and can plan accordingly (with my family, that is).

Chris Adams was the technical editor; I was one of the contributing authors and produced Chapter 6: Troubleshooting 101. Although I’m not listed as an official author, my name is printed on the cover of the book and I’m listed as a contributor within the book as well.

  • Paperback: 384 pages
  • Publisher: Syngress (May 28, 2007)
  • Language: English
  • ISBN-10: 1597491551
  • ISBN-13: 978-1597491556
  • Product Dimensions: 9 x 7.4 x 1.2 inches

Here is a blog post by Chris Adams as he gives a brief overview of the book: http://blogs.iis.net/chrisad/archive/2007/07/13/first-it-pro-focused-iis7-book-hits-market-how-to-cheat-at-iis7.aspx

Get your copy today!

http://www.amazon.com/gp/product/1597491551?ie=UTF8&tag=agramontnet-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1597491551

Categories: Uncategorized

Copy/Increase a VHD using VHDMount

January 4, 2007

So after my first post on the subject of “expanding a VHD” by actually copying the contents of a VHD image using the WAIK (Copy a Microsoft Virtual Machine VHD/Increase VHD size using Windows Automated Installation Kit), I got some good feedback on my laziness in not going down the VHDMount route. So I took a bit of time tonight to work through that scenario.

A key new feature in Service Pack 1 for Microsoft Virtual Server 2005 R2 is a new tool named “Offline VHD Mounting”. It’s a command-line utility that allows you to mount a Virtual Hard Drive (VHD) as a local drive on a host server. The great advantage of doing it this way versus using an imaging technology (like I did using ImageX.exe as part of the WAIK) is that you don’t need the overhead of taking a snapshot, deploying, rebooting, etc. You simply mount the original image, mount the new image, and then do a copy.

Before we get started, please make sure that you have downloaded and installed Virtual Server 2005 R2 SP1, which includes the VHDMount utility.

So here are the steps I took to “expand a VHD” by copying the contents from the original to a new VHD created with a much larger maximum size.

  1. Ensure the Virtual Machine (VM) that holds the original VHD is currently stopped.
  2. Mount the original image
    1. Open a Command Prompt and change directories

      cd “C:\Program Files\Microsoft Virtual Server\Vhdmount\”

    2. Mount the original image (the path below reflects my deployment; the trailing V is the drive letter to mount it under):

      vhdmount.exe /m F:\Images\Configs\DepA-Web02\DepA-Web02.vhd V

  3. Create a new VHD – Using the Virtual Server Administration Website, create a new Virtual Hard Drive (VHD) that has the new storage capacity that you feel you’ll need.
    1. Click Start > All Programs > Microsoft Virtual Server > Virtual Server Administration Website
    2. From the Virtual Disks menu, select “Dynamically Expanding Virtual Hard Disk” or “Fixed Size Virtual Hard Disk”
    3. Provide the new hard drive name and location. You may also notice that the default size for a VHD is no longer 16 GB, but is now 127 GB.
    4. Click the Create button
  4. Mount the new VHD (same steps as step #2, but you need to point to the new VHD location)
  5. Once the new VHD is mounted, I wasn’t able to actually see it using File Explorer. Like any new drive, virtual or physical, it has yet to be partitioned and formatted. So the next step is to create a partition and format the new VHD. This (as with everything else) is done on the host machine. There are two ways we can perform this task: the GUI way, using the “Disk Management” MMC snap-in, or the command-line way, using the “DiskPart” utility. Below are the steps using DiskPart.
    1. Open a Command Prompt and execute:

      diskpart

    2. Find the disk number for the new image. It should have “Size” and “Free” values that are the same number AND should match the size of the VHD that you created.

      list disk

    3. Now we’ll create the partition on the drive (the disk number for my new VHD was 4, but you should use the number you found from the command above) and assign it a drive letter

      select disk 4

      clean

      create partition primary

      assign letter=w

      exit

    4. The next step is to format the new drive

      format w: /FS:NTFS /Q

    5. Finally, we’ll go back into diskpart and activate the partition

      diskpart

      select disk 4

      select partition 1

      active

      exit

  6. At this point, we now have a VHD drive ready to be used. So now we’ll simply start copying all of the contents from the original image to the new image. (This will take a while…)
    1. Using the xcopy command built into Windows, we’ll copy all of the contents (/E copies all subdirectories including empty ones, /H copies hidden and system files, /K keeps file attributes, and /O copies ownership and ACL information)

      xcopy v:\ w:\ /E /H /K /O

  7. Once the copying is complete, we should now unmount the drives
    1. Open a Command Prompt and change directories

      cd “C:\Program Files\Microsoft Virtual Server\Vhdmount\”

    2. Unmount both images

      vhdmount.exe /u v

      vhdmount.exe /u w

  8. We’re now ready to point the target Virtual Machine from the original VHD to the new VHD.
    1. Click Start > All Programs > Microsoft Virtual Server > Virtual Server Administration Website
    2. From the “Virtual Machines” menu on the left side, hover over the “Configure” section and then click on the target Virtual Machine from the pop out window.
    3. Click on the “Hard Disks” configuration section link
    4. We don’t “really” need to remove and add a new hard drive. What we’ll do instead is change the old path to the new path in the “Fully qualified path to file” section for the appropriate disk. Then click OK.
  9. We’re all done! Now you can start up your VM and it should be using the new VHD, complete with the previous (and I’m sure time-consuming to create) content. But don’t delete your old VHD just yet. I’d give it a few hours or days of testing before you delete the original VHD to save space.

I’m sure most of this could be scripted out, and perhaps that’s another late-night project for me. But for now, this should at least get you going down a path that worked for me.
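
If you do want to script it, below is a rough, untested sketch of the manual steps above rolled into a single batch file. The new VHD file name, the drive letters, and disk number 4 are carried over from my walkthrough and are assumptions you’d need to adjust for your own setup. In particular, double-check the disk number with “list disk” before running anything that calls clean, since clean wipes the selected disk.

      @echo off
      set VHDMOUNT="C:\Program Files\Microsoft Virtual Server\Vhdmount\vhdmount.exe"

      REM Mount the original and the new (larger) VHD
      %VHDMOUNT% /m F:\Images\Configs\DepA-Web02\DepA-Web02.vhd V
      %VHDMOUNT% /m F:\Images\Configs\DepA-Web02\DepA-Web02-New.vhd W

      REM DiskPart can read its commands from a script file via /s
      (
      echo select disk 4
      echo clean
      echo create partition primary
      echo assign letter=w
      echo select partition 1
      echo active
      ) > %TEMP%\newvhd.txt
      diskpart /s %TEMP%\newvhd.txt

      REM format prompts Y/N on hard disks, so feed it a Y
      echo Y | format w: /FS:NTFS /Q /V:NewVHD

      REM Copy everything, then unmount both images
      xcopy v:\ w:\ /E /H /K /O
      %VHDMOUNT% /u v
      %VHDMOUNT% /u w

Treat this as a starting point rather than something to run blind; I’d still walk through the manual steps at least once first.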

Update:

I thought some of you might find the before and after file sizes interesting:

  • Original VHD: 15 GB
  • WIM of original VHD: 5.58 GB
  • VHD using WAIK: 12.5 GB
  • VHD using VHDMount: 13.3 GB
Categories: Uncategorized