VMware Fusion 5.0 and Remote Desktop (RDP) Port Forwarding

So, the dilemma with my previous post’s setup: I now have two VMs residing on the same VLAN, and one of them is an Active Directory (AD) server.

The current problem is that I want to access the second (non-AD) server from the outside world, beyond the host machine.  Normally this is a private VLAN with no outside access, but you can configure port forwarding so that port 3389 on the external host is mapped to one of the statically mapped internal hosts (for me, the non-AD server).

Here’s the deal:

The file /Library/Preferences/VMware Fusion/networking appears to be produced by the VMware Fusion networking configuration dialogs, and it appears to drive the creation of the files /Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf and /Library/Preferences/VMware Fusion/vmnet8/nat.conf.

The latter of these two files, nat.conf, is where we need to change the NAT settings so that port 3389 requests to the host are forwarded into the private VLAN.  The problem?  This file is periodically overwritten by VMware processes when VMware is restarted, when network configuration changes are made in the GUI, and so on.

I don’t have a long-term fix for you here: I can’t find any answers online or in the documentation.  Ideally, you would make the changes to the /Library/Preferences/VMware Fusion/networking file, restart VMware (or reset the VMware network stack), and the system would regenerate working copies of the nat.conf file.  The problem is that the VMware GUI doesn’t support port forwarding configuration, so you can only “hand hack” the nat.conf file, backing up your changes or risking losing them periodically.

Networking file sample (/Library/Preferences/VMware Fusion/networking)

answer VNET_1_DHCP yes
answer VNET_1_DHCP_CFG_HASH 4DB1B0245BA0BF9FBCD5D55DA675F9D605B179EF
answer VNET_8_DHCP yes
answer VNET_8_DHCP_CFG_HASH 946433F88FFC278E2E5B8A325353B972A0B5D762
answer VNET_8_NAT yes
add_bridge_mapping en1 2

Basically, you can see there are no available settings for port forwarding.  Nonetheless, this file is transformed by backend processes into nat.conf, which includes this snippet at the end:


# Use these with care – anyone can enter into your VM through these…
# The format and example are as follows:
# <external port number> = <VM’s IP address>:<VM’s port number>
#8080 = 172.16.3.128:80
3389 = <your VM’s static IP address>:3389

I’ve added the last line to this file to map the RDP service to the static IP of the VLAN’ed VM.  With that in place, any RDP requests coming into my host machine will be redirected into that VM.

Once this change is made, you can reset the network interface and load these changes as follows:

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

The only problem now?  This file is overwritten when restarting VMware, so I’ve backed up the configuration and written a script that I quickly rerun prior to starting the VMs (copy the file over, stop, then start the interface).  Clunky, but workable.  If you know a way around this, let me know….
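That copy-plus-bounce wrapper is short enough to sketch in Ruby. The vmnet-cli and nat.conf paths come from this post; the backup location is my own choice, not a VMware convention, and the step list is built separately so you can inspect it before anything destructive runs:

```ruby
require 'fileutils'

VMNET_CLI = '/Applications/VMware Fusion.app/Contents/Library/vmnet-cli'.freeze
NAT_CONF  = '/Library/Preferences/VMware Fusion/vmnet8/nat.conf'.freeze
BACKUP    = File.expand_path('~/vmware-backups/nat.conf') # hypothetical backup spot

# Build the step list separately so it can be inspected (or dry-run) first.
def restore_steps(backup, nat_conf, cli)
  [[:copy, backup, nat_conf],
   [:run,  cli, '--stop'],
   [:run,  cli, '--start']]
end

# Execute the steps: restore the hand-hacked nat.conf, then bounce vmnet.
def apply!(steps)
  steps.each do |op, *args|
    case op
    when :copy then FileUtils.cp(args[0], args[1])
    when :run  then system(args[0], args[1]) || raise("#{args.join(' ')} failed")
    end
  end
end

# apply!(restore_steps(BACKUP, NAT_CONF, VMNET_CLI))   # run with sudo
```

The final line is left commented out since the whole thing needs root to touch /Library/Preferences.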

Another way, BTW, to see this “overwrite” process is to simply run the configure command:

sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --configure

This will blow away your nat.conf changes, which you’ll need to reset.

Working with Static IPs, Virtual Networks, and Shared Internet Access in VMware Fusion 5

VMware Fusion 5.0 and Static IPs for Windows AD and Client Machines

Here’s the problem. You have two Windows VMs (one of them an AD server, another a server that uses the first as its Active Directory). You want these two VMs to remain in sync, and they both must see each other on the local, virtual network, but you also want them both to be able to reach the shared internet connection used by your Mac. What now? Your main challenge is ensuring that the Windows Server that isn’t the Active Directory (AD) server won’t have name resolution issues.

Core Question: How can two servers on a private network both share the host’s internet access AND always see each other, particularly when one of those machines is both the Active Directory (AD) and DNS server for the private network?

Step 1

Check the file dhcpd.conf in /Library/Preferences/VMware Fusion/vmnet8. Find the following information:

subnet 172.16.165.0 netmask 255.255.255.0 {
option broadcast-address 172.16.165.255;
option domain-name-servers 172.16.165.2;
option domain-name "localdomain";
default-lease-time 1800; # default is 30 minutes
max-lease-time 7200; # default is 2 hours
option netbios-name-servers 172.16.165.2;
option routers 172.16.165.2;

Note that this VMware Fusion 5 instance is using 172.16.165.X as the private VLAN for VMs.

This means that any address where X is between 3 and 127 can be used as a static address for one or more VMs. Meanwhile, your gateway should be 172.16.165.2, as this is VMware’s NAT routing address.

NOTE: Your VM may use a different range of addresses, in which case replace 172.16.165 with your own prefix.
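A quick way to sanity-check a candidate address against that range (my own throwaway helper, not anything VMware provides):

```ruby
# Returns true when ip is on the given NAT prefix with a host octet in the
# safe static range (3..127), clear of the gateway (.2) and the DHCP pool.
def valid_static_ip?(ip, subnet_prefix = '172.16.165')
  prefix, _, host = ip.rpartition('.')
  prefix == subnet_prefix && (3..127).cover?(host.to_i)
end
```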

Step 2

Take VM #1 (your AD server) and set the VMware network adapter to “Share with My Mac” (which is NAT, essentially). Configure the network settings with a static IP in the usable range described above (for example, 172.16.165.10), a 255.255.255.0 subnet mask, and 172.16.165.2 as the gateway.

DNS Server: 172.16.165.2 (the VMware NAT DNS address).

Step 3 – Take VM#2 (your server that will access the AD server) and set the VMware network adapter to “Share with My Mac” (which is NAT, essentially). Configure the network settings with another static IP in the usable range (for example, 172.16.165.11), the same 255.255.255.0 subnet mask, and 172.16.165.2 as the gateway.

DNS Server: the AD server’s static IP (note that we use the AD server as the DNS server).

This allows VM#2 to use the AD server for authentication and any DNS within your private network. This allows VM#1 to act as both DNS and AD locally, but passes through any other requests to the host system.

So, since you are now using static IPs, when your VMs start up, they can always find each other…. good news!

Quickly looking up documents using Document ID’s in SharePoint – Part 2

In my last post, I discussed some of the issues I had seen with DocIdRedir.aspx, and I’ve concluded that the main solution is to author a new version of the page with some enhanced “abilities”.  In my case, I ended up authoring an .ashx handler rather than an .aspx page.  In doing this, I started with two main references:

Kobi’s excellent post on writing an .ashx handler for Doc Id redirection:


The decompiled code from the current DocIdRedir.aspx (as a basis for our new functionality):


In my case, however, I want an entirely different behavior from the “redirect me to the file” behavior.  I’m looking to significantly enhance the data that the handler can return, so that I can get all of the links, metadata, and publishing status of the current document (including returning the link to the latest published version, not the latest version of the file).  As a result, I’ve chosen to return a payload of JSON, with the purpose of letting the calling page / javascript choose the appropriate path based on the returned data.

In this example, my modified DocIdBatchData.ashx has the following features:

  • You can submit a GET request to return the data for a single document OR you can post a collection of Document Id’s to resolve many documents at once.
  • All data is returned in JSON format, allowing javascript to quickly parse and deal with the results.
  • You can request additional metadata to be returned by posting those fields and Document Id’s into the handler.

A sample of the returned data is as follows:

{
  "data": [{
    "mData": {
      "Title": "SABCS Poster Book",
      "docId": "3AWMK2HN32DR-4-3",
      "versionLabel": "",
      "docSiteUrl": "http://myserver:40921",
      "docWebUrl": "http://myserver:40921",
      "docWebServerRelativeUrl": "/",
      "listGuid": "caa79391-faf7-4095-9ed1-486a55dba9bf",
      "listName": "Library Objects",
      "listItemId": "3",
      "listItemFilename": "LOFile2.pdf",
      "urlHistory": "_layouts/Versions.aspx?list={caa79391-faf7-4095-9ed1-486a55dba9bf}&ID=3",
      "versionLatest": "1.0",
      "versionLatestUrl": "Library Objects/LOFile2.pdf",
      "versionLatestFolder": "Library Objects",
      "versionRequested": "1.0",
      "versionRequestedIsPub": true,
      "versionRequestedUrlDisplay": "Library Objects/Forms/DispForm.aspx?ID=3&VersionNo=512",
      "versionRequestedUrl": "Library Objects/LOFile2.pdf",
      "versionRequestedContentType": "0x010100548E9644DF2341A6AE310514AE3913D900130033306EFE434696B5DD9689DC30A4",
      "errorStatus": 0,
      "errorMessage": null
    }
  }],
  "errorStatus": 0,
  "errorMessage": null
}

So, the primary value proposition here is that we’ve used the existing DocIdRedir.aspx file resolution, but instead of simply redirecting the user, we are allowing the caller to get some additional data back, and allowing that user / page / javascript / client code to then make a CHOICE about where to go.
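To make the calling side concrete, here’s a rough Ruby client for such a handler. The handler path matches the one in this post, but the request parameter names (“docIds”, “fields”) are my own assumptions about the payload shape, not a documented contract:

```ruby
require 'json'
require 'net/http'

# Build the JSON body for a batch lookup. Parameter names are assumptions.
def batch_request_body(doc_ids, fields = [])
  { 'docIds' => doc_ids, 'fields' => fields }.to_json
end

# Pull the per-document published URLs out of a parsed response payload.
def published_urls(payload)
  payload.fetch('data', []).map { |d| d.dig('mData', 'versionRequestedUrl') }.compact
end

# POST a batch of Document Ids and return the published URLs.
def resolve_doc_ids(site_url, doc_ids)
  uri = URI("#{site_url}/_layouts/DocIdBatchData.ashx")
  res = Net::HTTP.post(uri, batch_request_body(doc_ids),
                       'Content-Type' => 'application/json')
  published_urls(JSON.parse(res.body))
end
```

The point of splitting out `published_urls` is that the caller gets the whole `mData` hash back and can just as easily pick `urlHistory` or `versionRequestedUrlDisplay` instead.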

Challenges with quickly looking up document properties using Document ID’s in SharePoint

Just a few running notes on my work with Document ID’s in SharePoint 2010.  The main challenge that I’m seeing in general with Doc Id’s is that they can only be resolved in one of two ways to actual documents:

  1. You can run a search within your SharePoint search center in the form “docid: 6WJUPFHUC735-11-1” (you can use your own ID in this case).  I presume in the event that you are using FAST, you’ll want to search on “spdocid:” but I haven’t verified this yet.
  2. You can send a query to “http://<serverurl>/_layouts/DocIdRedir.aspx?ID=6WJUPFHUC735-11-1”, which simply issues a 302 (object moved) redirect and sends your browser to the file content.

In both of these cases, however, the challenge is that you are unable to immediately get to document properties or other ways of manipulating the document.  You either need to take the resulting search result (parsing the search result) to track down a reference to the document (solution #1, above), or you need to parse the contents of the 302 redirect and perform an additional query to lookup the document properties.
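For solution #2, “parsing the contents of the 302” amounts to issuing the GET and reading the Location header instead of following it. A sketch (the redirect check is deliberately duck-typed so any response-shaped object works):

```ruby
require 'net/http'

# Build the DocIdRedir.aspx lookup URI for a given Document Id.
def doc_id_redirect_uri(server_url, doc_id)
  URI("#{server_url}/_layouts/DocIdRedir.aspx?ID=#{doc_id}")
end

# Pull the Location header off a 3xx response; nil for anything else.
def redirect_target(response)
  response.code.start_with?('3') ? response['Location'] : nil
end

# target = redirect_target(Net::HTTP.get_response(
#            doc_id_redirect_uri('http://myserver', '6WJUPFHUC735-11-1')))
```

You still need a second round trip to turn that target URL into document metadata, which is exactly the two-hop problem described below.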

From a Document Id, there is not a straightforward way (in a single hop) to get a reference to the document object or the document metadata.  Perhaps I’m obsessing here, but it just feels like you shouldn’t need two hops from Document Id before you can view document metadata.

I’ll be posting further on this once I find a solution…

Removing Unversioned Files and Modified Files as Part of a Continuous Integration Build

As part of my continuous integration builds using CruiseControl, I’ve fallen into the habit of the following pattern:

  1. Perform an SVN Update (get the latest release)
  2. Overwrite the updated project directory with a set of static files. For example, if my project lives in ${cc.home}/projects/${project.name}, I’ll have another directory under ${cc.home}/nonvcsfiles/${project.name} in which I store unversioned content.
  3. Perform the build
  4. Copy out the build artifacts.

Why do I follow such a pattern, you ask? The reasons are twofold:

First, I deal with some rather complex deployments to multiple servers, etc. and this is an easy way of overwriting properties files, web.xml files (Java), and web.config files (.NET) with server specific values. Since I also use CruiseControl to deploy to these servers as part of a larger, integration test scheme, I might have the same build doing a lot of different things. This way, I can store a directory of new or modified files that is simply xcopied over the build directory AFTER the SVN update. When producing artifacts such as WAR files, it allows me to pre-tailor the web.xml file prior to deployment.
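The overlay step itself is just a recursive copy. Sketched in Ruby against the directory layout above (on Windows this is the xcopy step; the helper name is mine):

```ruby
require 'fileutils'

# Copy everything under nonvcsfiles/<project> over the freshly updated
# working copy in projects/<project>, clobbering the trunk versions of
# web.xml, web.config, properties files, etc.
def overlay!(cc_home, project)
  src = File.join(cc_home, 'nonvcsfiles', project, '.')  # '/.' copies contents
  dst = File.join(cc_home, 'projects', project)
  FileUtils.cp_r(src, dst, remove_destination: true)
end
```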

Second, it allows me to copy in large quantities of binary files that might be linked in. A number of my projects are now making use of Maven and Artifactory to maintain version relationships to Jar files, etc., and I’m aware that you can use svn:externals to store external libraries or code. In some cases, however, I’m using the same CruiseControl instances to build .NET, Java (ant) and Java (maven) projects all at once, and the simplest thing to do is copy external library binaries into the requisite lib directories without putting them explicitly under version control. Maybe you’ve got a problem with this, and I can understand that, but sometimes, the quick solution is the easy solution. We keep good records of our library dependencies, and store them under a directory structure that makes versioning evident; additionally, we document the dependencies and versions in a repeatable way. Enough about that, that’s another holy war…

The net result of this is that once I’m done with a build in CruiseControl, my build directory is littered with modified configuration files, newly copied binary libraries, and some other crud. The question is: what do you do to revert your build directory back to its pristine, trunk-versioned goodness?

The answer is twofold: revert any modified files using svn revert, and then delete unversioned files and directories.

The second part of that turned out to be a little trickier than I would have thought. If you want to delete unversioned files automatically, you’ve got a bit of a problem: you need some external scripting or a quick macro / command-line incantation, which required some research.

After a little looking, I found a nice summary of methods that you can use at this link: Automatically Remove Subversion Unversioned Files. It shows a number of methods and number of scripting languages.

For my use, I need to do this on a Windows 2003 machine with the Subversion command-line tools. I did find a Windows command-line solution in the link above, but unfortunately it doesn’t work for directories, and it doesn’t work for files with spaces in their names. With a few tweaks, however, here are the three lines of script needed:

svn revert -R .
for /f "tokens=1*" %i in ('svn status ^| find "?"') do del /q "%j"
for /f "tokens=1*" %i in ('svn status ^| find "?"') do rd /s /q "%j"

The first line reverts any of the files that are modified. The second line deletes any unversioned files, and the third line deletes any unversioned directories (and their contents, recursively).
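If batch’s quoting rules ever bite you, the same cleanup is easy to express in a scripting language. Here’s a Ruby sketch that parses the `svn status` output itself (the status text is passed in as a parameter so the parsing stays testable; the function names are mine):

```ruby
require 'fileutils'

# Lines flagged '?' in `svn status` are unversioned; everything after the
# status column is the path (spaces included).
def unversioned_paths(status_output)
  status_output.each_line
               .select { |line| line.start_with?('?') }
               .map    { |line| line[1..-1].strip }
end

# Remove every unversioned file or directory reported by `svn status`.
def clean_unversioned!(status_output = `svn status`)
  unversioned_paths(status_output).each { |path| FileUtils.rm_rf(path) }
end
```

Like the batch version, this respects svn:ignore, since ignored items never show up in `svn status` output.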

There are two GREAT things about this process:

  • If there are certain build files or directories that you don’t want deleted or cleaned up, you can add them to the svn:ignore property, and they will be ignored by version control and NOT deleted by this process.
  • This ensures that any additional non-versioned files that make it onto your build server over time are accounted for, either appearing in version control or appearing in the special nonvcsfiles directories.

The final step is to drop these items into an Ant target that can be called either before a new build or at the end of a build to clean up. Just use an exec task for each line, like this:

<!-- clean out nonversioned files -->
<target name="clean">
  <!-- revert all SVN version controlled files -->
  <exec executable="svn">
    <arg value="revert"/>
    <arg value="-R"/>
    <arg value="."/>
  </exec>

  <!-- delete all unversioned files not explicitly ignored by SVN -->
  <exec executable="cmd">
    <arg value="/c"/>
    <arg value="for /f &quot;tokens=1*&quot; %i in ('svn status ^| find &quot;?&quot;') do del /q &quot;%j&quot;"/>
  </exec>

  <!-- delete all unversioned directories not explicitly ignored by SVN -->
  <exec executable="cmd">
    <arg value="/c"/>
    <arg value="for /f &quot;tokens=1*&quot; %i in ('svn status ^| find &quot;?&quot;') do rd /s /q &quot;%j&quot;"/>
  </exec>
</target>

More Items that Broke During Snow Leopard Upgrade…

Having recently been through a Snow Leopard upgrade on my Macbook Pro (2.4 GHz Core Duo model), I previously noted a number of issues with my Ruby on Rails install related to two core issues:

  • Java is automatically upgraded to version 1.6
  • Many of your Ruby gems need to be reinstalled due to Snow Leopard’s 64 bit support

My latest issue is around my Logitech keyboard and mouse. They still work, but the Logitech Control Center doesn’t. This is the message I see in the Control Center: “No Logitech Device Found”.

The solution appears to be a simple reinstallation of the Logitech drivers, which can be found here:
Logitech Control Center

Alternately, I found some great instructions at TUAW.

Git / Eclipse / The egit Plugin / My Perspectives

I’ve started working in Git for some of my Ruby work, and I’m trying to work some Eclipse integration into the mix. I’m a big fan of IDEs, from the standpoint that they can provide some real efficiencies for certain environments. For example, I use Aptana Studio for my Ruby development, and I really like some of its features.

One of my complaints so far, however, is that Git has lagged behind Subversion in terms of quality integration with the IDE, or simply visual client inspectors in general. Frankly, when I work in version control (such as Subversion), I expect my IDE to be able to diff between two branches in the version control system (VCS), and I expect to be able to “double click” a file and see a simple diff between two versions of the same file. That should ALL be integrated.

I shouldn’t be hand-cobbling together a loose collection of tools that don’t work very well together just to inspect files, diff them, and reconcile code changes between two branches… well, excuse me if I think the best tools should have those features integrated. Purists might disagree, but if I’m at the command line, manually comparing text diffs between two files, I don’t feel that efficient.

Let’s be honest here moment #1:

Tortoise, the Windows client for Subversion released by tigris.org, is one of the MAIN reasons behind the widespread acceptance of Subversion. What does it do well? It provides an environment for quickly browsing and merging changes graphically, and while it is an external tool, it’s tied directly into the file explorer (like it or hate it, users understand it).

Let’s be honest here moment #2:

Not all developers have the same skill level. Not all developers are going to immediately understand how to transition to a new VCS tool, and they aren’t all going to jump there unless the basics (installing it, basic check-in / check-out, basic merge functionality) are at LEAST on par with what they currently use. At least not en masse.

It’s this fact alone that was responsible for some of the changes in the Eclipse development platform with respect to downloading and installing things like VCS / Team providers. The menu completely changed in the most recent versions to a graphical menu that prompts you to download the appropriate native SVN or SVNkit libraries. Basically, you’ll get a significant uptick in usage if you spend more time on making the initial install painless.

This brings me to egit, which is the latest effort to bring Git functionality to Eclipse through the Mylyn team integration. It looks very basic for the moment, and I certainly applaud the effort. This is what it will take to bring Git mainstream. But…. in spite of the simplicity, I can’t get it to work for some Git projects.

Simply trying to share a new project through the Team menu results, for me, in the dreaded “spinning beachball of death”: after selecting my new project (already under Git control), Eclipse simply locks up… (boo!).


The primary issue here seems to be that my project is ALREADY under Git control. If I attempt to use the Team -> Share -> Git option for non-VCS controlled directories, things seem to work normally.

Anyhow, I’ve logged what I think is my issue over at the google code issue tracker for the egit project. Let me know if you see something similar yourselves!


Other than that, however, I certainly appreciate the first steps that are being taken by the egit team!

Ruby / MacBook / Snow Leopard Upgrade

After upgrading to Snow Leopard, I was greeted with the following upon trying to run my latest Ruby application:

/path/to/gems/ruby-debug-base-0.10.3/lib/ruby_debug.bundle: dlopen(/path/to/gems/ruby-debug-base-0.10.3/lib/ruby_debug.bundle, 9): no suitable image found.  Did find: (LoadError)
/path/to/gems/ruby-debug-base-0.10.3/lib/ruby_debug.bundle: no matching architecture in universal wrapper -/path/to/gems/ruby-debug-base-0.10.3/lib/ruby_debug.bundle

/path/to/gems/linecache-0.43/lib/../lib/trace_nums.bundle: dlopen(/path/to/gems/linecache-0.43/lib/../lib/trace_nums.bundle, 9): no suitable image found.  Did find: (LoadError)
/path/to/gems/linecache-0.43/lib/../lib/trace_nums.bundle: no matching architecture in universal wrapper - /path/to/gems/linecache-0.43/lib/../lib/trace_nums.bundle

/path/to/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:76:in `establish_connection': Please install the postgresql adapter: `gem install activerecord-postgresql-adapter` (dlopen(/path/to/gems/pg-0.8.0/lib/pg.bundle, 9): no suitable image found.  Did find: (RuntimeError)
/path/to/gems/pg-0.8.0/lib/pg.bundle: no matching architecture in universal wrapper - /path/to/gems/pg-0.8.0/lib/pg.bundle)

I did find a quick fix: basically, the gems with native extensions need to be reinstalled as 64-bit builds.

I found a good script for doing this at:


From the irb interface, try this:

$ irb
  irb> `gem list`.each_line {|line| `sudo env ARCHFLAGS="-arch x86_64" gem install #{line.split.first}`}

After about 20 minutes of thrashing, my system was up and functional again!

As pointed out by Jeffrey Lee, this could be a little more verbose with the following modifications:

`gem list`.each_line {|line| puts "Installing #{line.split.first}"; `sudo env ARCHFLAGS="-arch x86_64" gem install #{line.split.first}`}

Ruby on Rails and iPhone Web App Development – Part 1

In this article, I’ll be describing my setup for a simple iPhone application that I’ve built using Ruby on Rails. The primary objective was to quickly build out a prototype application demonstrating the use of a mobile device for entering critical data in the field (in this case, for pharmaceutical sales reps), as well as content integration with another application for rapidly searching and repurposing legacy Word and Excel 2003 content (that part comes later!).

This is a high-level description, so I’ll only be providing an overview of some of the Rails functionality; those looking for a full walkthrough might be a little disappointed.

PART 1 – Building the Basic Application and Simple Data Entry

My environment:

OS: Macbook Pro (Leopard 10.5.7)

Primary Dev Environment: Eclipse (Galileo) + Aptana RadRails (v1.5.1)

Target iPhone Device: iPhone 3G

My Plugins / Gems / Rails Addons

jRails v0.4 – An excellent substitute for the default prototype libraries used in Rails.

tank-engine – austinrfnd’s branch of Noel Rappin’s tank-engine project.

Creating the Basic Application

After creating the basic application in Aptana Studio, I created a very basic page for handling new medical information requests:

script/generate scaffold mirequest summary:string description:string potential_ae:string potential_pc:string primaryproduct

At this point, I could access basic RESTful functions for index, new, edit, etc. through a standard browser.

Adding iPhone Functionality

In order to best use the iPhone, I first started with Noel Rappin’s tank-engine plugin. This is a simple successor to his original rails-iui plugin, but features integration with the jQuery library instead of the iui javascript and css files. I actually started with one of the most current branches of this code, based on some changes by austinrfnd in his own branch (tank-engine).

There are some good demonstrations of using this functionality in a series of articles hosted by IBM:

Developing iPhone applications using Ruby on Rails and Eclipse…

So, setting up the environment looked something like this:

Install the jQuery Rails plugin (my version is 0.4)

sudo script/plugin install git://github.com/aaronchi/jrails.git

Manually copy the javascript files from the plugin to your javascripts directory.

Install the tank-engine widget

sudo script/plugin install git://github.com/austinrfnd/tank-engine.git

rake tank_engine:install

At this point, I needed to modify a few files to add the iPhone rendering format to my mirequests index view.

Copy and rename views/mirequests/index.html.erb to views/mirequests/index.iphone.erb.

The next step is to enable the controller to detect iPhone specific requests and to include the plug-in helpers for tank-engine.

  acts_as_iphone_controller :test_mode => true

  include TankEngineHelper

Finally, you can get a basic set of buttons and a title-bar by editing the index.iphone.erb.

Here’s what mine looks like.


<%
  l = { :caption => 'Create MI', :url => new_mirequest_path, :html_options => { :class => 'te_slide_left' } }
  r = { :back => true, :caption => 'Back', :url => "/", :html_options => {} }
%>
<%= te_navigation_bar( r, "MI Request", l ) %>

<% panel do %>

  <h2>Pending MI Requests:</h2>

  <% fieldset do %>
    <% @mirequests.each do |mirequest| %>
      <% row mirequest.summary do %>
        <%=h mirequest.description %>
        <%= te_link_to 'Show', mirequest %>
      <% end %>
    <% end %>
  <% end %>

<% end %>

You can find documentation on these helpers in the tank-engine github site as well as the IBM articles listed above. I did have some formatting issues, however, and the usage of some of the tank-engine helpers wasn’t completely documented. One thing that I learned was that unless you place the row and fieldset helpers inside the panel helper, you won’t get the intended look and feel.

The result should look something like this:


In the next article, we’ll discuss some of the formatting issues and shortcomings with the tank-engine library, and how I chose to spice this up a little bit.