Loading data with KNIME, part 2

So in part 1 I grabbed data from an FTP site and saved it to my computer. Now in part 2 I will move it from one database (the .dbf files) to another (SQL Server).

Doing so involves three parts.

  1. Unzipping the files
  2. Connecting KNIME to the .dbf file
  3. Loading that data into SQL Server

Step 1:


Node 70:    Location:  Wherever you saved the zip files from part 1.  Wildcard: F*.zip

Node 153:  This takes the four characters before the .zip extension, which we need to determine the year.  Append column “year”:

reverse(substr(reverse(string($Location$)),4 ,4))

Node 154:  Switch Statement in Java to take into account the renaming of the files in the 90s.  Out column “result”

switch (c_year) {
    case "BF00": out_result = "2000"; break;
    case "BF94": out_result = "1994"; break;
    case "BF95": out_result = "1995"; break;
    case "BF96": out_result = "1996"; break;
    case "BF97": out_result = "1997"; break;
    case "BF98": out_result = "1998"; break;
    case "BF99": out_result = "1999"; break;
    default: out_result = c_year;
}
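For reference outside KNIME, the same two steps can be sketched in plain Java. The sample file names below are assumptions; the mapping mirrors the switch above.

```java
// Sketch of Nodes 153/154 in plain Java: take the 4 characters before the
// ".zip" extension, then normalize the 1990s "BFxx" names to 4-digit years.
public class YearFromFile {

    // Equivalent of reverse(substr(reverse(location), 4, 4)):
    // skip the trailing ".zip" and keep the 4 characters before it.
    static String yearToken(String location) {
        return location.substring(location.length() - 8, location.length() - 4);
    }

    // Equivalent of the Node 154 switch statement.
    static String normalize(String token) {
        switch (token) {
            case "BF00": return "2000";
            case "BF94": return "1994";
            case "BF95": return "1995";
            case "BF96": return "1996";
            case "BF97": return "1997";
            case "BF98": return "1998";
            case "BF99": return "1999";
            default:     return token;  // modern files already end in the year
        }
    }
}
```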

Node 155:  Sort from most recent file (2013) to the first.  Use the “result” column from above and choose descending.  This is important for step 3.

Node 71:  Loop Start.  No changes

Node 74:  Include Location field only.

Node 75:  Creating the directory string. Append Column “NewPath”

join("C:\\Users\\kxe2732\\Desktop\\farsfiles\\", replaceChars(substr($${SLocation}$$,35), ".zip", "\\"))

Node 79:  Creating the file string.  Append Column “NewDIR”:  replaceChars(substr($${SLocation}$$,35), ".zip", "b")
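Those two expressions can also be sketched in plain Java, assuming the same base folder and character offset of 35 as the node settings above (your path and offset will differ). For these file names, `String.replace(".zip", ...)` lands on the same result as KNIME's `replaceChars()`.

```java
// Sketch of Nodes 75/79: derive the unzip directory ("NewPath") and the
// file string ("NewDIR") from the FTP Location string.
public class PathStrings {
    static final String BASE = "C:\\Users\\kxe2732\\Desktop\\farsfiles\\";

    static String newPath(String location) {
        String file = location.substring(35);       // e.g. "FARS2012.zip"
        return BASE + file.replace(".zip", "\\");   // e.g. "...\\FARS2012\\"
    }

    static String newDir(String location) {
        return location.substring(35).replace(".zip", "b");
    }
}
```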

Node 83:  Making the directory.  Output the “resultz” variable if you want to check (not critical).

String resultz;
File file = new File(c_NewPath);
if (!file.exists()) {
    resultz = "new";
    new File(c_NewPath).mkdir();
} else {
    resultz = "else";
}
out_result = resultz;

Node 77:  No changes.

Unzip:  Show the variable ports and pass Node 77’s red line to the input variable port of this node.   Change the source input to use the “Location” variable.  The Output directory* needs to be configured to wherever you want to drop the unzipped data.  This location is important for step 2.  It is ALSO important that you choose the “Overwrite” radio button in the “If a file exists…” section.
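For what the Unzip node is doing under the hood, here is a minimal stand-in using java.util.zip: extract every entry of a .zip into an output directory, overwriting existing files (the “Overwrite” option above). The paths you pass are your own.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Minimal equivalent of the Unzip node: walk the archive entry by entry
// and copy each file out, replacing anything already on disk.
public class UnzipSketch {
    static void unzip(Path zipFile, Path outDir) throws IOException {
        try (ZipInputStream in = new ZipInputStream(Files.newInputStream(zipFile))) {
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                Path target = outDir.resolve(entry.getName());
                if (entry.isDirectory()) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```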

Step 2

This will require some work outside of KNIME before setting up the nodes.  MANUALLY unzip the most recent year’s file into the same Output directory* location referenced immediately above.  You should see a number of files ending with .dbf.  Open up your ODBC manager/administrator in Windows (I usually just type in ODBC).  Create a new ODBC connection.  Use a User DSN (or a System DSN if you want).  Press “Add” and scroll down to the Microsoft dBase Driver (*.dbf).  There could be multiple versions and different ways of spelling it, so keep your eyes open for something similar.  For reference I am using version 6.01.7601.xxxxx.  Next you will see this screen:


Now name it whatever you want, and choose version 5.0 and then select the same Output directory* listed previously.  If you followed the unzip instructions at the start of step 2 then you should see some .dbf files.  Now we can get back to KNIME and the nodes in step 2:


odbc:  This is the node that uses the ODBC connection you just made.  Change the Database driver to "sun.jdbc.odbc.JdbcOdbcDriver" and change the Database URL to "jdbc:odbc:FARS2012".  You probably want to check “Allow spaces in column names”.

accident (top database reader node): Connect the output variable port from Unzip to the input to this.  This serves no purpose other than to convince KNIME that this section is part of the loop.  Put “SELECT * FROM accident” into the SQL Statement.

vehicle (middle database reader node):  Same as above except change the SQL statement to “SELECT * FROM vehicle”

person (bottom database reader node):  Same as above except change the SQL statement to “SELECT * FROM person”
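If you want to sanity-check the DSN outside KNIME, the same read can be sketched in Java. Note the JDBC-ODBC bridge driver shipped with Java 7 and earlier (it was removed in Java 8); "FARS2012" is the DSN created above, and the rest is an illustration rather than anything from the post.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Rough equivalent of the three database reader nodes: connect through
// the ODBC DSN and read a table out of the .dbf files.
public class DbfRead {

    static String url(String dsn) {
        return "jdbc:odbc:" + dsn;
    }

    static void dump(String dsn, String table) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");   // Java 7 and earlier only
        try (Connection con = DriverManager.getConnection(url(dsn));
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
            while (rs.next()) {
                System.out.println(rs.getString(1));     // first column of each row
            }
        }
    }
}
```

Calling `dump("FARS2012", "accident")` would mirror the top reader node, with "vehicle" and "person" for the other two.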

Node 141 & Node 148:  The vehicle and person tables do not contain the year.  If we are going to find any usable information we most certainly need to inject the year into the data somehow.   You can inject the ‘year’ variable calculated previously in step 1, or just copy the same formula: reverse(substr(reverse(string($${SLocation}$$)),4,4)).    Update:  Use “year” so you don’t populate the bad 90s names and later have to run updates on your database.

Step 3

Unfortunately here is where the loop fails to be a good loop and requires MANUAL work.  Not every year will have the same fields to load.  This loop will fail eventually.  Thankfully KNIME doesn’t crash and will politely handle these instances.  It will pause the loop and wait for you to manually correct things.  This setup though limits the manual work and makes it extremely easy.  First of all make another odbc connection to the destination database.  Set up the nodes like so:


Node 145:  Table Name:  tbl_accident_A.  Under the “Append Data” heading check both “… to existing table (if any!)” and “insert NULL for missing columns”.  This is important as is the “A” in the table name.

Node 143: Table Name: tbl_vehicle_A

Node 149: Table Name:  tbl_person_A

Node 72:  Just connect the output of unzip from step 1 to here.  No changes.

Column Filter Nodes 152, 150, & 151:  This is where you will need to make a decision.  Which columns do you keep and which do you eliminate?  Do you choose fewer fields and let the system run longer unattended, or do you choose more fields and require more manual intervention?  If you only choose fields that appear in all the files (there are quite a few; refer to the FARS documentation for details) then the loop will run correctly the whole time.  However, if you want every field for every year then this will fail.  You will need to manually come to these nodes, remove the non-existent columns, then change Nodes 145, 143, & 149 to use table names like “tbl_xxxx_B”, incrementing the letter for each failure.  Then in SQL do a simple insert from, say, E to D with NULL for the missing columns (then tbl_xxxx_D to tbl_xxxx_C, then tbl_xxxx_C to tbl_xxxx_B, then tbl_xxxx_B to tbl_xxxx_A).  This was easy, but a little tedious.
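The roll-up insert can be generated mechanically. This is a hedged sketch of the SQL described above, built in Java so you can see the shape of the statement; the column names are hypothetical, not the real FARS schema.

```java
// Build an INSERT that copies a "B" table into its "A" table, padding
// columns the older files lacked with NULL.
public class RollUpSql {

    static String rollUp(String base, char from, char to,
                         String sharedCols, String missingCols) {
        StringBuilder sql = new StringBuilder();
        sql.append("INSERT INTO ").append(base).append('_').append(to)
           .append(" (").append(sharedCols).append(", ").append(missingCols)
           .append(") SELECT ").append(sharedCols);
        // one NULL per column missing from the source table
        for (int i = 0; i < missingCols.split(",").length; i++) {
            sql.append(", NULL");
        }
        return sql.append(" FROM ").append(base).append('_').append(from).toString();
    }
}
```

For example, `rollUp("tbl_accident", 'B', 'A', "st_case, year", "drunk_dr")` produces an insert from tbl_accident_B into tbl_accident_A with NULL for the hypothetical drunk_dr column.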


Note for checking your database:  it’s important to use “with (nolock)” on your queries, because otherwise SQL Server will possibly wait until KNIME is done with the table to run your ‘what-you-thought-would-be-quick-select-top-10’ query. 

Grabbing tons of data via KNIME loop. US Traffic fatalities. Part 1

Andy Cotgreave did an amazing Tableau workbook about Seasonality in US road fatalities and it is located here:

It is a cool, wonderful visualization; that’s obvious, as he’s a Zen Master.  In addition, he went into detail about his design choices.  But how did he get all that data (or rather, how can you get it)?  Well, the US has several great government resources for various pieces of data.  The National Highway Traffic Safety Administration (NHTSA) has a reporting system called FARS which contains a wealth of data.  FARS is short for exactly what it is: Fatality Analysis Reporting System.  I would like to personally call out this administration for having an extraordinarily clear name for something.  I mean, even if you Googled something approximate to those words you would probably find it.

Here is an easy KNIME loop to download the data:

KNIME came out with some new features, several of these were related to grabbing files.  No longer do you have to program it in Java or Python.  KNIME now offers an easy way to do it.  What I like the most about KNIME is it is essentially self-documenting.  This data flow clearly shows me what is happening, plus allowing for some annotation.

The settings of the nodes used:

Node 62:  The FTP Connection is simply “ftp.nhtsa.dot.gov” and the port is the default FTP port of 21.  Make sure Authentication is set to ‘none’.  Then test your connection and BAZINGA:

Node 63:  Change directory to /FARS/

Node 64:  This extracts the Uniform Resource Identifier (URI) information.  Just make sure “Path” is checked.

Node 65:  This String Manipulation node adds string($Path$+"/DBF/"), which appends a second subdirectory that, thankfully, is consistent throughout all the files.

Node 67:  No changes.

Node 61:  Download.  Show the variable ports, take the directory you constructed with the above nodes, and feed it into the source file or folder.  Change your target folder to wherever you want the data to be downloaded.  THIS IS VERY IMPORTANT:  do not download all files, as it is a waste to grab things you don’t need.  Make sure you check “Only download files that match” and then choose F*.zip as a wildcard.  This will limit the number of downloaded files.  Also, if you are only interested in a subset of the years, add a second String Manipulation node using “substr($Path$,6,4)” and a Row Filter node to remove unneeded folders from the list.
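For comparison, the same grab can be sketched in plain Java, since java.net.URL can open ftp:// streams directly. The year-extraction helper mirrors the substr($Path$,6,4) expression above; the exact directory layout and file names on the NHTSA server are assumptions here.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Plain-Java sketch of the download loop.
public class FtpGrab {

    // Equivalent of substr($Path$, 6, 4): pull the year out of "/FARS/2012".
    static String yearOf(String path) {
        return path.substring(6, 10);
    }

    // Download one file; needs a live connection, so it is not exercised here.
    static void download(String url, Path target) throws Exception {
        try (InputStream in = new URL(url).openStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```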

Node 68:  No changes.

“1 Row Test Case”  This node can replace node 67 if you don’t want a loop and just want to download 1 at a time.

Vanpoolin’ Using Tableau’s table-calcs to visualize the change in commuting miles driven.

Rolling with the homies to work. How my vehicle miles traveled (VMT)  have changed.

Impatient?  Download the workbook.

Here you can see how my commuting patterns have changed over the past few years.   I have owned a 1998 Volvo S70, a Ford Focus, and a Chevy Equinox.  The Volvo was driven exclusively by me, and the Focus/Equinox have been a mix of me and my wife.  For the past several years I have been recording my mileage in a program called Vehical, available for the iPhone.  Unfortunately it only records one car, but since my cars all had very different mileage, it was easy to distinguish them over the years.  This brings me to the point of this visualization, which is adjusting Tableau table calcs to display disparate quantities in a comprehensible way.

Problem:  A simple graphing of the data shows little information because all odometers and dates are very different.  Trend line slopes and VMT are not comparable.

By using table calcs, we can force all lines into the same range so that they can be compared easily.  Now we can see some interesting things.  First off, although my wife and I have traded off driving the Equinox and Focus, for the entirety of my vanpooling I was driving the Focus (orange).  You can see a significant departure in the cumulative mileage of the Focus from the other two cars.  In addition, although the Equinox is no longer used for commuting, we put more road-trip miles on it, and those are far more visible here than in the graph above.


In addition, there are other questions that remain unanswered.  For instance, how am I doing compared to the US average?  Although diagonal reference lines are very much possible, I will choose another approach and create a rate table calculation.  I used nearly the same formula as above, but used “-1” to tell Tableau to make the calculation relative to the previous value instead of “first().”  More trends become visible and road trips become far more obvious.
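For reference, the two calcs are along these lines in Tableau's calculation language; the [Odometer] field name is my assumption, not necessarily what the workbook uses.

```
// Normalize each car to its own starting point, so all lines share a range:
SUM([Odometer]) - LOOKUP(SUM([Odometer]), FIRST())

// Rate version: difference from the previous reading instead of the first:
SUM([Odometer]) - LOOKUP(SUM([Odometer]), -1)
```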


Sweet!  Now as you can see, some of the trends are way more obvious.  Road trips stand out, and a job change during the Volvo days starts to become more visible.  In addition, I added a reference line at the 40 Mi/Day mark.  This represents the current VMT that I found from the Federal Reserve.  If you’re interested in US economic data, then FRED is it.  They also have a nice Excel add-in, and their data is also in .txt files directly linkable via the web.

US 12 month rolling VMT Per working-age population:

Now we will add a moving average table calc.  This mellows out the data and you can see how the trends have moved over time.  Tada!  I’m driving much less than average.



Check out the full workbook.

Custom Tableau Color Palettes

Tableau comes pre-packaged with many useful color palettes. Tableau also allows you to use a custom sequential or custom diverging palette from within Tableau. Unfortunately the custom option only allows two colors. But let’s say you see a packaged Tableau 3-color palette you like and you want to make a few adjustments to it, making it lighter or darker.  Here is an example of what I did to the temperature palette: I added 2 lighter lights.


The first thing you will need to do is, of course, take a screenshot of the Tableau palette.  You can then paste that into a paint program and use the eyedropper tool.  I suggest that you download a free tool such as Paint.NET.  That program gives you an eyedropper tool and shows you the color code in hexadecimal (i.e. #FFFFFF is white).  Unfortunately Microsoft’s pre-loaded Paint program doesn’t give you this info.  It can be converted from the link below, though, if you don’t want to download Paint.NET.

Go to http://colllor.com (I have no idea how to pronounce that; I can only assume it’s similar to the many ‘L’s after someone scores a goal in soccer) and paste in the hexadecimal code (or choose it from the selector).  Colllor will give you a wide range of similar colors.  I was interested in the shades/tones section right on top.  I wanted to take the default temperature palette that Tableau provides and lighten it up a bit.   Another option for finding similar colors and hex numbers is this website: http://encycolorpedia.com/.   It could be useful in certain circumstances, but it provides far too many options, so the links below will be via colllor.  I used the eyedropper tool and found that Tableau has these 5 colors for that first temperature palette.


I have gone ahead and picked two sets of lighter Tableau Temperature colors:  “Lighter” and “Lightest”.

Navigate to C:\Users\<username>\Documents\My Tableau Repository\ and edit Preferences.tps in a text editor (make a copy just in case).   Between the <preferences> tags add in this XML:

<color-palette name="Temperature_Lighter" type="ordered-diverging">
<color-palette name="Temperature_Lightest" type="ordered-diverging">

This will give you two additional temperature palettes.
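For the overall shape of the edit, a complete palette entry in Preferences.tps looks like this; the hex values below are placeholders, so substitute the colllor-derived codes you picked.

```xml
<?xml version='1.0'?>
<workbook>
  <preferences>
    <!-- Placeholder hex values; replace with your own. -->
    <color-palette name="Temperature_Lighter" type="ordered-diverging">
      <color>#cde8f0</color>
      <color>#f8f8f8</color>
      <color>#f5d3cd</color>
    </color-palette>
  </preferences>
</workbook>
```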



Tableau Zen Master Joe Mako gave me an incredibly simple tip.  This is exactly why the Tableau Zen Masters are recognized by Tableau: they are actively looking in the community and offering their advice.  Joe advised taking the current temperature palette, clicking on one of the colors, making no changes to the values, and then pressing ‘Add to Custom Palette.’ This changes the XML in the workbook and reveals the exact colors Tableau uses for their continuous palettes.  I’m not going to pretend that I regularly look over my Tableau workbooks’ XML, but it was fairly easy to locate the palette.  Name your sheet something you will remember, then search for that name within the XML.  For instance, I named mine ‘CustomDiverg’ and found this line immediately:

<worksheet name=’CustomDiverg’>

under the style header, you should be able to find this line, and the 5 colors under it:

<color-palette custom='true' name='' type='ordered-diverging'>



Here is an animated example courtesy of Joe:



US Economic Progress

So I decided to create an Economic index using data from the Federal Reserve.  I decided on these 6 metrics to monitor our economic progress:

1. Civilian Labor Force Participation Rate
2. Compensation of Employees: Wages & Salary Accruals vs. ½ GDP
3. GDP (x2) vs All Total Debts
4. GDP vs Consumer Price Index
5. Current Real Median Household Income in the United States vs Max
6. M2 Velocity: Velocity of Money.

Link to Tableau Workbook: https://public.tableausoftware.com/views/EconomicProgressIndex/ReportDashboard

Full Detailed Explanation:
1. Civilian Labor Force Participation Rate: This is a better measure than unemployment because it captures actual workers vs the rest of the population. Discouraged workers are included in this. Assumed that higher is better.

2. Compensation of Employees: Wages & Salary Accruals vs. ½ GDP (private industries): How much of the American pie is going to private workers? It is related to metric #1 but captures whether workers are being paid better for the work they are doing. Various factors could structurally alter this number. Obviously automation can and will lower this number. However, even in the late 90s it only peaked at 0.4.

3. GDP (x2) vs All Total Debts: A broad measure of all debts to all income. Gov’t, student loan, mortgage, credit, business, etc. to 2x GDP. How leveraged are we? The higher this metric, the more risk we have. One can take out debt and improve the other metrics, but it exposes the country to much more risk than before. This metric assumes a 2:1 debt-to-income level is ideal.

4. GDP vs Consumer Price Index: CPI is a flawed measure, but still somewhat valid. CPI continues to rise, but how does it track in relation to GDP? If the CPI goes up faster than GDP, we are relatively becoming less rich. If level, then we are holding ground. If rising slower than GDP, then we are getting richer. More is better.

5. Real Median Household Income in the United States vs max: The household is the basic block of America. How is the average household doing relative to the best year? As long as REAL median household income shows continued gains this metric will be better.

6. M2 Velocity: Velocity of Money. Overall this tells how fast money passes from one individual holder to the next. “If the velocity of money is increasing, then transactions are occurring between individuals more frequently.” according to Wikipedia. Hard to define an ideal, but generally assumed that more is better.

Dualing Chartists. Ebola vs Auto Accidents

The first chart comes from an organization called Sightline , which is a Pacific Northwest sustainability blog.  Sustainability roughly translates to ‘don’t use cars’ for the lay person.  As such their POV is against cars and the deaths and pollution they cause.  The author’s point is that we underestimate risk and the coverage of Ebola is way overblown.

Anyways here is their chart


Source:  Sightline: http://daily.sightline.org/2014/10/16/ebola-versus-cars/

Now one of the commentators in the above link contributed this chart:


What I love about this is that neither person is wrong.  They are both properly using bar charts with the axis starting at zero.  However, the second person is missing the point of the first author’s article.  I also love how both are backing up their stories with DATA AND CHARTS!  Properly sourced data and properly used charts, to boot!  The first chart is showing how we improperly treat risk: we fear a rare event with a high mortality and completely ignore a very frequent event with a very low mortality.  Regardless, I thought this was a great view of dueling charts.

Algorithm influenced human behavior. The possible dangers of mass customization

A friend of mine clued me into this interesting article:  https://www.yahoo.com/tech/i-liked-everything-i-saw-on-facebook-for-2-days-heres-94435047974.html   You should read the article, but the gist is that a person ‘liked’ everything they saw on Facebook and then their feed quickly went to a place devoid of actual human posts.  The resulting posts were especially polarized and extreme, and driven not by regular people.  Mobile went that way quicker than the desktop version, probably due to less screen real estate and the higher importance of mobile ads; much of FB’s content is consumed through mobile.  It was an interesting experiment that the author, Mat Honan, conducted.  It led me to think and then write this post.

I’ll go on a tangent here about the nature of humor.  I think other human experiences are similar, though their mechanisms will necessarily differ.  Now let’s take all the things that a person thinks are funny and call this a person’s ‘humor-space.’  This space is greatly influenced by what you thought was funny in the past; your sense of humor develops over time.  Situational humor works for many groups of people because many people experience the exact same situations, and the comedian uses that to craft something funny.  Something is funny (there are many theories on this) because, in general, your brain rewards you for strengthening existing-but-weak neural connections.  Something is funny because the incident is not common, but not so far away as to have no connection (excluding any absurdists).  Two people will think something is funny when their brains share similar strengths of connections for the topic.  We can say that if two people think something is funny, then there is an intersection of their humor-spaces at that point in time.


Now go to Pinterest and type “humor” in the search bar (or just click http://www.pinterest.com/all/humor/).  Try it via different IP addresses, or the same IP address both logged in and not.  Are there differences in the content displayed?  I was able to see completely different information when logged in vs not.  Now what happens if you and I experience different content?  Would not our humor-spaces drift apart over time?  With such a vast amount of content, it’s improbable that we would see the same content.  Are the algorithms that feed us content divergent, convergent, or simply pseudo-random walks?    Now suppose that we discover a way to live forever; a likely way would be some method of uploading our consciousness to a virtual environment.  We will likely still be able to be fed and consume media content in the immortal age.  After decades and decades, or centuries, it will probably be inevitable that our experiences will drift farther from other people’s.

What would happen to two people after many years?  Would their humor-spaces diverge so far from each other that they are no longer similar?  Would we still be able to make the other person laugh?


Is live-event TV growing?  I’ve heard some chatter off and on about the way that networks are combating streaming TV services like Netflix, Hulu, and Amazon Prime by offering more live-event TV.  (I use the term live-event TV to include live TV and simply time-shifted viewing, like on a DVR.)   Think about it for a minute…  What was the last show you routinely watched and then went into the office/school the next day and said “Hey, did you see XYZ last night? Pretty awesome/thrilling/funny, huh?”  For me it was some of the earlier seasons of ‘The Office’, which ended in 2013.  It seems to me that networks are driving more content to live events that cannot be streamed later.  This is a way of converging people towards one set of content.  It seems to me that many people are interested in this, as they sense a drift in their experience-space from other people they know.  The Super Bowl is still a fun event, even though pro football doesn’t interest me.  I love it because we have parties and everyone is there sharing the same experience.  Our experience-spaces are becoming more similar and we like it.  Growth of live-event TV will continue as our media space becomes increasingly balkanized.

I use ‘balkanization’ on purpose.  The definition seems to indicate subdivisions with increasing hostility between them.  The article referenced above indicates that content was driven to hostile and incompatible extremes.

Data Engineering example. Java + SSIS to gather Macroeconomic data from the FED.

The Federal Reserve has a great service for data called FRED (Federal Reserve Economic Data), which is maintained by the St. Louis Fed.  It is one of the best sources of economic data about the United States.  One of the things they provide (other than the national overview) is more detailed slices of data by states and metro areas. FRED also provides an easy way to download the data via a .txt link. So if I wanted to see the seasonally adjusted unemployment data in Alaska, then I could just click on this link:  http://research.stlouisfed.org/fred2/data/AKUR.txt

Now if I wanted to gather that information for all 50-ish states (DC and PR may be there) then I ‘could’ click every link and then download every file. That would be a time-consuming endeavor, especially for data sets that may come out monthly.

Since FRED is very consistent, and we know the data exists behind simple .txt links, all we have to do is gather it using code and then load it into the database.  Just by looking at the file, I could see the pattern was <STATE CODE><METRIC>.txt.   Thankfully that was mercifully easy.  I then wrote a data loading program in Java using Eclipse. I ran it twice, once for seasonally adjusted unemployment (SA) data and once for not seasonally adjusted (NSA) data.  Java code: savefiles.java
This data is saved in a default location based on your Eclipse installation; for me it was saved here: C:\Users\<username>\workspace\savefiles\
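The original savefiles.java isn't reproduced here, but the core of it can be sketched like this: build each <STATE CODE><METRIC>.txt URL and save the file locally. The state list is truncated, the output folder is my assumption, and the only piece taken from the post itself is the URL pattern.

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hedged reconstruction of the FRED file grabber.
public class SaveFiles {

    // URL pattern from the post: <STATE CODE><METRIC>.txt
    static String fredUrl(String state, String metric) {
        return "http://research.stlouisfed.org/fred2/data/" + state + metric + ".txt";
    }

    // Download one file per state; needs a live connection to actually run.
    static void saveAll(String[] states, String metric, Path outDir) throws Exception {
        Files.createDirectories(outDir);
        for (String st : states) {
            try (InputStream in = new URL(fredUrl(st, metric)).openStream()) {
                Files.copy(in, outDir.resolve(st + metric + ".txt"),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```

You would run it once per metric, e.g. `saveAll(new String[]{"AK", "AL" /* ... */}, "UR", outDir)` for the seasonally adjusted series; the NSA series ID suffix may differ, so check a sample link first.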

Once you execute that java code, then you have to go through them and load them all.  You could manually open up every file and load them, but that also would take a long time.  Some open source methods include Talend and KNIME, BOTH of which have java modules.  If you want to productionize this specific example, you will want to explore those first.  For this first attempt though, I used SQL Server Integration Services (SSIS) to easily load the files since I have it and I’m very familiar with it.   Here are the basic parts of the SSIS package:

  • The File-name variable: We need to know what state and what attribute we’re trying to save. Create one called ‘filename’ and default it to the first file in the directory.
  • The Container: Add a ForEach Loop Container and add a DFT into it.
  • Container properties: Click on Collection and ensure the ‘Enumerator’ is set to ‘Foreach File Enumerator’; change the folder to wherever you saved the data previously, and change ‘Files’ to *.* (ensure no other files are there)
  • Click on Variable Mappings and choose the ‘filename’ variable you made earlier.
  • Next, create your source and destination connections.  Your source will be a flat-file connection, and your destination will be your database.
  • Flat File Connection: In this instance, make sure you skip the first 11 rows for the FED data:
  • Next click on Preview and verify that everything looks okay:
  • Next go into the Data Flow Task (DFT). Add in a ‘Flat File Source,’ a ‘Derived Column,’ and an ‘OLE DB Destination.’ Connect the modules like this:
  • Click on the ‘Derived Column’ module (called Get File Name in the image above) and add in a derived column called ‘File’ and configure it like this:

Choose a destination table and you will be able to load a bunch of FED data easily. Modify the file names and locations and you can then download & load a variety of state-level data sources fairly easily.

Why did I use Java and SSIS to do all of this? Well I had pulled files from the internet using Java in the past… and I had also used SSIS to load multiple files from a directory. So I just mashed together two easy things I had done before and it didn’t take much time. I knew Java had ways to interface with the internet, and I knew SSIS could loop and connect easily to a DB. Both of these are obvious. Unfortunately SSIS is not open and requires someone to have a SQL Server so this method is pretty restrictive for the part-time data engineer out there. Regardless, I was able to quickly capture 102 text files and load them into a database and build this visualization comparing the Seasonal vs Non-seasonal unemployment rates:



Rules and Probabilities for Double Monopoly

Tableau workbook with complete probabilities based on 4 board orientations and 2 8-sided dice with the doubles rule: bit.ly/1qEXfnK

So Double Monopoly has 2 boards.  It’s twice the fun! Boards are set corner-to-corner.  People move in a figure 8.  Boards can be joined at any corner, but to keep it consistent, let’s try Go to Go (GO2GO), Jail to Jail (J2J), Free Parking to Free Parking (FP2FP), or Go-to-Jail to Go-to-Jail (GJ2GJ).  As players will cross the shared space twice, nothing is diminished from that space’s probability.  If you were to use different corners, one corner would be hit only once.  Click on this:

Tableau workbook:


A complete monopoly can consist of the correctly corresponding properties from either reality, i.e. Park Place and Imperial Palace are a match.  Yoda’s Hut and Farmer Maggot’s is a match.  If you have Boardwalk and Imperial Palace, that’s not a match.  Nor is Hey Jude; Abbey Road and Ganondorf, etc., etc. Owning 2 complete monopolies of the same color doubles the value of the rent.  Any unimproved properties quadruple the unimproved price.  Having hotels on Boardwalk and Park Place, but nothing on Mt. Doom and Barad-dûr, still doubles the rent on BW and PP.

In order to speed things up, 8-sided dice are used.  (Or three 6-sided ones, if you ensure 2 are the same color so you can still use the three-doubles-go-to-jail rule.)  Also, to ensure those hard-to-land-on properties are hit, we can modify the Free Parking rule: Free Parking allows you to move to the next un-purchased property.  After all properties are purchased, FP reverts to however you’d usually use it.  Another method of speeding the game up is to shuffle and deal 3-5 properties at the beginning and have each person pay for them at the start.  These should help speed up the game.

Cards will apply only to the board of their origin. Go-to cards will move the player within the same board, enabling them to ‘bypass’ the 2nd board.  The Go-to-Jail card is the same.  The Go-to-Jail (G2J) space applies to the board the player had just left if the boards are joined at G2J.  Any special rules for that alternate-reality board apply to that board only.

$3k is given out.  Due to inflation :), $1s are not used, but now become $1,000s.  All prices are rounded to the nearest 5; Mediterranean Ave must be landed on twice for any pay.  Thus the only change to the starting money is 1 x $1k, plus an additional 1 x $500 and 1 x $5.

Currency options:  It is possible to keep both monies separate and require debts and purchases to be paid in each reality’s currency.  So if you need Imperial credits, and all you have is ‘love’, then you would have to trade or use the bank as a currency exchanger of last resort.  Since ‘money can’t buy you love’, you could make it a rule that the bank cannot exchange Monopoly money for Beatles ‘love’ bucks.  If you let the bank charge a large fee, say 50% ($100 Monopoly money becomes 50 Imperial credits), then other players can act as currency traders and arbitrageurs.  Forcing a very high fee, or the ‘money can’t buy you love’ rule, adds an additional level of screwage that other players can enact on a person.  However, the bank will need to use a previously-agreed-upon exchange rate for bankruptcy proceedings.  Preferably with something like this line:  “Republic credits? Republic credits are no good out here. I need something more real.”

Railroad rents:   Several multiplier options, haven’t decided on which

RR Options (rent by number of RRs owned):

RRs owned    x2       Half     x1.90
1            25       13          25
2            50       25          48
3            100      50          90
4            200      100        171
5            400      200        326
6            800      400        619
7            1,600    800      1,176
8            3,200    1,600    2,235

Utility rents:

Utilities owned    Dice multiplier (x2.5 per utility)
1                  4
2                  10
3                  25
4                  62.5

With two 8-sided dice, a roll of 16 would yield 1,000, making the utilities pretty lucrative, if only rarely (a 1/64 chance with two 8-sided dice).
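dub.java isn't reproduced here, but a minimal Monte Carlo version of the probability calculation might look like this. It handles only the dice mechanics: two 8-sided dice on an 80-space figure-8 track with the three-doubles-go-to-jail rule; cards and the Go-to-Jail space are left out, and jail at space 10 is an assumption.

```java
import java.util.Random;

// Monte Carlo sketch of landing probabilities for Double Monopoly.
public class DoubleMonopolySim {

    static double[] simulate(long turns, long seed) {
        Random rng = new Random(seed);
        long[] hits = new long[80];   // two 40-space boards joined in a figure 8
        long landings = 0;
        int pos = 0;
        for (long t = 0; t < turns; t++) {
            int doubles = 0;
            while (true) {
                int d1 = rng.nextInt(8) + 1, d2 = rng.nextInt(8) + 1;
                if (d1 == d2 && ++doubles == 3) {   // third double: straight to jail
                    pos = 10;                       // assumed jail space on board 1
                    hits[pos]++; landings++;
                    break;
                }
                pos = (pos + d1 + d2) % 80;
                hits[pos]++; landings++;
                if (d1 != d2) break;                // a double means you roll again
            }
        }
        double[] p = new double[80];
        for (int i = 0; i < 80; i++) p[i] = hits[i] / (double) landings;
        return p;
    }
}
```

With enough turns, the per-space frequencies settle toward the landing distribution; a jail bump around space 10 comes from the three-doubles rule alone.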



Java used to calculate the probabilities.  Should work in a standard Eclipse installation of Java 7: dub.java