Compounded Annual Growth Rate (CAGR) in MicroStrategy

The idea behind using a compounded annual growth rate is to smooth out the rate of change that something sees period over period across a number of successive periods. Out of the box, MicroStrategy has no function that creates this value automatically, so if a CAGR calculation is needed, either a custom metric or a custom subtotal is required.

The calculation for a compounded annual growth rate looks like this:

CAGR = (Ending Value / Beginning Value) ^ (1 / Number of Years) - 1

In the CAGR equation we divide the ending value by the beginning value, raise the result to the power of one over the number of years across which the value compounded, and subtract 1 to get a percentage value.
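
To make the arithmetic concrete (the numbers here are purely illustrative): if revenue grows from $1.0M to $1.8M over five years, the CAGR is (1.8 / 1.0) ^ (1 / 5) - 1 ≈ 0.125, or roughly 12.5% per year.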

…and in Excel, the formula uses the caret to raise the proportion of change to a power equal to one over the number of periods the data spans.

[Screenshot: the CAGR formula in Excel]

To do this in MicroStrategy, the best place to apply this logic is in a subtotal.  For the uninitiated, the ability to create new subtotals is very powerful, and often the best way to shoehorn a report look and feel into a grid.

To create a new subtotal for CAGR in MicroStrategy we have to specify the exponent using the Power() function, and the At sign ({@}) is the placeholder that tells the engine to evaluate the functions at whatever level the subtotal is applied.

[Screenshot: defining the CAGR subtotal]

Here’s the code for the calculation:

(POWER((LAST(x) {@}  / FIRST(x) {@} ), (1 / COUNT(YEAR) {@} )) - 1)
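
Conceptually, the subtotal is doing something like the following SQL sketch against a hypothetical yearly_revenue table (the table and column names are assumptions; the exponent mirrors the 1 / Count(Year) term above):

-- yearly_revenue(yr, revenue) is a hypothetical table with one row per year
SELECT POWER(
         MAX(CASE WHEN yr = (SELECT MAX(yr) FROM yearly_revenue) THEN revenue END)
       / MAX(CASE WHEN yr = (SELECT MIN(yr) FROM yearly_revenue) THEN revenue END),
         1.0 / COUNT(DISTINCT yr)
       ) - 1 AS cagr
FROM yearly_revenue;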

The second-to-last step in the process is to add the subtotal to a report. Since this subtotal is calculated over years, the CAGR should be the last column on the grid report. To make the CAGR subtotal visible as an option, it needs to be added to the list of available subtotals for the metric. In this example we are working with a revenue calculation, so we need to make the new subtotal available to the Revenue metric.

[Screenshot: adding the CAGR subtotal to the Revenue metric's available subtotals]

The final piece, once there is a report that contains both the Year attribute and the Revenue metric, is to add the subtotal at the right level. For this, go to the advanced subtotals section of the grid and apply the CAGR subtotal across the Year level.

[Screenshot: applying the CAGR subtotal across the Year level]

The final output should look something like this:

[Screenshot: the final report output with the CAGR subtotal]

Note that the calculation used in this subtotal is for years, but the concept can apply to any time period. With the same implementation, but a different attribute (Month), the calculation differs only by the attribute name:

(Power((Last(x) {@}  / First(x) {@} ), (1 / Count(Month) {@} )) - 1)


Extended Data Access

When MicroStrategy first announced its extended data access (XDA) support many moons ago, the concept didn't really resonate with me. Most of the use cases for XDA were one-off hacks that didn't have a place in traditional data warehouse-based environments. In recent years, however, the number of open-access APIs has exploded, and combined with multi-source capabilities, dashboards, Visual Insight, and especially Mobile, the ability to gather a data set from a REST, SOAP, or WebDAV source means that the data warehouse isn't as critical as it used to be.

One of my favorite sites to peruse is data.gov — when the site was first created, much of the data was a disparate set of Excel dumps, XML fragments, CSV, text, and so on. Not only are these formats harder to integrate, but they require someone to update the extracts. Since I have an interest in environmental and regulatory matters, I wanted to integrate business intelligence with some EPA data. I have also been doing some research to find ways to avoid writing ETL jobs whenever possible, and having the option to use a RESTful interface within MicroStrategy means that I can integrate data without having to write a set of data integration packages.

The high-level flow of tasks needed to create a report based on a data API is as follows:

  1. Create a database instance for XQuery
  2. Gather the proper XQuery syntax using the XQuery Editor and Generator
  3. Create a new Freeform SQL report
  4. Drop the XQuery definition into the Freeform SQL editor
  5. Create a Visual Insight document on top of the new report

So, creating a new XQuery database instance is pretty straightforward. The only trick is to reference the XQuery database type when defining the instance.

The real effort comes when putting the proper XQuery syntax together. When I first started researching how to create a report, the documentation wasn't very clear on what the syntax needed to look like, so I was worried that it would entail a lot of hand coding, which leads to mistakes, which leads to me getting frustrated and giving up. But as I started to play with the editor, I realized that it was quite flexible and could pretty quickly spin out the syntax I needed.

So, long story short, the query below is what ends up in the Freeform SQL report.

None of this needs to be typed; the XQuery Editor and Generator handles it all. In this example I am using the following URL to request TRI facility data from the EPA for New Hampshire:

http://iaspub.epa.gov/enviro/efservice/tri_facility/state_abbr/NH

The full query:

(: declare the MicroStrategy REST helper used to call the web service :)
declare copy-namespaces no-preserve, no-inherit;
declare namespace mstr-rest = 'http://www.microstrategy.com/xquery/rest-functions';
declare function mstr-rest:post($uri,$payload,$http-headers) external;

(: EPA Envirofacts request for TRI facilities in New Hampshire :)
let $uri := ('http://iaspub.epa.gov/enviro/efservice/tri_facility/state_abbr/NH')
let $http-headers := ( )
let $payload := ( )
let $result := mstr-rest:post($uri,$payload,$http-headers)


return
<Table><ColumnHeaders>
<ColumnHeader  name='FACILITY_NAME' type='xs:string' />
<ColumnHeader  name='PREF_LATITUDE' type='xs:string' />
<ColumnHeader  name='PREF_LONGITUDE' type='xs:string' />
<ColumnHeader  name='FAC_CLOSED_IND' type='xs:string' />
</ColumnHeaders>
<Data>{for $ancestor0 in $result/tri_facilityList/tri_facility
order by $ancestor0/FACILITY_NAME
return <Row>
<FACILITY_NAME>{fn:data($ancestor0/FACILITY_NAME)}</FACILITY_NAME>
<PREF_LATITUDE>{fn:data($ancestor0/PREF_LATITUDE)}</PREF_LATITUDE>
<PREF_LONGITUDE>-{fn:data($ancestor0/PREF_LONGITUDE)}</PREF_LONGITUDE>
<FAC_CLOSED_IND>{fn:data($ancestor0/FAC_CLOSED_IND)}</FAC_CLOSED_IND>
</Row>}</Data></Table>

This entire string was created by the editor (found in the MicroStrategy > Desktop program folder). The only trick I had to figure out was that the fields in the tree view response section have to be dragged and dropped onto the Main Grid View section. When the desired fields are in place, the code can be generated and loaded. The full query is available in the XQuery generation window and can be copied as-is and dropped into the Freeform SQL editor.

Once the query is in the report editor, the number of XML ColumnHeaders has to match the number of attributes and metrics defined in the report. In my example there are four column headers, and thus four attributes in my report.

Because the data returned by the URL is XML, MicroStrategy knows how to crosstab and render the results. I had to make one small adjustment to the data: putting a dash in front of the PREF_LONGITUDE field, since the longitudes come back without a negative sign and would otherwise map to Russia… which isn't very useful.

The resulting data is the lat/long coordinates of EPA toxic release sites. The high-level data doesn't indicate exactly what has happened at each site, but presumably additional queries against the other TRI tables would yield more information. The lat and long are easily consumed by the Visual Insight report, giving us this final output for Pennsylvania (my link above returns New Hampshire data, which is smaller and comes back faster):

One note: I haven't been able to tell whether MicroStrategy will be able to consume JSON as a data format any time soon. Doing so would presumably expand the number of native data sets that MicroStrategy could report off of.

MicroStrategy has several features that they just seem to throw out there without much fanfare or, for that matter, documentation. Having SOAP and REST support is important and opens up the platform to doing pretty interesting (and real-time) things, especially on Mobile devices. Here's just one idea: combine NOAA and Twitter data to localize weather information and sentiment. Is it really cold outside? If so, just how cold? 500 tweets in your zip code in the last hour cold???


Hubway Data Challenge Part 2

Once I had the Hubway bike system data loaded into a database and modeled into MicroStrategy, I could start to play with the data and do some basic profiling. The more I looked at the data, the more I wanted to add things. For example, the trips table has the birth year of subscribing riders, which lends itself to creating an age attribute. To model in age, I created an attribute and used the pass-through ApplySimple function. This is the basic syntax needed: ApplySimple("(year(#0)-#1)", [start_date_desc], [birth_date]).

When added to a report by itself, the age attribute will generate the following SQL:

SELECT DISTINCT (YEAR(a11.start_date_desc)-a11.birth_date) CustCol_3
FROM trips a11

As mentioned in the Part 1 post, the data offers the opportunity to add more layers and texture because the dimensions are so generic. Latitude and longitude coordinates can be used to derive elevation, which would answer one of the questions on the Hubway Data Challenge web site: do Hubway cyclists only ride downhill? A date dimension could be used to correlate against the academic schedule, or even gas prices. Anyway, on to the eye candy…

For those of you who read Stephen Few, you know that visual design isn't easy. Few's philosophy espouses simplicity and elegance over complexity and flash. If you can't generally understand the data in less than ten seconds, you have failed your audience. Basic, muted colors that make careful use of highlights are preferred over harsh and bright tones throughout. These are all great recommendations, and as I progress through the different phases of my interaction with the data I will adhere more closely to them. In the meantime, I simply want to profile the data using some basic charts and graphs. The alternative to graphing the data is wide and long grids of numbers with little visual appeal. The tradeoff is that you get to pivot the data, sort it, filter it, and so on, but exceptions, trends, and a general sense of the data quality don't readily present themselves.

So, a quick and dirty way to start to understand the data is to graph it. I have gotten used to the MicroStrategy graphing options, but many developers will cite the core graphing technology as one of the weaker aspects of the platform. The widgets and Visual Insight graphics have surpassed the Desktop graphing capabilities, but I still like to use the graph formatting to create vertical and horizontal bar charts, scatterplots, and time-series analyses. So, simply to get a flavor of the data, I created a few graphs.

This graph shows the activity (trips) for a month — in the page-by — and I tried to see whether there was a way to quickly tell whether temperature spikes led to a decrease in usage. To do this correctly I'd likely want to average out the trips by weekday and compute a rolling temperature average. Only with those means in place can I get a true understanding of whether a 10 degree shift in temperature leads to an n% variation in usage.

[Graph: trips and temperature, dual axis]
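
As a rough sketch of that adjustment in SQL, assuming a hypothetical daily_summary table that holds one row per day with the trip count and a NOAA temperature reading (all names here are assumptions):

-- average trips for each weekday, plus a 7-day rolling mean temperature
-- (EXTRACT(DOW ...) is PostgreSQL syntax for day of week)
SELECT trip_date,
       trips,
       AVG(trips)       OVER (PARTITION BY EXTRACT(DOW FROM trip_date)) AS avg_trips_same_weekday,
       AVG(temperature) OVER (ORDER BY trip_date
                              ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS rolling_7_day_temp
FROM daily_summary
ORDER BY trip_date;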

One of the data challenge sample questions asks whether rentals after 2 AM are more likely to come from the under 25 population.  I extended this question to ask whether usage varies by gender.  I took liberties with the coloring for effect, but I would mute these tones in a dashboard.  I also incorporated another axis (trip distance) to see whether rides are longer at certain times in the day, but since I didn’t use an average metric, the second axis isn’t very meaningful.

[Graph: male and female usage by hour]

No basic correlation study should go without a scatterplot. The r values are included, but aren't very telling. To make this graph work I had to clear out the trips that involved 0 distance (i.e., the rental and return location are the same). Because this graph also had month in the page-by, some months showed a higher r value than others. Again, I'm simply using this to get a feel for the data and get some general answers to high-level questions.

[Graph: scatterplot of male vs. female usage, with correlation]
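
For reference, the zero-distance filter is nothing more than excluding trips whose start and end station match, which in SQL terms (using the column names from the trips table) looks like:

-- drop round trips that were rented and returned at the same rack
SELECT *
FROM   trips t
WHERE  t.start_station_id <> t.end_station_id;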

Based on some feedback from a colleague, I tried to label the axes. I attempted to tie the color of each axis to its metric, and this is what I got. To me this graph is telling in that it appears to suggest that as the bike rental program became part of the city culture, people started taking longer rides.

[Graph: dual-axis trend]

So, it’s a start.  With some basic profiling underway I am starting to compile a list of some high level questions that might be telling or informative about the data.  Station analysis and trip patterns are a good place to go with the data, and some of the questions that I’ve started to formulate go along these lines:

  • Which station sees the most usage?
  • What percent of trips end and start in the same place?
  • What bikes see the most usage, and of them, what side of the river do they spend the most time on?
  • How has usage changed this year versus last year?  Can the data be used to illustrate the growth of the program in some neighborhoods versus others?

These questions have their parallels in the business community, and represent the typical deep dive that a business analyst would do. The next layer of analysis is to take this data set and make predictions against the data. For example, looking at the data at the end of July, how closely could I predict usage at the various stations using the historical data, especially the trends elicited from July 2011? Given a time of day, could I predict what percentage of bikes rented from station x will wind up at station y? If overcrowding at a station is a problem, and people can't drop their bikes off because the racks are full, do I need to transport bikes away from certain stations at a certain time of day?
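
As one example of a simple baseline for the last of those prediction questions, the historical share of departures from each station that end at each other station, by hour of day, could be sketched in SQL like this (column names follow the trips table used above):

-- percentage of trips leaving a station in a given hour that end at each station
SELECT t.start_station_id,
       t.end_station_id,
       EXTRACT(HOUR FROM t.start_date_desc) AS hour_of_day,
       COUNT(*) AS trips,
       COUNT(*) * 100.0
         / SUM(COUNT(*)) OVER (PARTITION BY t.start_station_id,
                                            EXTRACT(HOUR FROM t.start_date_desc)) AS pct_of_departures
FROM trips t
GROUP BY t.start_station_id,
         t.end_station_id,
         EXTRACT(HOUR FROM t.start_date_desc);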


Hubway Data Challenge Part 1

I was interested to see that a data set had been posted and that a competition had been started to visualize data collected from the Hubway bike system. For the uninitiated, Hubway is a bike rental system with racks scattered across Boston. Users pay with a credit card or have a subscription to use the bikes. When I was working in downtown Boston I would see these bikes all over the place, especially along the Esplanade and going down Boylston Street.

The data set itself is a pair of Excel files — stations and trips — totaling about 10 MB zipped. While quite simple, the data by itself represents an opportunity to do some interesting analysis based on the lat/long pairs associated with the start and end points of the bike rental system. The date pairs also lend themselves to time-series analysis. With date as a hinge, other data can be incorporated; in my case I added a comprehensive Date Dim table that extends the data into a time hierarchy (weeks, months, years), and I pulled weather data from noaa.gov to give myself an opportunity to do some basic correlations.
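
A minimal sketch of how that looks once the pieces are joined, assuming a date_dim table keyed on calendar date and a noaa_daily table of weather observations (both names, and their columns, are assumptions):

-- trips per week alongside the average daily high from the NOAA feed
SELECT d.week_of_year,
       COUNT(t.id)       AS trips,
       AVG(w.max_temp_f) AS avg_high_temp
FROM trips t
JOIN date_dim d        ON CAST(t.start_date_desc AS DATE) = d.calendar_date
LEFT JOIN noaa_daily w ON d.calendar_date = w.observation_date
GROUP BY d.week_of_year
ORDER BY d.week_of_year;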

Some of the challenges that I faced in working with this data in MicroStrategy included:

  1. Modeling the same table (stations) for the start and end points
  2. Calculating distance from a lat/long pair
  3. Using a web service to automate the elevation of the stations
  4. Plotting the lat/long coordinates on a map

I have yet to overcome items 3 & 4, but the first two were interesting problems.  The ultimate goal of this exercise is to produce a meaningful visualization, and since MicroStrategy 9.3 was just released, this data set provides an opportunity to test some of the network diagrams, mapping widgets, and Visual Insight capabilities.

For problem #1, the solution in MicroStrategy is to use table aliases.  Basically, from a modeling standpoint aliases mean that architects do not need to create views to replicate a table.

The table alias within MicroStrategy tells the SQL generation engine that the same table can be used twice.

To create a table alias, go to the Schema → Tables folder and right-click on a table that has already been modeled. Select "Create Table Alias" and a new copy of the table will appear. For my purposes I created two stations tables, one that references the start station and one for the end. Within the attributes that reference the table, make sure that the mapping is set to manual; otherwise the automatic mapping will try to point to both the original and the aliased table.

The resulting SQL for a report that wants to join Start Station and End Station would look something like this:

select a11.end_station_id  id,
a11.start_station_id  id0,
count(distinct a11.id)  WJXBFS1
from trips a11
join stations a12
on (a11.end_station_id = a12.id)
join stations a13
on (a11.start_station_id = a13.id)
group by a11.end_station_id,
a11.start_station_id

By aliasing the stations table, the engine is forced to join to it twice, but the overhead on the database side is minimal. From this we can start to glean some basic information from the data. The South Station / North Station (TD Garden) ride is the most commonly used, and this is explained by the fact that there is no good way to get from South Station to North Station or vice versa! Taking a bike probably constitutes a roughly seven-minute ride. I would speculate that these rides happen during rush hour, but I'll table that speculation for future analysis.

The next challenge was to calculate distances between stations.  I found a good site that showed how to do this in Excel, and fortunately transposing Excel syntax into MicroStrategy is straightforward since the functions are named exactly the same.  Here is what the calculation looks like in Excel:

and here is what it looks like in MicroStrategy:

With this calculation in place, the previous report could be enhanced to include distance, and then by combining the distance with the trips you could derive a total mileage value.
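
A minimal SQL sketch of that enhanced report, using the spherical law of cosines for the great-circle distance (the lat and lng column names on the stations table are assumptions):

-- trips and total mileage per start/end station pair; 3959 is the Earth's
-- radius in miles, and LEAST(1.0, ...) keeps rounding from pushing the ACOS
-- argument above 1 when the start and end station are the same rack
SELECT s.id     AS start_station_id,
       e.id     AS end_station_id,
       COUNT(*) AS trips,
       COUNT(*) * 3959 * ACOS(LEAST(1.0,
           SIN(RADIANS(s.lat)) * SIN(RADIANS(e.lat)) +
           COS(RADIANS(s.lat)) * COS(RADIANS(e.lat)) * COS(RADIANS(e.lng - s.lng))
       )) AS total_miles
FROM trips t
JOIN stations s ON t.start_station_id = s.id
JOIN stations e ON t.end_station_id   = e.id
GROUP BY s.id, e.id, s.lat, s.lng, e.lat, e.lng;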

The downside is that unless the start and end stations are different, the total distance will be 0, as is the case with the Boston Public Library bike rack.

So, this is how I started the data analysis, and I have continued to build out other attributes to fully form the data and make it more interesting.  The next steps are to start to visualize the data.  I started to play with this, and with the availability of Cloud Personal, I threw up some data slices and created a first pass of a visualization.

In the coming weeks some submissions should start to come online. I have been more focused on pulling outside data together to add flavor and color to the raw data set, and a colleague suggested I fold other events, like Red Sox games or holidays, into the analysis. Any other suggestions?


Correlation, Causation, and … lag()

I gave a presentation back in June at a MicroStrategy Meetup and I used a simple data set to illustrate that even one dimension and three data points can yield interesting results.  My data included the following three things:

  • Daily closing price of gold
  • Daily closing price of oil
  • Daily closing value of the VIX (fear index)
[Graph: oil, gold, and VIX over time (one dimension, three data points)]

The most recent four years of data, when visualized, look like this:

[Graph: the three values over the most recent four years]

The complete data set:

[Graph: gold, oil, and VIX back to 1983]

My thesis in working with these three data points was that somewhere in this data we could find evidence of correlation.  So, I went about the task of building out some reports that correlated some of the combinations of the data and I plotted them out:

The VIX and Oil saw a high correlation swing between 2007 and 2009, but the overall trend leading up to 2007 was inching upwards to 1.  The sudden drop in crude prices in 2008/2009 could partially explain the easing of the VIX since the financial crisis.

When I plotted the VIX against gold, I saw more dramatic correlation swings year over year. I found these variations more interesting than the oil and fear relationship because I had assumed that these two would stay generally correlated above 0. To see the VIX and gold dip so low in 2008 suggests that one wasn't keeping up with the other.

In the last step I plotted oil and gold together, and found similarly precipitous changes year over year. With the first two correlations there at least seems to be a pattern, but with this last one, not so much. What I was looking for was some consistency in the correlation (staying above or below 0), but I did not observe it in this data.

Rather than looking for the obvious perfect match between these variables over time, my next thought was to insert a lag into the data and see whether some sort of offset would smooth out the relationships. The thinking here is that socionomic forces exist behind these data points, but the shifts are either reactive or proactive. For example, it is possible that the fear index responds to changes in oil prices, or that the daily price of gold reflects speculation that the economy is worsening and that the only good place to invest assets is in a common precious metal. To accomplish this I created a series of objects in MicroStrategy that allowed me to quickly change the lag parameter and test my assumption.

[Screenshot: Metric Editor, using an embedded lag function]

My correlation metric is defined as Correlation([Gold Close (lag n)], [Fear Index Close (lag n)]) {Year}

Gold Close (lag n) is defined as: Lag([Gold Close], ?[Lag Value Gold (Number)], 0), where the "?" represents a prompt value.
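
Outside of MicroStrategy, the same experiment can be sketched in SQL with window functions, assuming a hypothetical daily_prices table holding trade_date, gold_close, and vix_close columns:

-- shift the VIX by 30 trading days, then correlate it with gold year by year
SELECT EXTRACT(YEAR FROM trade_date) AS yr,
       CORR(gold_close, vix_lag_30)  AS gold_vix_correlation
FROM (
    SELECT trade_date,
           gold_close,
           LAG(vix_close, 30) OVER (ORDER BY trade_date) AS vix_lag_30
    FROM daily_prices
) lagged
GROUP BY EXTRACT(YEAR FROM trade_date)
ORDER BY yr;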

From this I could run a series of quick tests, and using the standard deviation of the results I could start to see that embedding a negative lag (-30, -60, then -90) into the data started to lower the dispersion of values.

[Graph: standard deviation of the correlations with a -90 lag]

I could certainly do more with this data, and if I was desperate to find that perfect leading indicator that could predict where commodities or the S&P were headed I suppose I was start by extending this and looking for variation of the data that yielded the lowest possible standard deviation in correlation coefficients.  Beyond the sheer number of possibilities this small data set affords me, one could easily start to add more variables into the mix — the DOW closing price, pork belly futures, or the foreign currency exchange rates.