
It’s a Witch Hunt @ VizClub!

Quick notes to recap:

  • We’ve finally moved to a pub (ode to true VizClub fashion). Next time we’ll bring a pocket projector so we can ALL see the screen.
  • Our original dataset on weed sales/activity in Denver didn’t pan out (our meeting was on April 20th)
  • We deferred to Sophie’s suggestion of a dataset about the witches of Scotland

Gotta be honest, Sophie really delivered on this dataset. Despite the strange and obscure topic, the dataset is robust. Not only does it span almost 300 years, it also comes with 72 pages of documentation/definitions and a schema map. It even has the lat/longs of where they were accused and case notes.

Really cool right? Even the case notes are old-timey and creepy!

As a side note, it doesn’t seem like Tableau recognizes Scotland’s counties at all. At first Andy suggested this may be an old recorded name… but then he was corrected that it’s still named that today. #everythingisoldhere

Here’s where we ended up:

Surprisingly, there are male witches?

The chart just above this shows the % of total split between male and female for every year in the dataset. I’m surprised that even in the “low season” of witch hunts, there can still be a high proportion of men persecuted.

Do witches run in families?

Yes. The dataset includes first and last names as separate fields, as well as the year each person was accused. We created a word cloud and ended up concluding that there may have been three generations of Thomsons accused of being witches.

Corroborating the data with the outside world

Mara was on her phone furiously doing research, asking us to look up tidbits of info on Scottish witches, and found Isobel Gowdie. I always find it to be a surreal experience when a real life story is reflected in the data. However, the team ended up getting sidetracked and started entering random names into the search filter.

Sizing for Mobile

You might have noticed that there are lots of interesting details I’ve mentioned here that didn’t make it into the final viz. While not an explicit talking point, I think we all agreed that we’re reading and looking at our vizzes on our phones, hence the mobile size. Maybe next time we’ll be able to add things in a more scroll-y style à la Matt Francis, but it was 9:30 (time to wrap up!). I expect we’ll be doing more of these in the future.

To Wrap Up

The night rolled out in a completely unexpected way – from analyzing weed sales in Colorado, to then exploring the world of witches in early modern Scotland. You really never know where you’ll end up at Viz Club.

Oo and thanks to the Pauls for being good sports. That was all Andy.

Thatcher Thatcher Milk Snatcher: Make your Data Stories More Engaging with this Amazing Tableau Tip!

Just want to say upfront that VizClub March Edition was a huge improvement on the last session.

  • #ThrowbackThursday was on (that was me)
  • The dataset required less than 3.5 hours of cleanup (thank you JMac)
  • Beer was on tap (which I got in on since I realized Coffee and Pizza just didn’t make sense)

Big shout out to Sophie who was kind enough to provide the pizzas despite being sick.

Sophie also made another huge contribution to our viz session – being the first to introduce unicode characters as a navigation element to create a fluid user experience. I first wrote about unicode characters in vizzes with the Data School so I’m always finding new uses for them.  It’s so easy to do, I’ll be stealing it for my own future vizzes!

Here’s a gif of the viz moving through the storypoints with unicode characters.

Read below on how to do this for your own vizzes!

Step 1: Copy and Paste a Large Circle Unicode character  (like this one!) into your storypoints template.

Step 2: Here’s the magic. Set the font so the unicode character will stand out.

Here I’ve set it to size 28. Go into Story->Format and set the shading in Navigator to “No Fill”. Then squish the titles on the top and bottom so they line up perfectly like a series of dots… just like navigating through a slick app.

Voila! Next-level design in one minute, right?

Getting back to Viz Club, in terms of exploring the dataset, we found a couple of interesting insights.

  • Overall consumption of sugar as a product in itself has gone down over the years (probably displaced by other sugary products)
  • Vegetable consumption has also gone down 🙁

Then somehow we got to the introduction of our story when we were looking at the consumption of milk in the early 80s. Wondering why there was a spike in ’82, we realized that Margaret Thatcher’s education cuts had boosted the consumer market for milk. For me and Andy, who are from North America, this was something we never knew about UK politics!

Couldn’t believe it actually shows up in a Google search

Another interesting insight was looking at how potato consumption has changed in the UK.

@davidmpires took this picture of the area charts of potatoes. I think the dip was actually due to a bad crop year, rather than a sudden decreased appetite.

You might wonder why there is a dip in the middle of that horrendous area chart. Luckily we were curious too, and found out there was a heatwave in 1976 and a potato shortage in the UK.

More importantly, what did people eat if not potatoes? Turns out, they SWITCHED to a new kind of potato – notably instant potatoes.

By the way, if you click on this image, it will take you to the viz where the ad will start playing. This was intentional: we’ve already established it was #throwbackthursday, and automatic music on websites was THE hip thing to do in the 2000s #RIPmyspace.

Apparently the people at Cadbury’s were smart cookies and did this very clever(?) ad for Smash Mash Potatoes. I’m told everyone and their grandmother has seen this but here it is if you haven’t. Skyrocketed the sales of instant potatoes I’m sure.

We capped off the night with another beer as I speed-formatted the viz. Overall, it was a fantastic night of pizza, beer and learning about politics and robot aliens! Can’t wait for the next one 😀



How to Scale Tableau User Filters in 5 Steps

I’ve been working on a sales dashboard where I am visualizing a set of transactional data. One of the requirements was for Tableau Server to recognize the user and their territory upon login. Easy peasy with Tableau User Filters, right?

I want to show you how you can do this easily if you have the following complications:

  • Multiple sales reps are assigned to one region – joining these two datasets will multiply your records (and over-report sales dramatically!).
  • You have a large number of users (I have over 100 reps) and no time to update the filter manually as the team changes.
  • You have lots of sheets in your workbook and don’t want to spend the time adding the user filter set to every sheet.
  • You dread Tableau’s inevitable “*” error on blends for your joining table.

Big thanks to the Great Chris Love for this tip!

Step 1: Find Your List of Users and Pivot your Data

You’ll need 2 pieces of data – assigned Tableau Server usernames and their respective regions. I’ve mocked up a dummy dataset to help illustrate with our favorite retail store – Superstore. Usually the USERNAME() function will reflect your organization’s system username if your Tableau Server uses Active Directory to authenticate your users.

As you can see, we have 4 sales regions at Superstore, and 2 sales reps look after each region.

Quick view of my raw dataset

Tableau Usernames for Tableau User Filters

We need to pivot the data so it ends up as a lookup table, concatenating usernames to sit in one cell. I did this bit in Alteryx.
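For readers without Alteryx, here’s a rough sketch of what that pivot does, in plain Python. The regions and usernames below are made up for illustration, in the spirit of the Superstore mock-up:

```python
from collections import defaultdict

# Hypothetical rep-to-region assignments, mirroring the Superstore mock-up
reps = [
    ("East", "asmith"),
    ("East", "bjones"),
    ("West", "cdoe"),
    ("West", "dlee"),
]

# Gather every region's usernames...
grouped = defaultdict(list)
for region, username in reps:
    grouped[region].append(username)

# ...then concatenate them into a single cell, so the lookup
# table ends up with one row per region
lookup_table = {region: ", ".join(users) for region, users in grouped.items()}
print(lookup_table)  # {'East': 'asmith, bjones', 'West': 'cdoe, dlee'}
```

The point of the reshape is that the join back to the transactional data is now one-to-one on region, so records don’t multiply.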

Alteryx Workflow for Tableau User Filters – Here’s what my workflow looked like with annotations.

BONUS TIP! Did you know you could annotate your tools instead of using the comments box? Much easier when you’re moving around tools in your workflow.

Step 2: Join your lookup table against your transactional data and output your data into Tableau!

Step 3: Connect your output (.tde/.csv) into Tableau and login to your Tableau Server

Connect your data source and sign into Tableau Server.

Step 4: Create the following calculation:

Since our usernames are concatenated into one cell for every territory, this creates the association between one territory and many usernames upon a user’s sign-on.
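The calculation itself (shown in the screenshot) is along these lines – assuming the concatenated field from the lookup table is called [Usernames]:

```
CONTAINS([Usernames], USERNAME())
```

CONTAINS returns TRUE when the signed-in user’s server username appears anywhere in the territory’s concatenated cell. One caveat of substring matching: make sure no username is a substring of another (e.g. “smith” and “jsmith”), or a rep could match the wrong territory.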

Step 5: Create a data source filter using the calculation above and Select “True”.
The calculation will look up the username and will filter to the territory specified in your lookup table (now joined against the transaction data)

That was the last step!

You can use the navigation at the bottom to change user views just as you would with a set recommended by Tableau. Hopefully you’ll use this clever method of data pivoting to bring a better UI experience to your Tableau Server users.

If you’d like to download this workbook to see the other calculations and visualisations I have used, you can access the workbook here. Note that the filter will not work with your Tableau Server, so I have removed it to avoid confusion.

XML Parsing for Beginners with Alteryx

Recently I’ve come across using XML parsing with web scraping and combining many XML tables. Because of its syntax, I wanted to outline some basic rules about XML and how to work with XML files within Alteryx.

XML files are different from flat columnar tables (the ones we’re used to!) because instead of headers and rows, the data is nested within tags, where the field headers are identified for every record with <> brackets. Below is the example from Reading XML by Alteryx where the same data is presented once as a columnar table and once as XML. Because of XML’s nested nature, it takes a bit of digging through the original data to figure out which tags to target and parse.

Example 1: Parsing from 1 XML file

Let’s go through one recent example I was working on for a client – parsing clinical trial data from XML files. Let’s say I wanted 2 pieces of information – the name of the trial and when it started. I like to take a look at the XML file to see where the information is located so I can figure out the best way to parse it.

I’ve opened the file in Notepad++ and highlighted the tags which show the pieces of data I am looking for. Note that the root element in this file is “clinical_study”, and it just so happens that <brief_title> and <start_date> are child values.

What happens when I select this file in Alteryx’s input tool?

Alteryx’s default setting is to pick up only one set of the file’s child values, so there’s a lot of information not yet coming through.

I can bring the information we are trying to parse into Alteryx by selecting “Return Root Element” and “Return Child Values”, as the information we are looking for is only one layer beneath the root element.

Simply add a Select tool for the fields we want to parse and voila!
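If you want to see the same “child values” idea outside Alteryx, here’s a sketch in plain Python. The XML below is a made-up miniature of the clinical-trial file: the tag names match the post, the values are invented:

```python
import xml.etree.ElementTree as ET

# A made-up miniature of the clinical-trial XML described above;
# the tag names match the post, the values are invented
xml_data = """
<clinical_study>
  <brief_title>Example Trial of Drug X</brief_title>
  <start_date>January 2015</start_date>
</clinical_study>
"""

root = ET.fromstring(xml_data)

# findtext() on the root pulls child values one level down,
# much like Alteryx's "Return Child Values" option
title = root.findtext("brief_title")
start = root.findtext("start_date")
print(title, start)  # Example Trial of Drug X January 2015
```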

But what if I want to parse data that is nested deeper within the XML file?

What if you wanted to find who sponsored the clinical trial?

The data is nested within the <agency> tags, so it’s not a child value of the root element. This is where we can use the Outer XML configuration to bring in all XML tags within the root element. From there, we can parse out the <agency> tags with the XML Parse tool.


Simply select “Return Root Element”  and “Return Outer XML” in the input tool so Alteryx can identify both the root element and its respective nested tags.

Then add the XML Parse tool and configure the tool to look for the <agency> tag within the new field “clinical_study_OuterXML”.

I’ve got the results window above with the <agency> data parsed!
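The nested case can be sketched the same way in Python (again with an invented miniature file): a recursive search pulls tags from anywhere below the root, just as the XML Parse tool does when pointed at the Outer XML field.

```python
import xml.etree.ElementTree as ET

# Invented miniature of the clinical-trial file, with <agency>
# nested two levels below the <clinical_study> root
xml_data = """
<clinical_study>
  <brief_title>Example Trial of Drug X</brief_title>
  <sponsors>
    <lead_sponsor>
      <agency>Example Pharma Ltd</agency>
    </lead_sponsor>
  </sponsors>
</clinical_study>
"""

root = ET.fromstring(xml_data)

# ".//agency" searches every descendant, not just direct children:
# this mirrors parsing the <agency> tag out of the Outer XML field
agency = root.find(".//agency").text
print(agency)  # Example Pharma Ltd
```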

XML parsing requires a bit of evaluating the dataset and identifying where your data is located. Because of its nested nature, you might have quite a few XML Parse tools in your workflow. But the same concepts carry over to web scraping with Alteryx, so learning XML and its tagging system has transferable value!

I hope this has been a clear introduction to working with XML files. Let me know if you have any questions in the comments section below.