Two Mega Trends: Big Data and the iPad … Where do they converge?

It’s no secret that Big Data is an emerging mega trend, now and into the foreseeable future. David Feinleib’s SlideShare presentation on Big Data Trends offers a concise and current summary of where things are headed in the Big Data movement.

Enter mega trend #2, the iPad. The current market share for iPads is strong and is projected to remain so through 2016, according to a recent IDC study. I can say first-hand that most executives in our network are now in the habit of bringing their iPads with them wherever they go.

So if the leaders and decision-makers are about to be consumers of Big Data (they may not know it yet), and if they are all toting their iPads to their meetings, there must be an opportunity or two for forward-looking Big Data thinkers. This post is intended to start a conversation around the question:

If Big Data is growing like mad …
And business leaders are using iPads more and more …
What’s our collective best guess as to where these two mega trends converge?

I’m sure this post will generate a decent discussion thread. To kick things off, I’ll put out my own thoughts.

There will be an increasing need to simplify the “so what” message
Tablet apps can be beautiful to look at, but they rarely succeed when they try to pack a lot of information into a small space. Designers will increasingly need to give disproportionate attention to the “so what” message when reporting Big Data results.

As Lachlan James outlined in his recent post on top Business Intelligence dashboard design best practices, intentional, effective and clear communication must be priority number one.

So if we agree with that idea, then instead of filling 90% of the reporting space with different charts and tables of results, perhaps iPad-friendly reporting will read more like newspaper headlines, with catchy titles like: “We can accurately predict 80% of our adverse hospital events based on these 5 factors” or “65% of our customer retention in the Pacific Northwest can be explained by these 3 attributes”. Underneath the headline would be the supporting detail and charts.

This presents a challenge: automating the process of taking Big Data results and explaining what they mean in plain English. Perhaps there is a whole new area of opportunity here, with some links to artificial intelligence.

People will want to play
By its light-hearted nature, the iPad lends itself to play. Not that one would expect there to be a Big Data version of Angry Birds, but the concept of playing and interacting with Big Data seems like a likely user expectation. Perhaps as leaders interact with the summarized results of Big Data efforts, they will want to do things like:

  • Evaluate “what if” scenarios, such as “What if this pattern observed in this one customer segment applied to our whole customer base?”
  • Take an observed Big Data finding and forecast it into the future (i.e. if this trend continues, what will things look like 1 year from now? – a toy sketch of this follows the list)
  • Play with different ways of visualizing complex Big Data results, using different charting tools, plotting symbols, colors, etc. (i.e. a techie version of “arts and crafts”).
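
To make that forecasting idea concrete, here is a minimal sketch of a straight-line trend extrapolation. All of the numbers, and the choice of a simple linear fit, are hypothetical illustrations rather than anything from the post:

    import numpy as np

    # Hypothetical monthly values for some Big Data metric (illustration only)
    months = np.arange(12)
    observed = np.array([102, 105, 109, 111, 116, 118, 123, 125, 130, 133, 137, 141])

    # Fit a straight-line trend and extrapolate one year past the last observation
    slope, intercept = np.polyfit(months, observed, 1)
    one_year_out = slope * (months[-1] + 12) + intercept

    print(f"Trend: roughly {slope:.1f} per month")
    print(f"If this trend continues, in a year the metric would be about {one_year_out:.0f}")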

Parts of the dashboard may be ever-changing
The nature of Big Data is that it is ever-growing and ever-evolving, which means that what is interesting and useful today might be taken as a given tomorrow. In addition, as companies use more and more external data (as opposed to just using their own internal data), it may introduce another element of variability in terms of where the Big Data stories are. So, unlike previous BI and dashboard reporting efforts (i.e. with KPIs and measures that generally don’t change that much), the reporting canvas for Big Data may be constantly changing.

Translating this to the iPad experience, a core competency in reporting Big Data results through an iPad might be “the ability to educate as you go”. Leaders and executives will constantly be exposed to new findings and new measures, and they will need help getting up to speed on what the findings mean. Conceivably, this may need to take place on the fly during the reporting stage, using popup videos or animations – a broadcast email to communicate updates won’t likely cut it any more!

There will be an increasing need to simplify and track the “doing” step
As is often the case with reporting great results, nothing really matters if there’s no “doing” step. As leaders view the Big Data results on their iPads, they will inevitably get to a point in the meeting where someone says “We should do something about that”. The process of tracking “who is acting on what” will become more important for a few reasons:

  • Many people will see the results, but it might not be clear if anyone has started taking action. Nobody wants to duplicate efforts, but at the same time nobody wants to drop the ball.
  • There will be a lot of results, and a lot of actions to take, so if the full value of the information is to be realized then it’s important for there to be a means for tracking the actions.

The reporting of Big Data in the near future may be more like the Social Media experience and the Customer Relationship Management experience, with lots of communication and interaction.

I’m sure there are many people out there who know much more on this subject, so I encourage you to weigh in, whatever your point of view is.

Tips for Executives – Researching Your Local Market for Analytical Talent

As more and more articles predict a major shortage of analytical talent, many organizations are in a rush to quickly build up their analytical team. But, in the spirit of “crawl, walk, run”, it never hurts to do some labour market research before launching your recruiting efforts. This homework will help your organization set a more realistic timeline for building your internal analytical team.

Here are some tips that executives and leaders can use to research their local labour market for Analytical Talent:

Tip 1: Learn from other organizations in your area
Each region is different in terms of the local talent pool, so it’s a good idea to learn as much as you can from other organizations in your area that already have an Analytical Team. They can share their lessons learned as well as their recruiting and retention costs, and give you a sense of what it would take to build up a team in your organization. There should be plenty that you can learn from organizations in other industries, especially when you are just starting out.

Tip 2: Get advice from experts
There are many experts that can offer you advice on building an Analytical Team. Some potential experts include:

  • Recruiters that specialize in analytical professionals. They will be able to give you a sense of the analytical talent pool in your region.
  • A college or university with a well-recognized program in applied analytics will often be able to tell you where their graduates are being hired.
  • Consultants or consulting firms that actively specialize in analytical work. As service-oriented people they will likely be more helpful than you might think. Alternatively, you could hire them to help you with your recruiting campaign.

Tip 3: Check out your competition
Try reviewing the job postings for analytical talent in your area. It’s a pretty basic idea, but it’s still worth doing. You’ll find out:

  • Which companies are hiring, and how many openings there are
  • What they are offering to new job seekers, in terms of salary and benefits
  • How they are communicating to the talent pool
  • What job titles they are using
  • What level of experience, and credentials they are looking for

For example, if you go to a job posting site like Monster as a job seeker and type in the keyword “data” and your location, you will quickly get a good sense of your local market. When I ran this search today I found over 1,000 results in San Jose, but only 62 results in Boise, Idaho.

Applying these tips can save you a lot of time, and help you increase your odds of building your Analytical Team right the first time. There are many experts out there on this subject. Please feel free to weigh in with your point of view if you have something to add.

Tips for Executives – What to do Before Building Your Analytical Team

As the concept of using analytics as a strategic advantage is gaining more and more traction, many organizations are asking the question:

  How do we get started building our Analytical Team?

In an effort to quickly catch up, some organizations make the mistake of hiring too quickly and firing too slowly. These situations can be avoided with a bit of strategizing at the leadership level. Here are some tips that executives and leaders can use to increase their chances of success:

Tip 1: Develop shared goals on why you want an Analytical Team

Most organizations that already have Analytical Teams complain that the team is juggling so many different demands that they can’t use it as much as they would like to. The teams are busy, but the question is … are they busy working on the most important things? So before even building an Analytical Team, it’s worthwhile for a leadership team to crystallize their top 3 goals for having one. Keep the list focused, because you can take it as a given that people will find new ways to use the team’s talents.

Example shared goals might be:

  • To increase long-term customer retention by better understanding their buying patterns.
  • To support the leadership team in making major decisions using evidence-based methods.
  • To increase the cost-competitiveness of the organization.

It will likely require a brainstorming session or two to figure this out, but it is incredibly important ground work if you want to build your team right the first time.

Tip 2: Under each goal, identify one or two desired outcomes

To increase the clarity of what each goal actually means, next attempt as a leadership team to identify the specific outcomes that you’d like to target. These targeted outcomes would ideally be very tangible and expressed with numbers and an expected timeline. For example, if the goal is “to increase the cost-competitiveness of the organization” then some potential desired outcomes might be:

  • To outperform the industry average in inventory holding costs by 10% within 2 years.
  • To decrease in-warranty repair costs by $1m per year.
  • To increase operational productivity by 15% in three years.
  • To decrease the cost per customer acquisition by 10% on the next product launch.

The specific desired outcomes will often reflect the leadership team’s best educated guess, but that’s ok … the figures can be firmed up later, and in the meantime they further clarify the “what” and the “why” behind building an Analytical Team. You can imagine how this stage plays a big role in determining what talents and skills you will need for your team.

Tip 3: Estimate the value of achieving these outcomes

As shown in the previous example, it’s important to convert the desired outcomes into actual dollar amounts. This helps clarify how much opportunity the team believes is on the table. It also starts to paint a picture of what it’s worth to have the right analytical team. A safe approach is to take the estimated total value per year across all three goals and assume that only 10% to 25% of it will actually be realized within the first 2 years. The resulting figure (total estimated value x 10%) will still likely be a much bigger number than you had planned to invest in building the team.
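
As a rough illustration of that arithmetic, here is a minimal sketch. The dollar figures and goal names below are hypothetical, not from the post:

    # Hypothetical annual value estimates for the three shared goals (illustration only)
    annual_value_by_goal = {
        "customer_retention": 2_000_000,
        "evidence_based_decisions": 1_500_000,
        "cost_competitiveness": 1_000_000,
    }

    total_estimated_value = sum(annual_value_by_goal.values())

    # Conservative assumption from the post: only 10% to 25% is realized in the first 2 years
    low, high = total_estimated_value * 0.10, total_estimated_value * 0.25

    print(f"Total estimated value per year: ${total_estimated_value:,.0f}")
    print(f"Conservatively realized value: ${low:,.0f} to ${high:,.0f} per year")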

By using these tips, you can gain clarity on why you want an Analytical Team, the value you expect them to bring, and the cost of the team. By doing this pre-work you can significantly increase your chances of building the right Analytical Team the first time. In a future post, I’ll share some tips on how to recruit an Analytical Team.

 
There are many experts out there on this subject. Please feel free to weigh in with your point of view if you have something to add.
 

Positive Psychology and Employees – Data people need recognition too!

The following guest article by Alexa Thompson discusses how recognizing and encouraging employees’ individual skills and talents – often termed positive psychology – can lead to happier and more productive workplaces. Thompson writes about the connection between happy and productive employees. In our analytical team we’ve learned the importance of recognizing and supporting the individual strengths of each team member. Alexa prepared this article in response to our How to Create a Culture of Evidence post and has authored several pieces for an online psychology education resource.

Until roughly the 1950s, employees’ psychological state was rarely a consideration in the workplace. Managers assumed (and many still assume, even to this day) that a reward system of promotions and paychecks would be sufficient to motivate employees. However, the reality of the human psyche has proven far more complex than can be accounted for by the conventional ‘carrot on a stick’ approach.

The late Dr. Harry Levinson – a pioneer in workplace psychology studies – argued that a psychological contract exists between employees and employers. When employees feel that their ingenuity and skill set are ignored in the workplace, it can lead to feelings of depression and thus low productivity by disgruntled workers. Research by Levinson and his contemporaries showed that company culture can have a significant impact on worker productivity, loyalty and pride.

Much of the modern thinking on positive psychology can be traced back to 1998, when Martin Seligman, president of the American Psychological Association and professor of Psychology at the University of Pennsylvania, developed a master’s program for the study of positive emotion. Over a decade of research since then has found that happiness at work can improve revenue, profitability, staff retention, customer loyalty and workplace safety, as well as increase creativity and problem-solving ability.

Studies of small groups have identified the effects of human resource management. A report by Bloom and Van Reenen for the National Bureau of Economic Research uncovered a number of psychological factors, including security and a sense of fulfillment and connection, that affect an employee’s mindset. “As firms expand in their scope both geographically and in product space, local information will become more costly to transmit so this will […] favor decentralization.” This decentralization allows information to be processed at the level where it is used, lowering the cost of communication as well as increasing productivity through rising job satisfaction. Bloom and Van Reenen state that the “delegation of responsibility goes along with more employee involvement, greater information sharing and a greater participation of lower level staff.” This in turn enhances the quality of work and employees’ alignment with their company’s goals.

Findings in another study for the American Psychological Association further corroborate the importance of positive psychology. In the report, the authors conclude that “well-being in the workplace is, in part, a function of helping employees do what is naturally right for them by freeing them to do so… – through behaviors that influence employee engagement and therefore increase the frequency of positive emotions”. In other words, an environment of altruism and goodwill is often instrumental in creating a healthy, productive workplace culture.

The data and thinking on the subject continue to evolve, as they have for the last six decades, ever since positive workplace psychology began to be studied in earnest. Nevertheless, one major theme has emerged and remained clear: paying attention to the individuality of each employee will create a more positive environment for the employee and be of great benefit to the employer, even if there is an initial investment that needs to be made.

Tips for Leaders – Driving Change with Stories and Numbers

One of the biggest challenges that leaders face when driving change is getting everyone on board with the new direction. A powerful tool that Change Leaders can use is the combination of story-telling with numbers. When done right it can create the inspiration and momentum that both makes the change initiative happen and makes it stick. Here are some tips that leaders can use to get started:

Tip 1: Brainstorm the story
Chances are that if you’re the Change Leader, you already know inherently why you want to drive the change. So the first challenge is “How do I transfer my excitement to other people?”

One of the best tools for getting people on board is the use of stories. Stories have the power to take boring, dry facts and make them personal and memorable.

A good source of relevant stories can be those that describe the frustration that people experience in the current state. These can be situations where things don’t work like they are supposed to, situations involving missed opportunities, or things that are just plain annoying. A well-crafted story will be engaging and memorable, and inspire the listener to take some action. You’ll want to keep it short, because if things go right you’ll be telling this story many times over.

A major benefit of using a story is that the listener is more likely to remember it, and if the story is engaging then the listener will be inclined to retell it to others. Ideally you will have a story that will connect with the different types of people involved in the change, from the leadership team down to the front line, but if not, you may consider developing different stories for different audiences.

Once you have a few story ideas you can start thinking about the next step … finding the numbers in the story.

Tip 2: Find the numbers in the story
Many of your listeners will be on board after hearing your compelling story, but the more cynical listeners will say “That’s a great story, but it’s just an anecdote.” So the next challenge is finding both the numbers in the story and the numbers that translate the story to the bigger picture.

When looking for numbers in the story, you may want to think about:

  • How bad was the situation? Can parts of it be measured and quantified? For example, if the story is about a situation where a customer was dissatisfied about a long wait, how long was the wait? To put it in context, how much longer was the wait in comparison to the industry standard?
  • What efforts went into fixing the bad situation? Did the bad situation result in many different people getting involved? If so, how much time did they spend? For example if the dissatisfied customer spent time with the manager, then with customer service, and then finally escalated the complaint to the leadership team, how many hours of effort went into trying to fix the situation?

When translating the numbers in the story to the big picture, you may want to think about:

  • How often do situations like this occur? Is this a one-off, or does this problem repeat itself every day? If the bad situation occurs frequently, what is it costing your organization? (A rough costing sketch follows this list.)
  • If you don’t know how often this occurs, how frequently does it need to happen for it to be important? For example, in situations involving a person’s safety, one bad occurrence might be enough for it to be important.
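
As a back-of-the-envelope illustration of rolling the story’s numbers up to the big picture, here is a minimal sketch. All of the figures – wait times, staff hours, rates and frequencies – are hypothetical:

    # Hypothetical numbers pulled from a customer-wait story (illustration only)
    wait_minutes = 45
    industry_standard_minutes = 15
    staff_hours_to_resolve = 6      # manager + customer service + leadership escalation
    loaded_hourly_rate = 60         # fully loaded cost per staff hour
    occurrences_per_year = 200      # how often situations like this happen

    cost_per_incident = staff_hours_to_resolve * loaded_hourly_rate
    annual_cost = cost_per_incident * occurrences_per_year

    print(f"The wait was {wait_minutes - industry_standard_minutes} minutes over the industry standard")
    print(f"Each incident costs roughly ${cost_per_incident:,.0f} in staff effort")
    print(f"At {occurrences_per_year} incidents per year, that is about ${annual_cost:,.0f} per year")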

Now that you have the story, and the numbers that back it up, the next step is to connect it back to the change you’re driving.

Tip 3: Make the change the hero of the story
In the best stories the main character faces a challenge that seems impossible, and then somehow figures out how to overcome that challenge. The hero can be the person who came up with the bright new idea, or even better, can be the improvement idea itself.

As the Change Leader, you will want to find the connection between your change initiative and how the hero of the story overcame their challenge. For example, if your change initiative is about reducing wait times for customers, and if your change initiative involves a new screening process to identify customers with complex requirements, then the hero of the story can be the bright team member who thought of the idea, and the manager who was willing to try it out to see if it would work.

Tip 4: Performance manage with the story
You can take your story-telling even further by linking it to your performance management. It can be as simple as tracking the key numbers in your story on an on-going basis, and setting performance targets around them. Tracking tools can range from a good old fashioned white board to a fully automated electronic dashboard – the main thing is to measure what’s important, and to have the discipline to stick with it.

As you review the performance measures with your team, take every opportunity to refer to the characters of the story, and the situations that they went through. This will remind the people involved in the change why this is important, and will also help get new members of the team on board as they hear the story for the first time.

If you’re a numbers person, you might not have much experience with telling stories. If so, a great resource on storytelling is Peter Guber’s “Tell to Win”. His book describes the important components of any memorable story.

Hopefully these tips will help Change Leaders use the powerful combination of story-telling and numbers to drive change. There are many experts out there that I’m sure will have more to add. Please feel free to weigh in with your point of view.

Tips for Change Leaders – How to Show Your Impact

We work with a lot of leaders who are responsible for driving change. A common question that they ask us is “How do I show the impact of what we’re doing?” Of course, they have their standard measures (e.g. improved outcomes, increased cost-efficiency, reduced delays), but the following are some of the tricky scenarios that they share with us:

  • “Our numbers don’t show how well we’re doing … what do I do?”
  • “It will take a while before we start seeing an impact, but I need to show results now!”
  • “The team is really working better now, but we’re still not hitting our targets. How do I prove that it’s worthwhile to keep going?”

Sometimes projects involving change don’t get the support they need to realize their full potential, but it doesn’t always have to be that way. Here are some tips that Change Leaders can use to set the odds in their favor:

Tip 1: Begin with the end in mind
In addition to being one of my favorite Stephen Covey habits, beginning with the end in mind is an incredibly practical tool for successful projects. When leaders are driving change this concept applies equally well. Some examples of applying this to change initiatives include:

  • Getting crystal clear on how things will be better once your change initiative is complete. Think about the conversations you will have in that future state, such as, “We’re way better at retaining our customers than we were in the past”. Then think about the numbers that you’d like to say to back it up, such as “We’ve decreased our customer churn rate by 60%”. In order to do these comparisons in the future you will need a baseline reading of your current performance. This thought exercise can be an easy way of identifying the performance measures that will be essential to show an impact.
  • If you’ve led a change initiative previously, you probably have learned that things rarely go as well as planned. So, it’s important to set realistic expectations on when you will hit your performance targets. The rule of “under-promise and over-deliver” comes into play here.
  • Many seasoned change leaders also know that there will be some periods in the initiative where the efforts are high but the outcomes are low to non-existent. It’s important to think about the milestones along the way, or the interim performance measures, that can show that you’re making progress in the right direction, and that the initiative should keep going.
  • Where possible, choose performance measures that you have direct influence on. The last thing you want when leading a change initiative is being evaluated on a performance measure that you’re not able to directly influence.

Tip 2: Set yourself up for success
It’s fairly common for change initiatives to generate a lot of excitement, and a lot of positive momentum where the people involved “just know” that they are making a positive impact. But at some point you do have to prove it. Some considerations to set yourself up for success include:

  • Track your performance along the way. Try to avoid what many change leaders do, which is, leave the performance evaluation to the very end. By tracking performance along the way, both you and the team involved can keep your eyes on the numbers that matter, and more importantly, correct the course if things aren’t going in the right direction.
  • Plan for achievable interim wins. It’s easier for a change initiative to be supported if it’s showing incremental progress towards the goal. It’s harder to stay the course when it’s a situation of “just trust me … in 3 years this will all work great”. Give yourself and your change initiative some achievable wins along the way to the finish line.
  • Make sure your numbers tell the full story. If the numbers aren’t trending in the right direction but you know that the change initiative is generating positive outcomes, then it may be time to rethink your metrics. Try to be as creative as possible in thinking through how that benefit can be quantified. Stakeholder surveys can often help round out the full impact of the change. Try to avoid having the performance of the change initiative be based strictly on financial factors alone, or solely on productivity measures. There are costs to “softer” considerations; they are just harder to quantify.

Tip 3: Get some help from a data friend

Not everybody is good with numbers, performance measures, or target setting. If this is you, then do your conceptual thinking of the performance measures and then lean on someone who is good with spreadsheets, data, and/or basic statistics. They will be able to coach you on how you can set up your measures so that a before and after comparison is valid and meaningful. They may even help you set up a tracking spreadsheet if you buy them a coffee!

Hopefully these steps will give change leaders some actionable tools they can use to make sure they can show the impact of their change initiative. There are many experts out there that I’m sure will have more to add. Please feel free to weigh in with your point of view.

Tips for Executives – How to Create a Culture of Evidence

We’re often asked, “How do we create a Culture of Evidence?” Most leaders know that they should be more evidence-based in how they work, but don’t know how to go about doing it.

We’ve all heard the phrase “Culture eats strategy for breakfast” and anyone who’s attempted to drive change in a complex organization knows how true that statement can be. And, many seasoned leaders know that culture change doesn’t happen overnight, but here are some tips that you can use to get started.

Tip 1: Paint a picture of “What a Culture of Evidence looks like”
If you want to make meaningful progress towards creating a culture of evidence, there’s no better place to start than envisioning your future state. Things to consider include:

  • How will life be better? For you, your team and for the company?
  • What opportunities will you be able to access?
  • What risks will you be able to avoid?
  • What decisions will be smarter?
  • What time will be saved?

If you can create a compelling vision of your organization in the future that thrives in a Culture of Evidence, then you can use this to win supporters.

Tip 2: Set the standard for “What counts as evidence?”
In the spirit of “crawl, walk, run”, getting started with using evidence doesn’t have to begin with hiring a team of scientists, researchers and lawyers. To begin with it may be as simple as using data to support your decision-making, carrying out basic research, or using spreadsheets to do “what if” analysis. Most leaders do this already, but many others still rely on their intuition to make their decisions.

The following is an illustrative example of “what counts as evidence?”:

  • A declarative statement of your position such as “I believe that we should launch a social media awareness campaign for our red widgets”
  • Some form of objective proof that shows how you formed your position, such as “According to our market data 85% of our target customers have never heard of our red widgets, and 57% of them use social media. The campaign would be cost effective even if it only generated a 5% increase in our market share.” (A rough break-even sketch of that last claim follows this list.)
  • A disclosure of what you don’t know, such as “Admittedly our market data is one year old, so we’re assuming that the patterns still hold.”
  • An action statement, such as “I’d like to update our market data but the delays and costs outweigh the risk of missing an opportunity … I recommend that we launch the campaign and track performance.”
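
To show the kind of arithmetic that could sit behind the “cost effective even at a 5% increase in market share” claim, here is a minimal sketch. The campaign cost, market size, margin, and the reading of “5%” as five percentage points of share are all hypothetical assumptions, not figures from the post:

    # Hypothetical figures behind the red widgets example (illustration only)
    campaign_cost = 150_000
    annual_market_value = 20_000_000   # total annual value of the red widget market
    gross_margin = 0.30
    share_increase = 0.05              # reading "a 5% increase" as five percentage points of share

    incremental_revenue = annual_market_value * share_increase
    incremental_profit = incremental_revenue * gross_margin

    print(f"Incremental profit at a 5% share increase: ${incremental_profit:,.0f}")
    print(f"Campaign cost: ${campaign_cost:,.0f}")
    print("Cost effective" if incremental_profit > campaign_cost else "Not cost effective")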

The ultimate goal of evidence is that it holds up to the review process, meaning that another leader could review the evidence and arrive at the same conclusions. Along those lines, “what counts as evidence?” could be just that … an objective analysis that has been peer reviewed.

Tip 3: Put the tools in place
To set your team up for success, you will want to make sure that the basic tools are available for evidence-based thinking. Some questions to consider include:

  • Are the right investments being made to collect the right data?
  • Does your team have access to the data they need? Is the data being collected at the source, but it’s not being stored in the data warehouse? Or is the data there, but the privacy levels are too restrictive?
  • Do they have the skills for working with the data, or alternatively, is the right information available in insightful reports or visual dashboards?
  • Do they have the right technical and human resources to perform deeper analyses in response to important business questions that arise?

Tip 4: Lead by example
If you want to convince your team and your peers that you are fully behind this idea of a Culture of Evidence, then you’ll need to walk the talk. This will require effort at the beginning, but after a while it will become just “the way things are done around here”. Leading by example can include shifting your own language from “I think this is what we should do …” into “The evidence tells me that this is what we should do …”

It can also include making a concerted effort to not do things the old way because “that’s the way we’ve always done it” but instead doing things in ways that are proven to generate the right outcomes. This relates to everyday decision-making and operations, as well as longer-term strategy and planning.

Tip 5: Reward the adopters
It is often said that “you get what you reward”. This is an easy concept to apply to building a Culture of Evidence. For example you can reward your team for using evidence in situations like:

  • Decision-making on special projects: Projects whose proposals have supporting evidence are often approved, whereas other projects often aren’t.
  • Decision-making on budget: Budget increases (or exemptions from budget cuts) are generally provided to those departments that can prove that they need it, whereas departments that can’t prove their value miss out.
  • Decision-making on promotions: Team members that demonstrate the effective use of evidence are generally promoted to higher positions, whereas other team members aren’t.

By taking this approach it won’t take long for people in your organization to learn that the way to win is by embracing an evidence-based approach. Team members will either adopt the new direction or self-select themselves out of your organization. Over time this will increase the momentum of the culture change, and gradually you will find that your organization attracts talent that values a Culture of Evidence.

Tips for Executives – How to Get the Data You Need

One of the most common complaints that we hear from leaders and executives is that they have “too much data” and “not enough information”. Some examples of what they mean by “too much data” include:

  • Reports that consist of pages and pages of numbers
  • Tables of figures with no overall summary number
  • Charts that are cluttered and confusing
  • Analyses that show a lot of numbers but no “so what” message

It doesn’t have to be that way. Here are a few tips that executives can use to get the data they need:

Tip 1: Ask yourself “What information would help me be more effective?”
It may sound selfish, but you should ask yourself “What information would help me be more effective in my job?” This might be information that helps you save your own time, make better decisions, or seize big opportunities.

Another way to approach this question is to review the data that you already have available and ask yourself “What isn’t this telling me?” or “Why is this not useful to me?”

Based on this thought process, prepare a simple table with two columns. In the first column include a description of what you want, and in the second column identify why you want it. Then choose your top 3 to 5 items on the list. Now you’re ready to start the next step – following up with your Data Team and/or your Business Intelligence people to have a first conversation about your top-ranked items.

Tip 2: When people say data isn’t available, use the “5 Whys”
Many data people have difficulty seeing the world beyond the standard data that they use every day. So, when you meet with them and tell them about the data that you need, chances are that they will reply by saying “that just isn’t available”.

When it comes to data – almost anything is available – it’s just a matter of how much you’re willing to fight to get what you need.

The “5 Whys” is a simple process of getting to the root of an issue. When your data people tell you that getting the data you need is impossible, ask “why”. They will give you a list of reasons such as “it’s not in the data warehouse”, or “we don’t measure that”, or “the system doesn’t allow that type of reporting”. Pick any of the reasons, and then ask “why” again, which will generate a new list of reasons. Continue this until you’ve reached the root of the issue (hopefully in 5 or fewer “whys”). The root issue is often one or more of the following:

  • Nobody thought to ask for this before
  • At some point in the past, somebody decided that it was too hard to collect the data
  • The people running the analysis and reporting are limiting themselves based on the capabilities of their reporting tools
  • Nobody has thought of taking a prospective data collection approach, and/or nobody has thought of doing a sampling approach (to reduce data collection costs)

Through a few meetings, you now should have the real reasons why you don’t currently have the information you need. You may even have a sense of how much it would cost.

Tip 3: Estimate the cost of not having the information you need
The last step is where you can make your convincing argument. For each of your top-ranked ideas, you can think about what it’s costing you to not have access to that information.

Does it translate to lost productivity? Lost time? Missed opportunities? Lost revenue? Customer loyalty? Employee turn-over? If so, then you can translate these consequences into real, tangible costs. This isn’t an exercise in high-precision activity-based costing – it’s about getting the cost estimates roughly right.

These figures give you an idea of how much your organization could potentially invest in better data and reporting. If you’re business-minded, you could work out the actual investment amounts that would still generate a positive return on investment.
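
As a rough sketch of that kind of calculation, here is a minimal example. All of the inputs – lost hours, rates, missed opportunities and the proposed investment – are hypothetical placeholders:

    # Hypothetical costs of not having the information (illustration only)
    hours_lost_per_week = 4                 # time spent hunting for or reconciling data
    loaded_hourly_rate = 90                 # fully loaded cost per leadership hour
    missed_opportunities_per_year = 50_000  # e.g. one missed campaign or renewal

    annual_cost = hours_lost_per_week * 52 * loaded_hourly_rate + missed_opportunities_per_year

    proposed_investment = 40_000            # e.g. better reporting or a new dashboard
    roi = (annual_cost - proposed_investment) / proposed_investment

    print(f"Rough annual cost of not having the information: ${annual_cost:,.0f}")
    print(f"Return on a ${proposed_investment:,.0f} investment: {roi:.0%}")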

Armed with this analysis, now you’re in a position to convince others what this information is worth. Which brings us to our last step.

Tip 4: Gain the support of the leadership team
Chances are that the information that will help you be more effective in your role will also be useful to others on the leadership team and throughout your organization. If you can gain the support of the rest of the leadership team, then you can increase the chances of getting what you want.

Each team dynamic is different, but a one-on-one approach often works well. These can be quick conversations with each leader with a real focus on “what’s in it for them”. You may be surprised with how many of your peers are equally frustrated by the lack of good information.

With the support of the team, the cost of not having the information, and some return on investment estimates, you’ll be in a strong position to get the information you need to be successful.

These are just a few tips, but I’m sure there are many leaders out there who have many more great ideas and experiences. If you have suggestions, or alternate points of view, please weigh in.

 

Note: What is a Data Team?
When we refer to “Data Teams” it’s a catch-all for groups of technical, statistical, and subject-matter domain experts that are involved in providing information to support their organization. These teams are sometimes called “Business Intelligence”, “Decision Support”, or “Information Management”, but they can also be internal consultants such as “Operations Analysts”, “Strategic Information” or “Research”. Many of these concepts apply equally to teams of Data Scientists.


Tips for Data Teams – The Consistency Check

Have you ever delivered an analysis, only to hear from your client that “these numbers can’t be right”? It’s hard to convince someone that your results are credible when they don’t even pass the first 5 seconds of review. As much as we may not want to admit it, sometimes the numbers are indeed wrong, so how do we keep these situations from happening? One type of check that a Data Team can adopt is the “Consistency Check”. Here are some questions that you can ask yourself when doing a consistency check:

Question 1) Are the numbers consistent with themselves?
When building complicated analyses, different sections of the analysis can fall “out of sync” with each other if they are not all updated in the same way. When this happens it can produce inconsistent summary results (i.e. the cover page reports 255 conversions per hour, but the supporting details on other pages show 237 conversions per hour). Sometimes we place too much faith in our reporting tools and assume that they will report exactly as intended. In other situations it’s just a matter of being too close to the work. After a while the numbers are burned into your short-term memory and you lose your ability to critically review them with an objective eye. Suggested work-arounds include:

  • Have another member of your team do a consistency check on the results, preferably someone who hasn’t been involved in the work.
  • Take an old school approach. Print out the results, and use different colored highlighters for each type of metric. Highlight the summary numbers that represent the same result, and confirm that they are indeed consistent. Continue until you’ve highlighted all summary numbers.
  • Take another old school approach. Get your calculator out or use a separate spreadsheet, and confirm that you can replicate the summary numbers just based on the results that are being presented (a small code sketch of this check follows the list). You may be surprised with how many of your clients are doing this with your results already.
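
For teams that work in Python, here is a minimal pandas sketch of that replication check. The column names, detail rows and the 255-versus-237 figures are illustrative only, echoing the example above rather than any real report:

    import pandas as pd

    # Illustrative detail rows; in practice this is the table behind the report
    detail = pd.DataFrame({
        "page": ["p2", "p2", "p3", "p3"],
        "conversions": [1210, 1180, 1434, 1390],
        "hours": [5, 5, 6, 6],
    })

    reported_summary = 255  # conversions per hour quoted on the cover page

    # Recompute the summary directly from the detail rows and compare
    recomputed = detail["conversions"].sum() / detail["hours"].sum()

    if abs(recomputed - reported_summary) > 0.5:
        print(f"Inconsistent: cover page says {reported_summary}, detail implies {recomputed:.0f}")
    else:
        print(f"Consistent: {recomputed:.0f} conversions per hour")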

Question 2) Are the numbers consistent with your previous analyses?
When a client receives a new set of results they often pull up the previous results that you gave them. They are asking the question “how much have things changed?” You can beat them to the punch by doing this consistency check yourself. To be more specific:

  • Start with the previous result that was presented or released. Compare the summary numbers from the previous results to your current summary numbers (a small sketch of this comparison follows the list).
  • Assess if the changes are interpretable. If they are, then this interpretation will likely be part of what you communicate when you release the new results. If the changes are not interpretable, then it’s time to go back into your current results, or your previous results to diagnose why the changes aren’t explainable.
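
Here is a minimal sketch of that previous-versus-current comparison. The metric names, values and the 10% flagging threshold are hypothetical choices, not from the post:

    # Hypothetical previous and current summary numbers (illustration only)
    previous = {"conversions_per_hour": 237, "retention_rate": 0.81, "avg_order_value": 54.0}
    current = {"conversions_per_hour": 244, "retention_rate": 0.63, "avg_order_value": 55.5}

    # Flag any change larger than 10% so it can be explained before release
    for metric, old in previous.items():
        new = current[metric]
        change = (new - old) / old
        flag = "  <-- explain before releasing" if abs(change) > 0.10 else ""
        print(f"{metric}: {old} -> {new} ({change:+.1%}){flag}")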

Question 3) Are the numbers consistent with other reports?
Stepping into the shoes of your audience, you can think about the other reports that they refer to on an on-going basis. It doesn’t matter if the other reports that they use came from a completely different source – from their perspective, all data from all sources is supposed to tell the same story. In a similar manner to Question 2, you can do some additional homework so that your results are as valuable to your audience as possible. For example you could:

  • Ask your clients if they have any other reports that they use frequently, and if they would be willing to share them with you. You can frame it honestly – you want to make sure that your results are valid, and if they are different from other sources, you want to be able to explain why.
  • Do a little research on your own, in particular, reviewing any routine corporate reporting, or industry reporting. Sometimes, a skeptic can be won over by proving that you did your homework. Again, if the numbers from other sources line up, it becomes something you can report as proof of consistency. If the numbers don’t line up and you can’t explain the difference, then it may be an indication that you need to review your analysis.

Question 4) Are you telling the right story?
Taking all of the above into account, you should be able to deliver your results confidently. You should now know that the numbers in the report are consistent amongst themselves, that the analysis is consistent with previous analyses, and that the results are interpretable in comparison to other sources. This can now become part of your summary and presentation of your stunning new work. Or at least it can form an addendum to the email, or the presentation, that shows your audience the efforts you went through to ensure that the numbers are the right numbers. Then you have the foundation to begin telling the actual story of the analysis (the “so what” message).

These are just a few tips, but I’m sure there are many experts out there who have many more great ideas. If you have suggestions, or alternate points of view, please weigh in.


Reducing Rework in a Data Team

As much as we’d all like to get things done right the first time, with analysis and modeling it’s not always possible.

When delivering results, it’s fairly common to receive requests for minor revisions – and most of those we can all handle. But every so often the situation catches you by surprise. You’re delivering what you think is a great piece of work only to learn that it missed the mark completely. You hear statements like “This isn’t what I asked for!” or “You misunderstood what I asked for!” and you wonder where things went wrong.

Sometimes you can rightfully blame the person who requested the analysis, and then conveniently changed their mind. But more often the breakdown happens around communication and agreeing on expectations.

So what do you do? Here are some coping strategies:

1) Ask the question “What does a job well done look like?”
The next time you’re asked to run a major analysis where you feel that you don’t have an adequate understanding of what is being asked, try this script:

“I want to make sure that I give you what you want. Would you mind if I grabbed a couple of minutes to clarify a few things?”

Then ask your clarifying questions. For example:

  • What’s the business question that this analysis is supporting you with?
  • Do you just want the summary, or did you want the supporting details?
  • Is this analysis just for your reference, or is it going to be distributed?
  • How accurate does this need to be?

The answers to these questions can make a big difference in determining the final deliverable. If you only have time for one question, the first question is the best one to ask.

If you’re lucky enough that the person making the request is willing to spend more than a couple minutes with you, then you can try to get crystal clear on “What does a job well done look like?” The following are some of the statements that you might hear:

  • It will help me answer these questions …
  • The numbers will be consistent with our annual report
  • The summary of results will be jargon-free
  • The results will be delivered by Friday morning at 10 am, both by email as well as a color print out on my desk

2) Put your understanding in writing
Now, with your heightened clarity, you can put it into writing. A short follow-up email of the form “Thanks for clarifying. So, just to recap I will …” will provide one more opportunity for corrective feedback.

In many situations you won’t be able to do the first step (getting clear on “what a job well done looks like”) because the person making the request is too busy. But even in these situations it’s still worthwhile putting your understanding into writing. You can write the same short email, but this time it will have an opening line of the form “I know you’re too busy to discuss the analysis, so I’ll make the following assumptions when I do it …” And then, you can add a closing line: “Hopefully that captures it. If I don’t hear otherwise from you, I’ll deliver results based on this understanding.”

3) When delivering your result, include the original request
You’ve done the hard work of clarifying expectations, you’ve done the analysis, and now this is the easy part. When summarizing the results, make sure that you attach your analysis to the clarifying email. If you’re delivering it in hard copy, you can attach a print out of the clarifying email to the top.

Using this approach the person making the request will be able to see their role in the entire process. It won’t take long for people to see the value of slowing down and spending a few minutes getting clear on the request.

4) Follow up after the fact
The worst situations are when you’ve put in the hard work, but it wasn’t really what the requester wanted, and so they don’t use it. They’ve wasted their time, your time, and they still didn’t get what they want. Because they feel embarrassed about not using the work, they will often not bother giving you feedback.

So, it’s up to you to solicit feedback after each major deliverable. A brief check-in after the fact can yield great feedback. If you’re not getting rave reviews about the great work you did, you can ask “What could I have done to make it even better?” This seemingly innocent question prompts the requester to give candid feedback, and demonstrates that you really care about the value of your work.

These coping strategies are not for everyone, and are not needed in every situation (especially the quick and easy analyses). But it’s the times when we get it wrong where we really appreciate the value of clarifying expectations. If you have your own coping strategies, please weigh in.
