Posts Tagged ‘thoughts’

Effective way for data scientists to grow impact

Sunday, October 5th, 2014

In order to get things done, people need to communicate effectively. At school, teachers present to students. In consulting, consultants build PowerPoint slide decks. In research, researchers give presentations and talks to spread their ideas.

When it comes to data scientists, many of us write code (in R, Python, Julia, etc.) in order to analyze data and inform decisions. To many people, what we do is rocket science.

What is the most effective and easy way to spread our ideas and grow impact? A good answer is interactive visualization. And not just for data scientists, but for anyone working with analytics.

Sure enough, pretty and intuitive graphics are a good way to deliver insight. And with modern technologies, interactive visualization can grow into products, viral marketing campaigns, and journalism pieces.

I have been doing interactive visualization for a while. Below is a visualization I made to explore geographic enrollment patterns of HarvardX. What started as an exploratory project grew into a product -- an interactive analytics platform we called HarvardX Insights. It ended up on the cover of Campus Technology, and several universities from around the world contacted HarvardX to get the code.

See databit HarvardX Certificates World Map by Sergiy Nesterko on Databits.

And here is something for data scientists -- a visualization of the Hamiltonian Monte Carlo algorithm. I taught it to my students last year during a graduate course on statistical computing and interactive visualization at Harvard Statistics. This visualization was one of several I created for the course together with students.

See databit Hamiltonian (Hybrid) Monte Carlo by Sergiy Nesterko on Databits.
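For readers who prefer code to pictures, below is a minimal R sketch of the algorithm itself for a one-dimensional standard normal target. It is an illustrative toy with arbitrarily chosen step size and path length, not the code behind the databit above.

hmc_sample <- function(n_iter = 1000, eps = 0.1, L = 20) {
  U     <- function(q) 0.5 * q^2   # potential energy: -log density of N(0, 1), up to a constant
  gradU <- function(q) q           # gradient of the potential
  q <- 0
  samples <- numeric(n_iter)
  for (i in seq_len(n_iter)) {
    p <- rnorm(1)                                      # draw a fresh momentum
    q_new <- q
    p_new <- p - eps / 2 * gradU(q_new)                # initial half step for momentum
    for (l in seq_len(L)) {
      q_new <- q_new + eps * p_new                     # full step for position
      if (l < L) p_new <- p_new - eps * gradU(q_new)   # full step for momentum
    }
    p_new <- p_new - eps / 2 * gradU(q_new)            # final half step for momentum
    # accept or reject the proposal based on the change in total energy
    if (log(runif(1)) < U(q) + p^2 / 2 - U(q_new) - p_new^2 / 2) q <- q_new
    samples[i] <- q
  }
  samples
}

hist(hmc_sample(), breaks = 40, main = "HMC draws, N(0, 1) target")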

People who work with data increasingly need to acquire and apply creative coding skills in order to put their ideas to work. This helps them get closer to the end user of an analytic insight and avoid operational distortions and dead ends along the way. That's why resources that promote and teach creative coding are in high demand among my peers in data science. I am a big fan of Mike Bostock's Blocks, and of other resources such as CodePen, JSFiddle, and Stack Overflow.

Recently, I have been using and contributing to Databits more and more. Databits is a website for data scientists, data journalists, and other creative coders to share work, connect, and grow impact. I believe that eventually the site will let me be more targeted: to specifically learn from and follow peer data scientists and other creative coders who are focused on producing effective interactive visualizations and other cool stuff. For example, I look forward to learning some Processing applications from this guy. In the meantime, I helped put together a simple databit based on Processing.js:

See databit Processing.js Hello World Sketch by Sergiy Nesterko on Databits.

The site also runs Challenges, an initiative aimed at finding meaningful problems for creative data scientists to solve, and put on their portfolios. I find this pretty cool.

I look forward to learning new things, finding cool problems, and making the world a better place with data. Now my creative work has a home -- you can check out my endeavors and interests on my Databits profile page.

Democratization of data science: why this is inefficient

Sunday, November 4th, 2012

The use of data in industry is increasing by the hour, and so is investment in Big Data. Gartner, an information technology research and advisory firm, says that spending on big data will be $28 billion in 2012 alone. This is estimated to trigger a domino effect of $232 billion in spending over the next five years.

The business world is evolving rapidly to meet the demands of data-hungry executives. On the data storage front, for example, new technology is being rapidly developed under the Hadoop umbrella. On the data analysis front, there are startups that tackle and productize related problems, such as quid, Healthrageous, Bidgely, and many others. What drives this innovation in analyzing data? What allows so many companies to claim that their products are credible?

Not surprisingly, the demand for analytic talent has been growing, with McKinsey calling Big Data the next frontier of innovation. So let's make this clear: businesses need specialists to innovate, to generate ideas and algorithms that extract value from data.

Who are those specialists, and where do they come from? With a shortage of up to 190,000 data specialists projected for 2018, a new trend is emerging: "the democratization of data science", which means bringing the skills to meaningfully analyze data to more people:

The amount of effort being put into broadening the talent pool for data scientists might be the most important change of all in the world of data. In some cases, it’s new education platforms (e.g., Coursera and Udacity) teaching students fundamental skills in everything from basic statistics to natural language processing and machine learning.
...
Ultimately, all of this could result in a self-feeding cycle where more people start small, eventually work their way up to using and building advanced data-analysis products and techniques, and then equip the next generation of aspiring data scientists with the next generation of data applications.

This quote is optimistic at best. Where is the guarantee that a product developed by a "data scientist" with a couple of classes' worth of training is going to work for the entire market? In academic statistics and machine learning programs, students spend several years learning the existing methods properly, how to design new ones, and how to prove their general validity.

When people without adequate training build analytic products and offer them to many customers, such verification of the product is crucial. Otherwise, a customer may soon discover that the product doesn't work well enough, or doesn't work at all, bringing down its ROI. The customer will then go back and invest in hiring talent and designing solutions that actually work for their particular case. If all customers have to do this, the whole vehicle of democratized data science becomes significantly inefficient.

Behind each data analysis decision there must be rigorous scientific justification. For example, consider a very simple Binomial statistical model. We can think of customers visiting a website through a Google ad. Each customer is encoded as 1 if he or she ends up purchasing something on the website, and 0 otherwise. The question of interest is: what proportion of customers coming through Google ads ends up buying on the website?

Below is a visualization of the log-likelihood and inverse Fisher information functions. Many inadequately trained data specialists would not be able to interpret these curves correctly even for a simple model like this. But what about the complex algorithmic solutions they are required to build on a daily basis and roll out on the market?

We can simply take the proportion of customers who bought something; that will be our best guess at the underlying percentage of Google-ad visitors who buy. This is not just common sense: the sample proportion can be proved theoretically to be the best estimator.

The uncertainty about our estimate can also be quantified by the value of the inverse observed Fisher information function (picture, left) at the estimated value of p. The three curves correspond to different numbers of customers who visited our website. The more customers we get, the lower our uncertainty about the proportion of buying customers. Try increasing the value of n: you will see that the corresponding curve goes down, as our uncertainty about the estimated proportion vanishes.
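For concreteness, here is a small R sketch in the spirit of those plots; the sample sizes and the count of 30 buyers out of 100 visitors are made up purely for illustration.

p_grid <- seq(0.01, 0.99, by = 0.01)

loglik     <- function(p, n, y) y * log(p) + (n - y) * log(1 - p)  # log-likelihood, up to a constant
inv_fisher <- function(p, n) p * (1 - p) / n                       # inverse Fisher information: variance of the estimate

par(mfrow = c(1, 2))

# Uncertainty curves for three sample sizes: a larger n pushes the curve down
plot(p_grid, inv_fisher(p_grid, 50), type = "l",
     xlab = "p", ylab = "inverse Fisher information", main = "Uncertainty about p")
lines(p_grid, inv_fisher(p_grid, 200), lty = 2)
lines(p_grid, inv_fisher(p_grid, 1000), lty = 3)
legend("topright", legend = c("n = 50", "n = 200", "n = 1000"), lty = 1:3)

# Log-likelihood when 30 of 100 visitors buy: it peaks at the sample proportion 0.3
plot(p_grid, loglik(p_grid, n = 100, y = 30), type = "l",
     xlab = "p", ylab = "log-likelihood", main = "y = 30 buyers out of n = 100")
abline(v = 30 / 100, lty = 2)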

This is the kind of theory that specialists who develop algorithmic products need to be equipped with. It requires an investment in their proper education first. If we skip the proper education step, we risk lowering the usefulness and practicality of the products such data scientists design.

Algorithms as products: lucrative, but what is the real value?

Friday, October 12th, 2012

Recently I attended a talk by Nate Silver (@fivethirtyeight), who runs a popular NYT election forecast blog, where he talked about how he uses algorithms to predict the results of the election from the information available on the day of the vote. Nate didn't go in depth on how his algorithms work, though there were such questions from the audience. On the one hand, that makes sense: why explain how the algorithms work when what matters is whether they predict the election right? Indeed, his model did in 2008, calling 49 of 50 states correctly, as well as all 35 Senate races.

But on the other hand, if Nate Silver never publicly discloses how it works, how do we really know what the algorithm is based on, what the weights on the surveys are, how it accounts for all the biases, and so on? In science, algorithms are disclosed and can be replicated by third parties. Such an approach is not employed by Nate Silver, and that is understandable: his algorithm is a product. It gives him a job at the NYT, prestige, and status. What would happen if anybody could replicate it?

The same non-disclosure strategy is employed by LinkedIn for its Talent Brand Index algorithm. The index is a new measure, offered by LinkedIn, of how attractive a company is to prospective and current employees.

The index will prove to be very lucrative for LinkedIn:

While there is likely to be a lot of quibbling about how the numbers are calculated, this product has the potential to make LinkedIn the “currency” by which corporations measure their professional recruitment efforts.

No wonder the company is trading at 23 X sales.

However, there is a key difference between LinkedIn's Talent Brand Index and Nate Silver's election forecast algorithms: it can never be checked whether the Talent Brand Index is right. Indeed, do we know how it is constructed? Here's what I could find on that:

Last year, LinkedIn was home to over 15 billion interactions between professionals and companies. We cross-referenced our data with thousands of survey responses to pinpoint the specific activities that best indicate familiarity and interest in working for a company: connecting with employees, viewing employee profiles, visiting Company and Career Pages, and following companies. After crunching this data and normalizing for things like company size, we developed our top 100 global list. We then applied LinkedIn profile data to rank the most sought-after employers among professionals in five countries and four job functions.

The index cannot be recreated, not only because there is no publicly available description of how it is calculated, but also because the LinkedIn data on which it is calculated is proprietary.

So the Talent Brand Index is a black box: recruiters don't know how it works. But they will pay to get access to it, because the index ranks employers in terms of "people's perception of working for them". Companies will then work and invest heavily to improve their index ranking, because the rankings are publicly available and a higher ranking will help them recruit better talent.

However, how are employers going to find out their ROI from trying to improve their Talent Brand Index if they don't know how it works? Without information on how the index is constructed, this is a hard task. Let me give an example.

For simplicity, let us assume that the Talent Brand Index gives a weight of 5 to the positive sentiment expressed about the company by current employees on their LinkedIn profiles, and a weight of 100 to the number of times the employees' profiles are viewed on LinkedIn. Since the weights are hidden from employers, they would first have to run a randomized experiment to determine the effect of a particular company policy on employee profile views, and then measure the impact on the index. This is very costly and hard to implement, because it is hard to devise a potentially index-improving policy that would involve only one part of the company's employees (the treatment group) and not the other (the control group), to randomly assign employees to those groups, to then measure the profile views, and so on.
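To make the arithmetic concrete, here is a toy R calculation using these hypothetical weights. The formula is invented purely for illustration and has nothing to do with LinkedIn's actual index; it just shows how the large weight on profile views dominates the result.

# Toy index with the hypothetical weights above: 5 on positive sentiment, 100 on profile views
toy_index <- function(positive_mentions, profile_views) 5 * positive_mentions + 100 * profile_views

baseline <- toy_index(positive_mentions = 40, profile_views = 30)
toy_index(positive_mentions = 80, profile_views = 30) - baseline   # doubling sentiment adds only 200
toy_index(positive_mentions = 40, profile_views = 60) - baseline   # doubling profile views adds 3000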

But in our example, LinkedIn gives a very large weight to the number of views of the employees' profiles! How can the employers find that out?

Practically speaking, the answer is: they cannot.

This means that while the Talent Brand Index is a lucrative product for LinkedIn, the real value it provides to companies is unclear. It gives no information as to which areas of an employer's HR policy need to be improved in order to increase the Talent Brand Index, and in what priority. As a result, high-index companies will enjoy an increased influx of great talent, while low-index companies will suffer a talent drain. This will reinforce the leaders' positions and worsen those of the HR underdogs.

Coming back to the broader picture, there are algorithms, and there are algorithms. Nate Silver's election prediction algorithm is in fact a valuable product to its users even though its details are largely unknown, because it can be checked against reality. LinkedIn's Talent Brand Index will bring double-digit growth to the company thanks to the Big Data hype, but will it really be useful to its consumers in terms of helping them improve their hiring? The answer is not straightforward.

Algorithms as products should be designed with enough transparency to make them useful, or with a mechanism to externally verify them. Otherwise, their value to the customer is questionable.

Management consulting view on big data

Monday, June 25th, 2012


The amount of data recorded and analyzed in business, medicine, education, and public policy is increasing at a rapid rate, to the extent that it is hard to keep pace with it. I am particularly interested in how, and whether, the leaders of organizations and government bodies are responding to and extracting value from the phenomenon.

Particularly interesting is the point of view of the top management consulting firms, which are also very interested in the trend. For example, the McKinsey Institute published a report on big data a year ago. More recently, a recording of a Q&A session on big data with Philip Evans, a senior partner at BCG, was posted on the Schumpeter blog at The Economist about a week ago.

Specifically, Mr. Evans alluded to how the emergence of "big data" may change the course of companies' strategic development. The most recent method has been vertical integration, in which companies aim to acquire or develop more entities along the supply chain (e.g., an electric power supplier aims to operate not only power plants but also raw material sources, power grids, etc.) to reduce costs. According to Mr. Evans, during the "big data" era we will see more horizontal integration, in which, instead of operating several entities along the supply chain, a company focuses on one and grows by scaling the product up to many markets. As per Mr. Evans, an example of this approach is Google.

Additionally, Mr. Evans stated that companies will fragment into two camps: one where there exists a well-defined, serializable product or service around which a company can scale up, such as "inferring patterns in large amounts of data", and another where more unique individual skills are needed, such as entrepreneurship, creativity, etc.

I found the interview very interesting. We do see successful companies employing horizontal integration (Google, Apple, Amazon). That is, they focus on a few important products or services and scale them up to multiple markets. Does this have anything to do with "big data"? It certainly does, as horizontal integration is employed by big players in the big data realm as well, such as EMC. However, horizontal integration is more inherent to the concept of the Internet and the evolution of IT, as is the "big data" phenomenon itself.

Secondly, I have to disagree with the statement that inferring patterns in large amounts of data is (easily) serializable. This task is an open scientific problem that is the subject of active current research. The only solutions that exist at the moment belong to the second camp as defined by Mr. Evans. The task of designing an algorithm to extract a specific answer to a specific question from a dataset in a given format needs to be approached individually by qualified specialists such as statisticians. Such a project does involve creativity and a substantial amount of intellectual effort. After an approach is developed, it can be replicated for the specific dataset it was designed for (say, when more observations have been collected), but not for other datasets; otherwise the results may be unreliable.

More broadly, what does the phenomenon mean for companies? Horizontal integration follows from the ability to quickly scale up products and services that the development of the Internet and IT has created, as does big data. So what is the message of the latter by itself?

Let us not make the matter overly complicated. Buried in the terabytes of "big data" is the ability of companies to be better informed about the market around them and their own internal operations, to optimize their activities, to find out what the competition is up to, to price their products better than the competition, and so on. "Being better informed" is a value-generating asset, and companies with large amounts of repeated features (many instances of the same product or service sold, large numbers of employees, many visitors seeing their ads on the Internet) need to realize this. The first ones that do, and those that employ the better methods of extracting interpretable information from the relevant data sources, will benefit from the value of being better informed than others.

I couldn't be more excited about the fact that companies, governments, educational institutions and public policy agencies are beginning to realize the value of being better informed by patterns inferred from data, be they massive, big, or not so big. The fact that top management consultants are talking about it means that top executives are demonstrating this interest.

The gap between academia and current industry practices in data analysis

Sunday, March 25th, 2012

The demand for specialists who can extract meaningful insights from data is increasing, which is good for statisticians, as statistics is, among other things, the science of extracting signal from data. This is discussed in articles such as this January article in Forbes, and in the McKinsey Institute report published in May of last year, an excerpt from which is given below:

There will be a shortage of talent necessary for organizations to take advantage of big data. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.

This sounds encouraging for current students in fields such as statistics who are looking to enter a hot job market after graduation. However, are they prepared for what industry jobs will need them to do?

One piece of news that hasn't been covered in the press yet is that the methods and data-related problems in industry are often different from those described in the body of scientific publications. By different I mean either scientifically invalid, or scientifically valid but cutting-edge.

An example of this phenomenon is the so-called Technical Analysis of financial data, which is often used by algorithmic trading groups to devise computer-based trading strategies. Technical analysis is a term people came up with to describe a set of methods that are often useful, yet whose validity is questionable from a scientific perspective. Quantitative traders have been employing this type of analysis for a long time without knowing whether it is valid.

Another example is a project I worked on, described in this post, which was to create an algorithm for optimizing annual marketing campaigns for a large consumer packaged goods company (over $6 billion in sales) to achieve a 3-5% revenue increase without increasing expenditure. Essentially, this was an exercise in Response Surface methods with dimensionality as high as 327,600,000. There are no scientific papers in the field that consider problems of such high dimensionality. And yet companies are interested in such projects, even though the methods for solving them are not scientifically verified (we worked hard to justify the validity of our approach for the project).
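To give a rough idea of what a response surface exercise looks like at toy scale (nothing remotely like the dimensionality above), here is a small R sketch on simulated data; every number in it is invented.

# A toy, two-dimensional illustration of a response surface exercise. All data are simulated.
set.seed(1)
n  <- 200
x1 <- runif(n, 0, 10)   # spend on a hypothetical marketing channel 1
x2 <- runif(n, 0, 10)   # spend on a hypothetical marketing channel 2
revenue <- 50 + 4 * x1 - 0.5 * x1^2 + 6 * x2 - 0.4 * x2^2 + rnorm(n, sd = 2)

# Fit a second-order response surface and locate its stationary point: the estimated best spend mix
fit <- lm(revenue ~ x1 + I(x1^2) + x2 + I(x2^2))
b   <- coef(fit)
c(x1_opt = as.numeric(-b["x1"] / (2 * b["I(x1^2)"])),
  x2_opt = as.numeric(-b["x2"] / (2 * b["I(x2^2)"])))   # should land near the true optimum (4, 7.5)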

Recently I received an email inviting quantitatively oriented PhDs to apply for a summer fellowship in California to work on data science projects. Here is a quote from the email:

The Insight Data Science Fellows Program is a new post-doctoral training fellowship designed to bridge the gap between academia and a career in data science.

Further, here is what is stated on the website of the organization sponsoring the program:

INSIGHT DATA SCIENCE
FELLOWS PROGRAM
Bridging the gap between academia and data science.

As with algorithmic trading about 15 years ago, the use of sometimes scientifically questionable data analysis techniques is driven by the increased demand for insights from quantitative information. Such approaches, which in the world of quantitative finance are called Technical Analysis, are during the current data boom being named Data Science.

When using the term, one should keep in mind that while the methods employed by inadequately trained "data scientists" may be scientifically valid, they may well not be. There is an inherent danger in calling something that encompasses incorrect methods a sort of "science", as this instills the perception of a field that is well established and trustworthy. However, the term is only a couple of years old. In my opinion, a more accurate one would be "current data analysis practices employed in industry".

The way we name the phenomenon does not change what it is: there is a lot of data, and a lot of problems in industry that often go beyond what has been seen or addressed in academia. This is an exciting time for statisticians.

Informative versus flashy visualizations, and growth in Harvard Stat concentration enrollment

Sunday, December 4th, 2011

Some time ago my advisor Joseph Blitzstein asked me to create a visualization of the number of Harvard Statistics concentrators (undergraduate students who major in Statistics). The picture would be used by the chair of the department to illustrate the growth of the program to university officials, so I decided to make it look pretty. The first form that came to my mind was showing the enrollment growth over the years using a bar plot.

Starting in 2005, the numbers follow exponential growth, which is a remarkable achievement for the department. We then decided to follow the trend and extrapolate by adding predicted enrollment numbers for 2011, 2012, and 2013. At the time, there was no data for 2011. (more…)
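The extrapolation step itself is simple; below is a hypothetical R sketch of it. The actual concentrator counts are not reproduced in this post, so the numbers are invented, chosen only to follow a roughly exponential shape.

years  <- 2005:2010
counts <- c(5, 8, 12, 19, 30, 47)                 # made-up enrollment counts

fit       <- lm(log(counts) ~ years)              # exponential growth is linear on the log scale
new_years <- 2011:2013
predicted <- exp(predict(fit, newdata = data.frame(years = new_years)))

barplot(c(counts, predicted), names.arg = c(years, new_years),
        col = c(rep("grey", length(years)), rep("steelblue", length(new_years))),
        main = "Observed (grey) and extrapolated (blue) concentrator counts")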

theory.info, a new project

Tuesday, July 12th, 2011


Recently I purchased the domain and created an interactive logo/visualization for Theory Information Analysis, a screenshot of which is presented above. Theory is a new project that I would like to represent my applied, real-world work, including quantitative consulting and applied research. (more…)

Data science term in The Economist

Thursday, May 19th, 2011

It seems that there is no stopping it now: the term data science appears prominently in the headline article of the current issue of The Economist.

The Economist

Compared with the rest of America, Silicon Valley feels like a boomtown. Corporate chefs are in demand again, office rents are soaring and the pay being offered to talented folk in fashionable fields like data science is reaching Hollywood levels. And no wonder, given the prices now being put on web companies.

It is indeed quite misleading that the term has the word science in it, as it implies an established field, while in fact the science of data is statistics. I wrote a post on the subject earlier in an attempt to single out what it is that distinguishes data science from statistics. That aside, however, the article is supportive of the rise in demand for our profession, which is good news for specialists. Hopefully, the tech bubble mentioned there won't be inflated further by people who misuse the data science term.

Drastic R speed-ups via vectorization (and bug fixes)

Friday, April 29th, 2011


Figure 1: A screenshot of the corrected and enhanced dynamic visualization of RDS. Green balls are the convenience sample, pink balls are subsequently recruited individuals, pink lines are links between network nodes that have been explored by the process, and the numbers in circles correspond to the sample wave number.

It is common to hear that R is slow, so when I faced the need to scale old R code (pertaining to the material described in this post) to operate on data 100 times larger than before, I was initially at a loss. The problem with the old code was that it took several days and about 4,000 semi-parallel jobs to complete. With the size of the data increasing by a factor of 100, the task was becoming infeasible. Eventually, however, I was able to achieve an over 100-fold speedup of the R code by addressing two issues: (more…)
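As a generic illustration of the kind of change involved (this is not the RDS code itself, just a made-up example), here is what replacing an element-by-element loop with a vectorized expression looks like:

x <- runif(1e6)
y <- runif(1e6)

slow_way <- function(x, y) {
  out <- numeric(length(x))
  for (i in seq_along(x)) out[i] <- x[i] * y[i] + sin(x[i])   # one element at a time
  out
}
fast_way <- function(x, y) x * y + sin(x)                     # whole vectors at once

system.time(slow <- slow_way(x, y))
system.time(fast <- fast_way(x, y))
all.equal(slow, fast)   # same result, a large difference in run time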

The data science puzzle

Monday, April 11th, 2011

Over the past few years, I have heard several times that the demand for quantitatively and data-oriented professionals is growing. Clearly, this is good news for statisticians, as statistics is central to the process of extracting a meaningful and actionable signal from data. The terms data science and data scientist have accompanied many of the related articles. So I decided to do some research and look for evidence of an increase in demand for information analysis. My goal has been to understand the peculiarities of how our profession is perceived in the community, and to attempt to clarify the meaning of the new data science term. (more…)