
Big Data Part 4

 

“Big Data” is about volume, but it’s more than that. Big data is also about variety and a few other characteristics that will be described in follow-up blogs…

CPG companies are no stranger to variety. In addition to their own internal variety of data residing in databases such as Access, Excel, Oracle, mainframes, Teradata, DB2, Netezza and so forth, they have multiple applications such as trade promotion management, supply chain, manufacturing, planogram, CRM and forecasting applications, plus a slew of others.

In addition, the variety of data coming in from point of sale (POS) sources includes retailer files such as EDI 852 files, EDI 867 files, AS2, flat files, other EDI files, retailer portal downloads and syndicated data from AC Nielsen, IRI, NPD and others. Most companies are also buying competitive market data, demographic information, surveys, weather trends, currency conversion information, and might even be trying to integrate emerging market data. On top of that, you might have space information, displays and diagrams that are unstructured or semi-structured.
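As a sketch of what handling that variety looks like in practice, here is a hypothetical normalizer that maps records from different POS feeds into one common schema. All field names below are invented for illustration; real EDI 852, portal and syndicated layouts vary by retailer and provider.

```python
def normalize_pos(record, source):
    """Map a source-specific POS record to one common schema.
    Field names are hypothetical; real layouts differ per source."""
    if source == "edi_852":
        return {"sku": record["item_id"], "store": record["loc"],
                "units": int(record["qty_sold"]), "date": record["xfer_date"]}
    if source == "portal_csv":
        return {"sku": record["SKU"], "store": record["Store #"],
                "units": int(record["Units"]), "date": record["Week Ending"]}
    if source == "syndicated":
        return {"sku": record["upc"], "store": record["market"],
                "units": int(record["unit_volume"]), "date": record["period"]}
    raise ValueError(f"unknown source: {source}")

# One EDI-style record normalized into the common shape:
row = normalize_pos({"item_id": "A100", "loc": "0042",
                     "qty_sold": "7", "xfer_date": "2013-06-01"}, "edi_852")
```

The point is not the specific fields but the pattern: every new source gets its own mapping into the shared schema, so everything downstream sees one shape.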

Those are all examples of various data sources that have existed over the years. Some of these sources are newer than others. But the newest variety of data is coming in via the web. These sources are coming from various applications that track your “Social Reputation,” clicks, and media presence to name a few.

Marketing teams also have ads, including print, online ads, TV commercials and radio spots. They might also have targeted online marketing on social media, including offers on web sites, mobile offers, YouTube videos, etc. All these sources come in different data formats containing different information. All of this adds up to a lot of variety.

Big data just got bigger with more variety from the internet. In these last two blogs we discussed volume and variety, but it's also about velocity and one other key characteristic that will be discussed in the next two blogs. Watch for our next blog, Big Data Part 5 on velocity.

Sign up for a Demo & See How BlueSky Integration Studio Integrates Big Data

Big Data Part 3

 

In the next 4 blogs, we'll explore the characteristics of Big Data.

Volume is one key characteristic!

Data volumes today are incredible. I'll continue to use a consumer goods manufacturer as an example for this series of blogs.

Think about a consumer goods company that has 2,000 SKUs selling through 100 different retailers. That could consist of 100,000 stores.

Now imagine that every day, each store is sending that CPG company sales information including what was sold, how many items were sold, the time and date of the sale, potentially the price, and potentially even loyalty and market basket information, which would tell the company who the customer was and what else they bought with its products.
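A quick back-of-envelope calculation shows why these volumes add up fast. The number of SKUs carried per store is an assumption here (a subset of the 2,000 total); the store count comes from the scenario above.

```python
# Back-of-envelope POS volume estimate for the scenario above.
stores = 100_000          # 100 retailers, ~100,000 stores total
skus_per_store = 500      # assumed: each store carries a subset of the 2,000 SKUs

rows_per_day = stores * skus_per_store   # one row per SKU per store per day
rows_per_year = rows_per_day * 365

# rows_per_day  -> 50,000,000 rows per day
# rows_per_year -> 18,250,000,000 rows per year
```

Even at a few dozen bytes per row, that is terabytes per year from POS feeds alone, before loyalty or market basket detail is added.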

We’re talking massive, massive data volumes on top of the ERP data already available from inside sources.

Now consider sources like your company's Facebook page, your LinkedIn page, Twitter feeds about your company and brands, your YouTube commercials, and so on. You're talking huge data volumes.

I recently heard a supply chain expert define big data as a petabyte. We all kind of chuckled at that, because it came from an analyst who knows supply chain reports but has zero experience in data warehousing, databases or anything related to IT infrastructure. Relational Solutions has unsurpassed experience working in very disparate IT environments. A petabyte is just a number, and just because it's a big number doesn't mean a terabyte isn't big data to another company.

The same data volume that one company handles without issue can overwhelm another. Every company has different environments, different users and different ways of managing data, so even smaller amounts of data can cause issues for one company and not another. Applying a specific number to big data misses the point.

That said, volume is one characteristic of “Big Data.” Look for our next blog, which discusses variety…

Watch our Big Data Training 101

Big Data Part 2

 

"Big Data Part 2" builds off my earlier blogs called “Before Big Data” and “Big Data Part 1.”

In this blog we will explore the different types of data and explain the differences at a high level. I thought of breaking this blog into three blogs due to length, but felt the subject matter was better served in one article.

So what's the difference in these various data types?

The first cylinder represents structured data. This includes data from ERP systems, mainframes and data warehouses. Although structured, these data types are structured differently.

In my earlier blog, "Big Data Part 1," I separated these structured data types into two separate circles. That's because they are structured differently.

ERP data and other transactional systems are structured in a way that allows for easy data entry.

Data warehouse and business intelligence solutions are structured in a way that allows for easy retrieval of information. This is why I had them in separate circles on the previous blog. That said, both transactional and analytical systems are structured.

As described in my blog on "Analytical versus Transactional Business Intelligence," ERP and other transactional data sources are designed to RUN your business. Data warehouse and business intelligence solutions are designed to help MANAGE your business. These data sources are typically stored in a traditional database and therefore have structure to them.

The second cylinder contains unstructured data. This is data mainly found out on the web. It includes social media data such as “Tweets” and “Comments.” But unstructured data also includes your activity, including your searches.

The internet captures a lot of different activity. Today, your social authority or clout can be tracked by determining how many followers you have, how many times the things you post are reposted, and so on. Different applications apply different algorithms, but social authority is tracked in a variety of ways.

Authority can be tracked based on the number of people you have the capacity to influence. Someone with 100 followers does not have the same clout as someone with 3,000 followers, for example. However, someone with 3,000 followers who is never online commenting could rank lower than someone with 500 followers who regularly posts or tweets what they hear.
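As an illustration only, a toy scoring function shows how an active small account can outrank an idle large one. The weights and formula here are invented; real platforms use proprietary algorithms.

```python
def authority_score(followers, posts_per_week, reshares_per_post):
    """Toy social authority score: reach, weighted by activity and
    amplification. Weights are made up for illustration."""
    activity = min(posts_per_week / 10, 1.0)   # cap the activity factor at 1
    amplification = 1 + reshares_per_post      # each reshare extends reach
    return followers * activity * amplification

# An idle account with 3,000 followers scores zero (activity factor is 0),
# while an active account with 500 followers scores 500 * 0.5 * 3 = 750:
idle = authority_score(3000, 0, 2)
active = authority_score(500, 5, 2)
```

Any real ranking would blend many more signals, but the shape is the same: reach alone is not authority.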

Big Data received a lot of attention in the press this summer, and there were a lot of concerning stories. In June, "The Wall Street Journal" published an article reporting that the NSA, America's National Security Agency, was obtaining a complete record of all Verizon customers and their calling history, including all local and long distance calls within the US.

This made the news because it made a lot of people upset. The idea that the government is listening in on our calls means a potential invasion of privacy. The government claims it tracks and uses this information to help identify terrorists. We hope that's true. But the fact that they have the capability and are monitoring this information can be unsettling.

Big data has also come up in recent stories about the monitoring of certain journalists' calls and activities. It is also related to the IRS scandal, which involved search capabilities used to target certain non-profit applications. Regardless of political affiliation, most people found this disturbing because targeting groups for political gain is wrong.

Monitoring these activities requires the government to leverage big data. But right or wrong, for good, for bad or for profit, the capability to capture and leverage big data does exist.

Most companies leverage big data for target marketing and to manage their brand and company reputations. Either way, technology exists today that allows us to track, monitor and profile just about whatever and whomever we want.

The last cylinder represents multi-structured data or hybrid data. A lot of data sources can fall into this space.

For the purposes of a consumer goods manufacturer, I used common outside data sources in the cylinder to represent hybrid data. Let's use point of sale data as an example. Point of sale (POS) data comes in from multiple retailers with varying data elements at different times of the month. Even one retailer can have multiple ways of providing POS data.

Target is a good example of the ways in which POS data can arrive. If you are a vendor for Target, you might get POS data in an EDI 852 file. You might also get POS data from Info Retriever or IRI. In addition, you might purchase data from A.C. Nielsen or Symphony IRI. All these sources contain different data elements, but they all contain point of sale (POS) data.

Let's start with the POS data coming in from an EDI file. That EDI file is structured. However, although it's supposed to be standardized, it is not. Different retailers provide different data. Rules aren't followed. Files can be missing days or data elements. EDI from one retailer will be different from another retailer's. Also, EDI from Target today might be different from the EDI that came from Target last year. There can also be missing or duplicate data, and retailers often "recast" data. We classify this as "hybrid" data because of the inconsistent, loose structure of the data and all the work required to make it play well with other data.
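A minimal sketch of the kind of sanity checking this hybrid data requires: flag missing days and duplicate keys in a retailer's daily feed. The row layout is hypothetical; real feeds carry many more fields.

```python
from datetime import date, timedelta

def find_gaps_and_dupes(rows):
    """Flag missing calendar days and duplicate (date, store, sku)
    keys in a daily POS feed. Row layout is hypothetical."""
    keys = [(r["date"], r["store"], r["sku"]) for r in rows]
    dupes = {k for k in keys if keys.count(k) > 1}

    days = {r["date"] for r in rows}
    first, last = min(days), max(days)
    missing, d = [], first
    while d < last:                 # walk the calendar between first and last
        d += timedelta(days=1)
        if d not in days:
            missing.append(d)
    return missing, dupes

rows = [{"date": date(2013, 6, 1), "store": "42", "sku": "A"},
        {"date": date(2013, 6, 3), "store": "42", "sku": "A"}]
missing, dupes = find_gaps_and_dupes(rows)   # June 2 is missing; no dupes
```

In practice checks like this run per retailer, per file, before anything is loaded, because every feed breaks the rules in its own way.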

In addition to being missing, invalid or duplicated, data has different hierarchies, end dates, etc. Outside data needs to align with your internal hierarchies and calendars. It also needs to be aligned with other outside data sources like weather trends, currency conversion, A.C. Nielsen, Symphony IRI, NPD and others.
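Calendar alignment, for example, can be sketched as mapping every transaction date to an internal week-ending date. Here the internal week is assumed to end on Saturday; retailers often end their weeks on other days, which is exactly why the mapping is needed.

```python
from datetime import date, timedelta

def to_internal_week(d, week_ends_on=5):
    """Map any transaction date to the internal week-ending date.
    week_ends_on uses Python's weekday numbering (5 = Saturday),
    an assumed internal-calendar convention for this sketch."""
    offset = (week_ends_on - d.weekday()) % 7
    return d + timedelta(days=offset)

# A retailer reporting a Wednesday sale rolls up to the internal
# Saturday-ending week:
week = to_internal_week(date(2013, 6, 5))   # -> 2013-06-08, a Saturday
```

Hierarchy alignment (mapping a retailer's item or store codes to your internal hierarchy) follows the same pattern with cross-reference tables instead of date arithmetic.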

These are just a few examples of data issues that arise from outside data sources. In other words, there is some structure to it, but the structure needs to be altered to be managed, integrated into other sources and ultimately provide more value.

Watch for next week's blog, where I explain in more detail how big data is further defined and described by the industry.

Watch our Big Data Training 101

Big Data Part 1

 

It can be argued that big data has been around for many years. Although big data can include internal data from old mainframes, new ERP systems and data warehouses, it also includes external data from outside sources. It also includes "new" data being generated on the internet.

 

Software companies referring to big data today are generally referring to unstructured data on the web. They talk about volume, variety and velocity. But it's more than that, as I will cover in my "Big Data Part 4" blog and throughout this series of blogs on big data.

 

Unstructured big data includes social media chatter that comes from Facebook, Google+ and Twitter. It also includes comments, announcements and posts from professional networking sites like LinkedIn. These are the common areas people think of when it comes to big data. But there are many other forms of unstructured big data.

Think about things like speech-to-text. Audio online is unstructured. Translating that audio from a recording into text makes it more structured, but still not the format most databases want it in.

Other examples include tags, alt tags and meta keywords associated with images and video. This searchable text is an example of big data.

Data posted on sites like YouTube is big data. Think of all the photos posted on Instagram and Facebook, or even profile pictures on LinkedIn. That's big data. Now add geospatial or location information. This information can be used by companies to do things like track shipments, identify missing cars, monitor storms, and so forth.

How about blogs like this one? Comments on blogs are tracked by companies to determine what’s being said about them. Google searches content so when you are looking for information on big data, you will find blogs like this!

Companies also use tools like Google Alerts and Tracker to figure out when competitors are making a new announcement. They can be used to identify comments being made about your
products and other things.

Engineering companies can do things like scan and share schematics and blue prints over the web.

Another form of big data involves click stream analysis. Relational Solutions actually implemented what we believe was the first click stream analysis application back in the 90's, for a telecom company that wanted to track where its customers were going on its website. We could figure out which pages were getting the most visits and where visitors went from each page. Today, we can do much more.

Today, click stream analysis is also used to target customers based on what they seem to like, judging from their clicks and the demographic information they provide. Clicks can reveal what you buy, along with your age, location, where you went to high school, etc.
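The core of click stream analysis can be sketched as counting page views and page-to-page transitions from session paths. Page names here are invented; a real pipeline would first reconstruct sessions from raw web logs.

```python
from collections import Counter

def page_stats(sessions):
    """Count page views and page-to-page transitions.
    Each session is an ordered list of pages one visitor viewed."""
    views = Counter()
    transitions = Counter()
    for path in sessions:
        views.update(path)                       # every page visit
        transitions.update(zip(path, path[1:]))  # consecutive page pairs
    return views, transitions

sessions = [["home", "products", "pricing"],
            ["home", "blog", "products"]]
views, transitions = page_stats(sessions)
# views["home"] == 2; transitions[("home", "products")] == 1
```

From these counts you can answer the questions described above: which pages get the most visits, and where visitors go from each page.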

This automatic tracking of big data can be done in the cloud using tools like Cloudera, Teradata Aster and other MapReduce technologies. But the value gets even greater when integrated with other internal, structured, big data.

So why is “Big Data” associated with these items in the last circle depicted above, and what’s the difference in the data? Why do I have circles around the different data types and what determines where each of these items reside? Well, it’s related to structure.

I recently heard an expert in supply chain analytics refer to “big data” as anything over a petabyte of data. I thought it was a funny comment coming from someone with no database background and very little technology insight.

A petabyte is an arbitrary number. It is just a number that focuses strictly on volume. And big data is about more than just volume. 

In my series of big data blogs, I will walk you through the big data evolution. Look for my next blog called "Big Data Part 2" to come out next week.

 

Before Big Data

 

Before big data we had mainframes, ERP systems and data warehouses (and by the way, we still do). You could make the claim that big data started in the 1950's with IBM's “Big Iron” and “Big Data Processing” to handle mixed workloads.

Or you could say big data started when Oracle coined the acronym, VLDB back in the 90’s to describe "very large databases." I can't decide which is bigger, “very large” or “big.”

Or was it Teradata, who back in 1992, built the first system over 1 terabyte for Walmart? This was the biggest implementation for its time. Teradata called their platform "massively parallel processing" or "MPP." I don’t know about you, but I definitely think “massive” sounds a lot bigger than “big.”  

Big, large, massive…regardless of the adjective used, database companies have been in the “big data” business for years. But “big data” today is NOT the same as “big data” from ten years ago. Today, there has been a full-blown explosion of data.

One of my customers recently said, “If you ask 50 people what ‘Big Data’ is, you’ll get 50 answers.” My goal in this series of blogs is to explain the evolution of big data and help clarify some of the confusion.

Big Data encompasses many areas, starting with internal data in both transactional and analytical systems.

In transactional systems, the data is constantly changing and being updated. In analytical
systems, the data warehouse is typically updated once a day or sometimes more frequently. In analytical systems such as the enterprise data warehouse, companies are analyzing data, attempting to learn more about their customer, their buying patterns, their behaviors and how to best market to them.

Simply put, analytical systems are designed to help “manage” your business.
Transactional systems are designed to “run” your business.
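That structural difference can be sketched with SQLite. The table shapes below are invented minimal examples: a normalized pair of tables optimized for data entry, versus a denormalized fact table optimized for retrieval.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Transactional shape: normalized tables, built for fast, safe data entry.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust_id INT, ts TEXT)")
cur.execute("CREATE TABLE order_lines (order_id INT, sku TEXT, qty INT)")

# Analytical shape: a denormalized fact table, built for fast retrieval.
cur.execute("CREATE TABLE sales_fact (day TEXT, sku TEXT, units INT)")
cur.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)",
                [("2013-06-01", "A", 5), ("2013-06-01", "B", 3),
                 ("2013-06-02", "A", 2)])

# A management question is answered in one aggregate query against the fact:
total = cur.execute(
    "SELECT SUM(units) FROM sales_fact WHERE sku = 'A'").fetchone()[0]
# total == 7
```

The same question asked against the normalized order tables would need joins across order headers and lines, which is exactly the kind of work an analytical model does once, up front.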

The big data explosion started with data from applications designed to run your business. Mainframes were first on the scene. ERP (Enterprise Resource Planning) applications really took off in the 80's and 90's. Companies like SAP, Oracle, Microsoft, SAS, Infor, JDA and JDE all offer ERP solutions. They include applications for manufacturing, logistics, invoicing, order placement, call centers, etc. These applications also have reports associated with each of their modules.

In the early 90's, companies started getting serious about using data to improve knowledge, business processes and profits. All the buzzwords back then evolved from Decision Support Systems (DSS) to Executive Information Systems (EIS) to data warehousing and business intelligence. Unfortunately, I'm old enough to remember these things.

Over the past ten years, we started seeing cooperation and data sharing between partners. I once felt like a missionary in the consumer goods space back in the 90’s trying to explain the value of sharing point of sale data. Retailers thought I was crazy and manufacturers said it will never happen. But today, they finally get it. Today they understand the value. Some more than others, but it’s finally caught on.

As more and more outside partners and vendors began offering new data and insights, we were able to leverage that data through an architecture that allows new data sources to be integrated into a company's existing data warehouse. That's why Relational Solutions always stresses the importance of a solid infrastructure.

Some of these outside data sources include point of sale data, EDI files, syndicated data from companies like IRI, Nielsen and NPD, panel data, demographic data, currency conversions, weather trends and other sources.

New data sources are being shared every day, including loyalty data and emerging market data. Wholesalers, distributors, brokers and other selling partners are also starting to share data (not just reports).

Most companies just bought reports in the past. They didn't understand the full value of having an infrastructure in place to leverage all data, including future data, but that is starting to change. In the past, business users who had budget would just go out and buy reports. Infrastructure didn't matter to them and they didn't understand it. But as the market evolves, companies and people are maturing in their understanding of how important it is to have a big data infrastructure, and more and more we see IT involved in those decisions.

Now, the latest evolution of “Big Data.” Combined, it is all big data, but in the pure sense of how software companies refer to Big Data today, they are mainly talking about unstructured data on the web.

See my next blog, "Big Data Part 1" as a follow up to this blog, “Before Big Data.” 

 

Data Marts and Data Warehouses and Big Data

 

Data marts and data warehouses are often confused. Simply put, an enterprise data warehouse is the union of all marts, but that depends greatly on the underlying architecture.

 

A data mart can be a stand-alone reporting solution, or it can be soundly integrated into an enterprise data warehouse.

 

Relational Solutions has been building enterprise data warehouses since the mid 90’s and pioneered the concept of an incremental, iterative approach.

 

This approach allows companies to get a fast ROI (return on investment) that addresses the immediate needs of the business users. It also provides a foundation that allows you to get incremental benefits as new data is integrated. The design withstands the test of time and lets the data warehouse grow with your business and with the evolution of new data sources, including #bigdata.

 

Unfortunately, most data marts were designed as one-off reporting solutions. When designed as stand-alone solutions, they are often referred to as “stove pipes” or “silos” of information.

 

Today, I hear some so-called expert CPG industry analysts use these terms as if they were some new concept. These are not new concepts or new terms; they are just new to these so-called experts, who are finally understanding what we've been preaching for years about the importance of architecture.

 

Data warehousing consultants have used these terms since the 90’s. They’re used to describe stand-alone reporting solutions. Typically these stand alone solutions are developed by individual teams or departments.

These groups develop “silo” or “stove pipe” reporting databases to achieve a specific goal they were unable to get financial approval for. If you need something you can't get approval for, you resort to building it on your own. It happens in every company and every department.

 

That said, all data marts are not created equal. Some are in Access, some in spreadsheets, some in SQL Server or Oracle. Some are silos and some are not. Data marts do not have to be silos. Designed correctly, a data mart can be integrated and should be fed from a single staging area where business rules are applied. Thus, a sound data warehouse is the union of all marts fed by a single source.

 

Having an infrastructure that stages the data, cross-references it, cleanses it, harmonizes it, and feeds it into a data model that then feeds subject-specific marts offers the best growth potential. The shared dimensions from one data mart to another provide consistency from department to department.
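A minimal sketch of shared (conformed) dimensions, with invented table shapes: two marts key their facts to one product dimension fed from the staging area, so both report the same product attributes.

```python
# One conformed product dimension, fed from a single staging area.
product_dim = {1: {"sku": "A100", "brand": "Acme"},
               2: {"sku": "B200", "brand": "Bolt"}}

# Two subject-specific marts reference the dimension by the same key.
sales_mart = [{"product_key": 1, "units": 10}]
promo_mart = [{"product_key": 1, "spend": 500.0}]

def enrich(fact_rows, dim):
    """Join fact rows to the shared dimension's attributes."""
    return [{**row, **dim[row["product_key"]]} for row in fact_rows]

sales = enrich(sales_mart, product_dim)
promos = enrich(promo_mart, product_dim)
# Both marts describe product 1 identically:
# sales[0]["brand"] == promos[0]["brand"] == "Acme"
```

If each mart instead maintained its own product list, the same SKU could carry different names or brands in different departments' reports, which is the silo problem in miniature.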

 

Relational Solutions are experts in data modeling and offer customized classes and consulting services in this area. Data modeling techniques vary depending on the target database. Data modeling is a big topic that involves too much description for this blog.

 

In short, designing the data model correctly allows business rules to be applied and data
to be accessed easily by the users. This design also maintains consistency from department to department. It also provides IT with a manageable solution that is designed to evolve over time to accommodate new data sources and new user requirements.

Companies who have a properly designed data warehouse can integrate internal data, outside data and even Big Data.

 

My next blog will start to explain big data and what makes various data sources different
today.

 

Learn More about Relational Solutions Services.

 

 

Transactional versus Analytical Business Intelligence

 

The easiest way to understand the difference between a transactional and an analytical system is to think of transactional systems as the applications designed to run your business and analytical systems as those designed to manage your business.

Applications like SAP, Oracle Financials, JD Edwards and JDA, for example, are transactional applications. They provide reports, but those tend to be reports on their own systems unless you separately acquire their data warehouse modules. And in most cases, even their data warehouse modules handle their own data better than other data sources. In general, ERP (Enterprise Resource Planning) systems are modeled for data entry. They are updated constantly throughout the day.

Reports derived from these systems are reports designed to understand what is going on at this moment. For example, what time did my last truck leave? Is my manufacturing formula set correctly today? What did that last customer complain about? They answer the "What?" not the "What if?" questions. 

These are transactional reports, coming from transactional systems. They are the reports required to run your day-to-day operations. A report pulled from a transactional system at noon will produce different results than a report pulled at 12:01 because the operational system is constantly changing. Even reports pulled simultaneously will likely produce different results. That is because in a transactional system, each query can take a different path, and you never know who might be updating the system at any point in time.

We call this the “twinkling database effect.” That is because the data is constantly changing.

These “twinkling databases” are fine for pulling operational reports. But trying to produce an analytical report from a transactional system is not wise.

First, the data is formatted for data entry, not data retrieval, so it can take days to query the system. In addition, an ad-hoc query against a transactional system will affect the performance of that operational system. It will also negatively affect the end users on the system. The last thing you want to do is make it difficult for people to enter orders; that could have a direct, negative impact on sales, not to mention the time-wasting effect on other job functions.

Analytical queries against a transactional system will put an undue burden on your network. In addition, they will return inconsistent, and oftentimes inaccurate, results.

That is why data warehouse solutions became a necessity. Transaction systems are designed to RUN the business; data warehouses are designed to help MANAGE the business.

The data warehouse is modeled in a way that business users can easily find and retrieve the data they need. The underlying infrastructure of an enterprise data warehouse (EDW) offers an architecture that will align data, provide business rules and accommodate growth and change in an iterative manner.

Query tools allow for easy analysis and business intelligence. Users need fast access to reliable information with the flexibility to change the view. They need to be able to drag, drop, drill, sort, compare and ultimately learn and act on the information they are receiving.

More and more, we are hearing business analysts referred to as “Data Scientists.” This is because today they should have the capability to think outside the box with the information available to them. Rather than spending their day gathering and cobbling together reporting information, they can be freed up to analyze it. Today, data integration can be automated and the data put into a usable format for exploration.

By leveraging ALL your data, companies and their Data Scientists can understand not only WHAT is happening, but WHY!

The data warehouse is fed by the operational system and typically updated on a nightly basis, sometimes more often. More and more, we see the data warehouse also being fed by other outside sources. Relational Solutions advocates leveraging all the information you have access to. Unfortunately, not all data warehouses are created equal, so it's not always as easy as it sounds.
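A minimal sketch of such a nightly batch load, with hypothetical row shapes and one simple harmonization rule (upper-casing SKUs), shows the extract-transform-load pattern in miniature:

```python
def nightly_load(operational_rows, warehouse, as_of):
    """Minimal nightly batch: pull one closed business day from the
    operational system, harmonize it, and append it to the warehouse.
    Row shapes and the harmonization rule are invented for this sketch."""
    for row in operational_rows:
        if row["date"] != as_of:
            continue                              # load only the closed day
        warehouse.append({"date": row["date"],
                          "sku": row["sku"].upper(),   # harmonize SKU casing
                          "units": row["units"]})
    return warehouse

wh = nightly_load([{"date": "2013-06-01", "sku": "a100", "units": 4},
                   {"date": "2013-06-02", "sku": "a100", "units": 6}],
                  [], as_of="2013-06-01")
# wh holds one harmonized row for June 1; June 2 waits for tomorrow's run
```

Real loads add staging, cross-referencing and error handling, but the shape is the same: the warehouse receives a cleaned, closed snapshot rather than the twinkling operational data.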

I'm pointing out these differences because this is all background information needed to understand how companies can use big data. Big data creates a potentially “fuzzy area” for reporting, depending on how it's defined.

In my next blog I'll explain the difference between data marts and data warehouses, and how evolving data sources such as "big data," should be leveraged in your enterprise data warehouse and offer more business intelligence.

 

What Is Business Intelligence and Why Do We Need It?

 

Business Intelligence, what is it and why do we need it?

Business intelligence is the ability to make “fact-based decisions” based on reliable, integrated data.

Business intelligence leverages data to provide you with reports and information, and allows users to move away from “hunch-based decisions” to “fact-based decisions.”

Business intelligence can come from both transactional and analytical reports. But in the true sense of analytical business intelligence, the data is derived from an enterprise data warehouse. Designed correctly, we call it “The Truth Database.”

Business intelligence can arguably also be derived from “stove pipe” solutions. These are point solutions typically developed within a department to answer specific questions. It can also be argued that ERP reports provide business intelligence. Business intelligence can also be derived from reports that end users had to manually integrate in order to develop reports for management, buyers or others.

Various types of reports are delivered through different means. As purists, we believe
analytical business intelligence should be derived from the data warehouse. But again,
that doesn’t mean all reports are the same. We also recognize that even business intelligence reports, designed for managing the business are derived in a multitude of ways.

Do we really need business intelligence? Back in the 90's, business intelligence (BI) was considered a “luxury.” That's because data warehouses were very costly to build and BI tools were very expensive to license. Today, business intelligence is NOT a luxury; it is a necessity. Your competitors are understanding more about their business, and you must as well in order to maintain your competitive advantage.

Companies in the 90’s built data warehouses to GAIN a competitive advantage. Today it’s needed just to MAINTAIN your competitive advantage. Those companies who were visionaries in the 90’s and recognized business intelligence as a way to achieve competitive advantage are the same companies today who are leveraging big data from other sources to GAIN a competitive edge.

At Relational Solutions we believe it's important to note that operational reports are not the same as Analytical reports. That will be described in my next blog. But this series of blogs will discuss the evolution of big data.

A true analytical business intelligence application is designed to support management decisions. It includes reports derived from a single, queryable source of reliable, integrated data, fed by a single staging area, with applicable business rules established for your business users. In an ideal world, business intelligence is derived from a data warehouse designed as the union of all marts. The presentation layer should offer fast access to information that's easily understood by the end users.

Look for my next blog that describes the differences between transactional and analytical reports and delves deeper into the evolution of big data.

 
