A rotated array is a combination of two sorted arrays, so the first step is to find the index (the "pointer") at which the array is rotated. We then split the input into two sorted subarrays. If the search key lies in the left subarray, we run binary search on the left array; otherwise we run binary search on the right array.

 


Code:

int binarySearch(int[] in, int x) {
    int low = 0, high = in.length - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (x > in[mid])
            low = mid + 1;
        else if (x < in[mid])
            high = mid - 1;
        else
            return mid;
    }
    return -1;
}

 

int binarySearchRotate(int[] in, int x) {
    int low = 0, high = in.length - 1;
    int start = in[0], pointer = 0, k = 1, val = -1;

    // Loop to find the pointer of where the array is rotated
    // (bounded by in.length so a non-rotated array doesn't run off the end)
    while (k < in.length && start < in[k])
        pointer = k++;
    pointer++;

    // Split the array into two sorted halves at the pointer, like in merge sort

    // Store indexes 0 to pointer-1 in the left array
    int[] left = new int[pointer];
    for (int i = 0; i < left.length; i++)
        left[i] = in[i];

    // Store indexes pointer to in.length-1 in the right array
    int[] right = new int[in.length - pointer];
    for (int j = pointer; j < in.length; j++)
        right[j - pointer] = in[j];

    // If the search value falls within the left half, binary-search the left array
    if (x >= in[low] && x <= in[pointer - 1])
        val = binarySearch(left, x);

    // If the search value falls within the right half, binary-search the right array
    // and offset the result by the pointer to map it back to the input array
    else if (pointer < in.length && x >= in[pointer] && x <= in[high]) {
        int r = binarySearch(right, x);
        val = (r == -1) ? -1 : pointer + r;
    }

    return val;
}
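To sanity-check the two methods, here is a small driver with hypothetical values (it assumes both methods are declared static in the same class):

public static void main(String[] args) {
    int[] rotated = {4, 5, 6, 7, 1, 2, 3};              // sorted array rotated at index 4
    System.out.println(binarySearchRotate(rotated, 6)); // prints 2
    System.out.println(binarySearchRotate(rotated, 2)); // prints 5
    System.out.println(binarySearchRotate(rotated, 8)); // prints -1 (not found)
}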

 

-- Builds a four-level category hierarchy (device_manufacturer > device_model > make > model)
-- of Bluetooth failure counts. Each UNION branch produces one level of the hierarchy, and the
-- four branches are repeated for the two Path values (1 and 203).

SELECT v."device_manufacturer" AS "Parent Category Label",
       v."device_manufacturer" AS "Product Category",
       1 AS "Level",
       1 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."device_manufacturer"

UNION

SELECT v."device_model" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" AS "Product Category",
       2 AS "Level",
       1 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."device_model", v."device_manufacturer"

UNION

SELECT v."make" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" || ' > ' || v."make" AS "Product Category",
       3 AS "Level",
       1 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."make", v."device_model", v."device_manufacturer"

UNION

SELECT v."model" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" || ' > ' || v."make" || ' > ' || v."model" AS "Product Category",
       4 AS "Level",
       1 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."model", v."make", v."device_model", v."device_manufacturer"

UNION

SELECT v."device_manufacturer" AS "Parent Category Label",
       v."device_manufacturer" AS "Product Category",
       1 AS "Level",
       203 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."device_manufacturer"

UNION

SELECT v."device_model" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" AS "Product Category",
       2 AS "Level",
       203 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."device_model", v."device_manufacturer"

UNION

SELECT v."make" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" || ' > ' || v."make" AS "Product Category",
       3 AS "Level",
       203 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."make", v."device_model", v."device_manufacturer"

UNION

SELECT v."model" AS "Parent Category Label",
       v."device_manufacturer" || ' > ' || v."device_model" || ' > ' || v."make" || ' > ' || v."model" AS "Product Category",
       4 AS "Level",
       203 AS "Path",
       SUM(v."failure_count") AS "Failure Count"
FROM "public"."vw_bluetooth_failure_by_device_by_profile" v
GROUP BY v."model", v."make", v."device_model", v."device_manufacturer"

 

/***********
UUID     TIMESTAMP
uuid1      2016-01-01 21:10:05.123
uuid1      2016-01-01 21:12:05.123
uuid2      2016-01-01 21:14:12.433
uuid1      2016-01-01 21:12:25.123
uuid3      2016-01-01 21:14:12.433
uuid2      2016-01-01 21:18:12.433
uuid1       2016-01-01 22:22:25.123
Write an algorithm that generates a list of user sessions.
e.g. assuming a session timeout of 30 minutes, the above input results in 4 user sessions:

uuid1     2016-01-01 21:10:05.123     –   2016-01-01 21:12:25.123
uuid1     2016-01-01 22:22:25.123    –
uuid2    2016-01-01 21:14:12.433     –   2016-01-01 21:18:12.433
uuid3    2016-01-01 21:14:12.433     –

***********/

// Requires java.util.{ArrayList, HashMap, List, Map}
List<String> sessionizer(String[] uuid, long[] timestamp) {

        final long TIMEOUT = 30 * 60 * 1000;   // 30 minutes in milliseconds

        // Map of uuid -> timestamp that opened the current session
        Map<String, Long> in = new HashMap<String, Long>();

        // Map of uuid -> latest timestamp seen inside the current session
        Map<String, Long> newTimestamp = new HashMap<String, Long>();

        // Output list of sessions
        List<String> out = new ArrayList<String>();

        in.put(uuid[0], timestamp[0]);

        for (int i = 1; i < uuid.length; i++) {

                // Check if the uuid already has an open session
                if (in.containsKey(uuid[i])) {

                        // Gap between this event and the previous event of the same uuid
                        long last = newTimestamp.containsKey(uuid[i])
                                ? newTimestamp.get(uuid[i]) : in.get(uuid[i]);
                        long difference = timestamp[i] - last;

                        // Gap within 30 minutes: extend the current session
                        if (difference < TIMEOUT) {
                                newTimestamp.put(uuid[i], timestamp[i]);
                        }

                        // Gap between successive timestamps of the same uuid exceeds 30 minutes
                        else {
                                // Add the starting and (if any) ending timestamp to the output
                                out.add(uuid[i] + "     " + in.get(uuid[i]) + "   -   "
                                        + (newTimestamp.containsKey(uuid[i]) ? newTimestamp.get(uuid[i]) : ""));

                                // Start a new session for this uuid at the current timestamp
                                in.put(uuid[i], timestamp[i]);

                                // Remove the last-seen timestamp of the closed session
                                newTimestamp.remove(uuid[i]);
                        }
                }

                // If the uuid is not present in the map, open a session for it
                else {
                        in.put(uuid[i], timestamp[i]);
                }
        }

        // Iterate through the maps to flush the sessions still open at the end of the input
        for (String s : in.keySet()) {
                if (newTimestamp.get(s) != null)
                        out.add(s + "     " + in.get(s) + "   -   " + newTimestamp.get(s));
                else
                        out.add(s + "     " + in.get(s) + "   -");
        }

        return out;
}
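A minimal driver for the sample input above, with the timestamps parsed to epoch milliseconds via SimpleDateFormat (the class wrapper and formatting are illustrative; it assumes the sessionizer method above lives in the same class):

import java.text.SimpleDateFormat;
import java.util.*;

public class SessionizerDemo {

    // ... sessionizer(String[] uuid, long[] timestamp) from above goes here ...

    public static void main(String[] args) throws Exception {
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        String[] uuid = {"uuid1", "uuid1", "uuid2", "uuid1", "uuid3", "uuid2", "uuid1"};
        String[] events = {
                "2016-01-01 21:10:05.123", "2016-01-01 21:12:05.123",
                "2016-01-01 21:14:12.433", "2016-01-01 21:12:25.123",
                "2016-01-01 21:14:12.433", "2016-01-01 21:18:12.433",
                "2016-01-01 22:22:25.123"};
        long[] timestamp = new long[events.length];
        for (int i = 0; i < events.length; i++)
            timestamp[i] = f.parse(events[i]).getTime();

        // Prints one line per session: four sessions for this input
        for (String session : new SessionizerDemo().sessionizer(uuid, timestamp))
            System.out.println(session);
    }
}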

Finding duplicates is usually done either by brute force or with collections. Here is a solution that uses neither and is very efficient once the array is sorted.


// Requires java.util.Arrays
int[] withoutSet(int[] in) {

    int[] val = new int[in.length];    // stores each duplicate value from the input array
    int[] count = new int[in.length];  // counts the occurrences of each duplicate value
    int j = 0, k = 0, index = 0, sum = 1;

    Arrays.sort(in);   // sort the input so equal elements sit next to each other

    for (int i = 0; i < in.length - 1; i++) {

        // Since the array is sorted, this branch keeps running until a new number appears.
        // For example, with 1,1,1,1,2,3,3,4 it stays here with val[k] = 1 until sum = 4.
        if (in[i] == in[i + 1]) {
            val[k] = in[i];       // remember the duplicated value
            sum = sum + 1;        // accumulate its occurrences
            count[k] = sum;       // record the total occurrences so far
            if (sum == 2)         // count each duplicate value only once
                index = index + 1;   // index tracks the length of the output array
        }

        // A new number was found, so move to the next slot and reset the counter
        else {
            k = k + 1;
            sum = 1;
        }
    }

    int[] out = new int[index];   // output array sized by the number of duplicate values

    for (k = 0; k < count.length; k++) {   // walk the parallel count/val arrays
        if (count[k] > 0)          // count[k] stays 0 for unique elements
            out[j++] = val[k];     // keep only the values that occurred more than once
    }

    return out;
}
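A quick call-site check with a hypothetical input (assuming java.util.Arrays is imported):

int[] dups = withoutSet(new int[]{4, 1, 3, 1, 2, 1, 3, 1});
System.out.println(Arrays.toString(dups));   // sorted to 1,1,1,1,2,3,3,4 -> prints [1, 3]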

ABSTRACT

This project creates a relational data model for the Company database from a given scenario and set of business rules, then runs SQL queries against Oracle to test the database. The entire system was modeled with entity-relationship diagrams, and complex SQL queries were generated.

DatabaseDesignProject_Stanford_Coursera

List of Hotels with Map View

Dashboard – Ratings in Bar Chart and Sentiment Analysis in Pie Chart

Trend Chart Analysis

Reviews from Customers

 

 

 

Code for this project: https://github.com/Arunprakash1990/Hotel-Inspection-Prediction-System

 

Introduction and Motivation:

Data is generated on a large scale from every aspect of daily life. There is a pressing need to understand this data, and analytics plays a central role in doing so, allowing businesses to gain valuable insight and improve their services. As a result, analytics is required in every domain, whether product recommendations or hotel inspections. Big data analytics, however, has so far played a very minimal role in the field of hotel inspections.

There exists a huge amount of hotel data, both online and on paper. Because of their customer reviews, most travel-planning websites and travel guides are vast sources of information for people intending to travel on business or for pleasure. Currently, very few websites analyze this vast set of data, and the most common form of analysis is locating the best hotels in a particular location.

Our motivation in analyzing this data is slightly different. Travel guides generally publish hotel recommendations based on hotel inspectors' reviews and feedback. Most hotels are evaluated by inspectors once or twice a year. These inspectors rely on experience to rank hotels, but the rankings can be biased: most hotels make it their business to know when an inspector will be inspecting, and the hotel's services run at maximum efficiency while the inspector is there. This leads to a favourable review that may not reflect reality. However, travel websites hold a vast repository of feedback from both satisfied and disgruntled customers for each of these hotels, and these customers paint a more accurate picture of a hotel's services. This information can be used to provide a detailed analysis of the hotels that inspectors can use during their inspections.

On the basis of the feedback given by customers, hotel inspectors can make informed decisions during their inspections about which aspects of a hotel or its service are lacking. These reviews can also help generate a priority list of hotels requiring immediate inspection. For example, suppose Hotel ABC has X customers complaining about the uncleanliness of its restaurant. On analyzing the data, Hotel ABC appears first on the priority list of hotels to be inspected. This tells the inspector that Hotel ABC requires immediate inspection with more focus on evaluating the hygiene conditions in the restaurant, even though this problem didn't seem to be present on previous inspections. Thus, these reviews make inspection much easier for inspectors, as the results indicate which aspects of the hotel require more stringent scrutiny.

Challenges Faced:
1. Presence of duplicate hotels

While parsing the data, we found there were a large number of duplicate hotels in the dataset that were assigned to unique hotel ids. The presence of such duplicate hotels made it difficult to analyse the exact sentiment of the reviews towards the hotels. We were required to clean such duplicate data in the pre-processing stage.

2. Large number of hotel reviews

The TripAdvisor dataset we used had many reviews for each hotel in a single file. It was necessary to parse each hotel's data in order to retrieve the content of every review belonging to that hotel.

3. Reviews conflicting with the rating

A customer in a hotel may give a rating of 4 stars out of 5 stars but the customer’s review might mention that they were disappointed with the restaurant or some other feature of the hotel. This makes it difficult to gauge the true sentiment of the customer.

4. Coming up with the idea

When we were coming up with an idea for the project, we decided to implement something that doesn't already exist as a tool. This proved to be a huge challenge, as most domains these days already use big data to perform some sort of analysis. We had to brainstorm quite a bit to come up with something new.

We went through certain publicly available datasets and came up with the idea of an analysis tool for hotel inspection. The task of inspecting hotels may seem routine, and you'd probably wonder what kind of analysis could help this profession. Surprisingly, we found that, because it indirectly impacts the travel and tourism industry of a place, analysis was quite necessary to improve people's travel experience.

5. Getting the data

Having said the above, it was an equally difficult task to get data that matched our idea and was also of considerable size, so that better analysis could be performed on it. Since our domain was new, we had to improvise with the data we had: hotel reviews.

6. Data Preprocessing

With any big data project, the most time-consuming and challenging part is pre-processing the data to match the requirements of the system. The data we used consisted of a few hundred reviews per hotel, for hotels across the world, separated across different files and written in JSON format (a further description of the data is given below). The challenge was to parse this data into separate elements and store it in a database for easy extraction.

7. Sentiment Analysis

Our reviews were large texts written by users, and they were not always perfect, simple English statements. We had to find a way of classifying the reviews as positive or negative and also of finding out what exactly the users considered good or bad about a hotel, so accurately analysing people's sentiments was a challenge. In addition, our data was unlabelled, so we had to build a semi-supervised learning model for sentiment analysis.

8. Visualization

Now that we had the data, we had to think from the perspective of the hotel inspector using this tool. To fulfill a hotel inspector's requirements, we had to come up with our own ideas on what he or she might expect from such a tool, which resulted in us creating an overall as well as a breakdown analysis of the hotel in question.

Approach:

Since we knew very little of this domain to begin with, we resorted to a very reliable hotel inspection checklist provided by the AAA. It gave us a detailed list of what a hotel inspector checks for in a hotel.
The following is a link to the checklist we followed: http://www.ncdsv.org/images/hotelinspectionchecklist.pdf

From this, we extracted a set of features that the users have rated in the reviews and based on this checklist, we assigned scores to each feature to predict what the users look most for in a hotel they plan to stay.

The top 5 criteria are as follows:

        1. Cleanliness

        2. Service

        3. Location

        4. Internet and other facilities

        5. Price

We used an existing approach (based on a research paper) for opinion mining and visualization of the reviews.

The main reasons for choosing this approach are as follows:

1. This approach made better utilization of the data as compared to other approaches.

2. Naive Bayesian classification produced better sentiment analysis results, as reported in the paper.

3. We managed to generate better ideas for visualization of our results.

The following steps are involved in our approach:

1. Data cleaning and loading (MapReduce)

The first step in our approach involves cleaning the data and loading it into a database. Each JSON file was parsed using MapReduce. This step consists of two parts: data cleaning and data extraction.

a. Data Pre-processing

Data preprocessing involves cleaning and transforming the obtained data as per our requirements. The preprocessing techniques we used are as follows (a small sketch of these steps in Java follows the list):

1. Removal of HTML tags: HTML tags (e.g. <a></a>) need to be removed, as they do not contribute to classification.

2. Replacement of ',' and '…' with white spaces.

3. Removal of punctuation: punctuation (e.g. ?, !) should be removed from the data.

4. Removal of additional white spaces: any trailing white spaces present in the data need to be removed.

5. Conversion to lowercase: converting the data to lowercase makes the reviews uniform.

6. Removal of numbers: e.g. in XYZ27, the number 27 contributes nothing during sentiment analysis and can be removed.

7. Removal of stopwords: commonly occurring words (stopwords) should be removed; pronouns, conjunctions and prepositions are typical examples (e.g. a, is, the).

8. Removal of duplicates: the duplicate hotels in the data should be removed, as they conflict with the results.
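A minimal sketch of these cleaning steps in plain Java; the regexes, the tiny stopword set, and the exact ordering are illustrative assumptions, not the project's actual code:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReviewCleaner {

    // Illustrative stopword set; a real list would be much larger
    private static final Set<String> STOPWORDS =
            new HashSet<String>(Arrays.asList("a", "an", "is", "the", "and", "of"));

    public static String clean(String review) {
        String s = review
                .replaceAll("<[^>]*>", " ")     // step 1: strip HTML tags
                .replaceAll("[,…]", " ")         // step 2: commas and ellipses to spaces
                .replaceAll("\\p{Punct}", " ")  // step 3: remove punctuation
                .toLowerCase()                   // step 5: lowercase
                .replaceAll("\\d+", " ")        // step 6: remove numbers
                .replaceAll("\\s+", " ")        // step 4: collapse extra white space
                .trim();
        StringBuilder out = new StringBuilder();
        for (String word : s.split(" "))         // step 7: drop stopwords
            if (!STOPWORDS.contains(word))
                out.append(word).append(' ');
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(clean("The room <b>WAS</b> great, but... WiFi27 is slow!"));
        // prints: room was great but wifi slow
    }
}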

b. Data Extraction

For each file (i.e. each hotel), we extract data such as the reviews and the hotel information. Each review contains the individual ratings given by a customer, the overall rating, the publication date, and the actual content of the review. The hotel information contains the hotel name, the URL of the hotel's website, the address (street and city data), and the hotel id.

2. Sentiment classification of the reviews (Mahout)

The data we obtained did not have labels for classification. Along with manually labelling the reviews, we also used the ratings to label them; thus, we used a semi-supervised approach to learning. The model we used for classification is Naive Bayes, and the classifier was built from these labelled reviews in Mahout (a sketch of the rating-based labelling idea is given below).
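A minimal sketch of rating-based labelling; the 4-star and 2-star thresholds are illustrative assumptions, not the exact rule used in the project:

// Maps an overall star rating (1-5) to a sentiment label for training,
// leaving mid-range ratings unlabelled so they don't pollute the training set.
static String labelFromRating(int stars) {
    if (stars >= 4) return "positive";
    if (stars <= 2) return "negative";
    return null;   // ambiguous: leave for manual labelling
}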

3. Analyze the overall sentiment towards the hotel

Based on the data obtained by manually labelling the reviews using the overall ratings given, we trained the model by converting the training data to sequence files, then to sparse vectors, using TF-IDF weighting. When tested, the model gave an accuracy of 80%, which was good enough for our classifier.

Using this classifier, we calculated the sentiment of the reviews for all the hotels and found the total number of positive and negative reviews for each hotel, which we stored in a CSV file for visualization. The results were not stored in MongoDB, as visualization with d3.js worked better with CSV files than with values from MongoDB.

This overall sentiment analysis of the hotels helped in building the priority list for hotel inspection. We did face a few challenges here, as the hotels differed from each other in their total number of reviews. For example, there were hotels with around 500-800 reviews of which 50% were negative, while a few had fewer than 10 reviews, all of them negative. The second case gives 100% negativity, yet we should prioritize the first case.

To handle this, we weighted the sentiment scores, assigning a higher score to hotels that had more than the average number of reviews and a larger number of negative reviews:

Weighted Percentage of Negativity
= (negative reviews / total reviews) × (total reviews / average reviews) × 100
= (negative reviews / average reviews) × 100
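Plugging in the earlier example with illustrative numbers: assuming a fleet-wide average of 100 reviews per hotel, a hotel with 800 reviews and 400 negative ones scores 400, while a hotel with 10 reviews, all negative, scores only 10, so the first is correctly prioritized:

// Weighted negativity: (negative reviews / average reviews per hotel) * 100
static double weightedNegativity(int negativeReviews, double avgReviewsPerHotel) {
    return negativeReviews / avgReviewsPerHotel * 100.0;
}

// Illustrative numbers, assuming an average of 100 reviews per hotel:
// weightedNegativity(400, 100) -> 400.0  (800 reviews, 50% negative)
// weightedNegativity(10, 100)  -> 10.0   (10 reviews, 100% negative)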

4. Analyze the sentiment towards individual features of the hotel

We extracted the sentiment towards individual features using the ratings users gave for the hotel's features. Users have rated many features of each hotel, and we calculated the sentiment towards a feature as the average rating for that particular feature. We plan to use the review text itself for feature extraction in future work.

5. Visualize the data(JavaScript,D3.js and jChartFX)

The last step of our approach was building a simple and effective front-end visualization where a hotel inspector can immediately obtain the information they need at first glance. The front-end visualizations include the priority list of hotels to be inspected and a world map that depicts the count of priority-list hotels at each location. You can drill down on a hotel in the priority list to visualize its overall ratings, overall sentiment, changing trends, and reviews.

Tools/Technologies Used:

1. Apache Mahout:

Apache Mahout is a scalable machine-learning library that supports big data sets. From this library we used the Naive Bayes algorithm as the classification model for sentiment analysis; it is well suited to medium-sized datasets that contain a lot of text.

2. MapReduce:

JSON is an industry-standard data interchange format. To parse the data, we fed the JSON files as input to the mappers, then transformed the data to extract the values required for sentiment analysis. Normally the transformed data would be sorted, merged, and presented to a reducer; since there is no common key here, the job has no reduce step and the output is written directly to the file.
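A minimal sketch of such a map-only job using Hadoop's Java API; the "content" field and the naive string-based extraction are illustrative assumptions, as the project's real job used a proper JSON parser and its own field names:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReviewExtractJob {

    public static class ReviewMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Naive illustration: pull a hypothetical "content" field out of one
            // JSON record per line; a real job would use a JSON parser
            String line = value.toString();
            int start = line.indexOf("\"content\":\"");
            if (start < 0) return;
            start += "\"content\":\"".length();
            int end = line.indexOf('"', start);
            if (end < 0) return;
            context.write(new Text(line.substring(start, end)), NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "review extract");
        job.setJarByClass(ReviewExtractJob.class);
        job.setMapperClass(ReviewMapper.class);
        job.setNumReduceTasks(0);   // map-only: output is written straight to files
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}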

3. JavaScript, D3.js and jChartFX:

We used JavaScript to incorporate dynamic interactivity in our project. D3.js is a dynamic JavaScript library used to create interactive data visualizations in web browsers; on our front end, we used it to display the priority list and the world map. jChartFX is a data-visualization tool for HTML5, jQuery, and JavaScript; we used it to visualize the overall ratings, overall sentiments, and rating trends of a particular hotel.

Implemented System:

The system we built is an analysis tool based on customer reviews and ratings. We focused on negative-review classification in order to determine which hotels require inspection. The tool provides a hotel inspector with a priority list of hotels for immediate inspection, and the world map makes it easy to identify the locations with the highest number of hotels needing immediate inspection. The system also provides an analysis of a hotel's notable features and the customers' sentiment towards those features, and the inspector has access to all the reviews on the dashboard. Finally, a trend chart lets the inspector see the fluctuations between positive and negative ratings over a span of one year.

Analytics with connected cars

Almost any application involves data, especially e-commerce applications. Such applications take data as input and produce more data as a result, thereby turning the application into a data product. The term big data refers to any collection of data sets that grows over time and cannot be managed by traditional databases.

These traditional database systems are not effective at today's scale of operation. The main aim of the relational database architecture was to provide consistency for complex transactions, which can easily be rolled back if any part fails; but managing replication between the data servers involved in a transaction is difficult.

Data sets can be stored effectively in NoSQL, or non-relational, databases. Google's Bigtable and Amazon's Dynamo use this model, and Cassandra and HBase are two prominent products among the many established in this space. Google introduced the MapReduce approach, which uses a divide-and-conquer strategy to distribute a large problem across a large cluster so it can be solved effectively.

As the name implies, MapReduce has two phases: map and reduce. The first phase processes the input data item by item and transforms it into an intermediate set. In the second phase, the intermediate set is reduced to summarised sets, which are the desired end product. An example that illustrates this process is counting the unique words in a document: in the map phase each word is emitted with a count of 1, and in the reduce phase the counts for each word are added together.
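The canonical word-count job in Hadoop's Java API illustrates the two phases (this is the standard textbook example, not code from the article):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every word in the input
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: add up the counts for each word
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}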

There are three distinct operations in MapReduce:

1. Loading the data
2. Running map reduce
3. Extracting the result

Hadoop is the open-source implementation of the above. It is used effectively in data analysis and fits well with agile practices. At its core it is a batch system, with tools to monitor and control running jobs; systems layered on top can process data upon arrival and show results in near real time. These are useful for publishing the trending data one sees on Twitter; they offer only soft real-time reports, since trending does not require millisecond accuracy.

Another tool used by data scientists is machine learning. Mobile and web applications have started to incorporate artificial intelligence in features such as face detection and picture recognition (e.g. Google Goggles). There are many libraries for machine learning, such as PyBrain in Python and Weka in Java.

Amazon's Mechanical Turk is an important tool used in machine learning: data is labelled by humans in human-readable terms to build training sets that machines can then learn from.

The main problems of the explosion of big data in companies are called the three Vs: volume, velocity, and variety. Data keeps growing as time progresses, so the capacity needed to store the data sets increases (volume), which also increases the processing time required for deliverables (velocity); in addition, the data arrives in a wide range of formats (variety).

In-memory computing applies big data concepts to move the data closer to the processors. In traditional data analytics, the data streams are too slow to process; in-memory computing addresses this problem. In other words, it addresses the volume and velocity problems of big data.

 

References

Big Data Now: Current Perspectives from O'Reilly Radar. (2011). Sebastopol, CA: O'Reilly Media.

Ohlhorst, F. (2013). Big Data Analytics: Turning Big Data into Big Money. Hoboken, NJ: John Wiley & Sons.

Big data. (n.d.). Retrieved January 12, 2015, from http://en.wikipedia.org/wiki/Big_data

 

 

 
