
The Big Data Institute
Premier source for IoT, Big Data, Data Science and Advanced Analytics

Tag Archives: IBM

Addressing big data security!
SEPTEMBER 5, 2013 BY SUSHIL (NICK) PRAMANICK

Data Security (http://en.wikipedia.org/wiki/Data_security) rules have changed in the age of Big Data (http://en.wikipedia.org/wiki/Big_data). The V-Force (Volume, Veracity and Variety) has changed the landscape for data processing and storage in many organizations. Organizations are collecting, analyzing, and making decisions based on massive data sets from various sources: web logs, clickstream data and social media content, in order to gain better insights about their customers. Security in this process is becoming increasingly important.

IBM (http://www.google.com/finance?q=LON:IBM) estimates that 90 percent of the data that now exists has been created in the past two years. A recent study conducted by Ponemon Institute LLC in May 2013 showed that the average number of breached records was 23,647. German and US companies had the most costly data breaches ($199 and $188 per record, respectively). These countries also experienced the highest total cost (US at $5.4 million and Germany at $4.8 million). On average, Australian and US companies had data breaches that resulted in the greatest number of exposed or compromised records (34,249 and 28,765 records, respectively).

A Forrester report, "The Future of Data Security and Privacy: Controlling Big Data", observes that security professionals apply most controls at the very edges of the network. However, if attackers penetrate your perimeter, they will have full and unrestricted access to your big data. The report recommends placing controls as close as possible to the data store and the data itself, in order to create a more effective line of defense. Thus, if the priority is data security, then the cluster must be highly secured against attacks.

According to ISACA's white paper Privacy and Big Data, published in August 2013, enterprises must ask and answer 16 important questions, including these five key questions, which, if ignored, expose the enterprise to greater risk and damage:
- Can the company trust its sources of Big Data?
- What information is the company collecting without exposing the enterprise to legal and regulatory battles?
- How will the company protect its sources, processes and decisions from theft and corruption?
- What policies are in place to ensure that employees keep stakeholder information confidential during and after employment?
- What actions is the company taking that create trends that can be exploited by its rivals?

Hadoop, like many open source technologies such as UNIX and TCP/IP, wasn't originally built with the enterprise in mind, let alone enterprise security. Hadoop's original purpose was to manage publicly available information such as web links, and it was designed to format large amounts of unstructured data within a distributed computing environment, specifically Google's (http://www.google.com/finance?q=NASDAQ:GOOG). It was not written to support hardened security, compliance, encryption, policy enablement and risk management.

Here are some specific steps you can take to secure your Big Data:
- Use Kerberos authentication for validating inter-service communication and for validating application requests for MapReduce (http://en.wikipedia.org/wiki/MapReduce) (MR) and similar functions.
- Use file/OS-layer encryption to protect data at rest, ensure administrators or other applications cannot gain direct access to files, and prevent leaked information from exposure.
  File encryption protects against two attacker techniques for circumventing application security controls: it protects data if malicious users or administrators gain access to data nodes and directly inspect files, and it renders stolen files or copied disk images unreadable.
- Use key/certificate management to store your encryption keys safely and separately from the data you're trying to protect.
- Use automation tools like Chef and Puppet to help you validate nodes during deployment and stay on top of patching, application configuration, updating the Hadoop stack, collecting trusted machine images, certificates and platform discrepancies.
- Log transactions, anomalies, and administrative activity to validate usage and provide forensic system logs.
- Use SSL or TLS network security to authenticate and ensure the privacy of communications between nodes, name servers, and applications. Implement secure communication between nodes, and between nodes and applications. This requires an SSL/TLS implementation that actually protects all network communications rather than just a subset (a minimal illustration of mutual TLS appears further down in this post).
- Anonymize data to remove anything that can be uniquely tied to an individual. Although this technique can protect some personal identification, and hence privacy, you need to be very careful about the amount of information you strip out.
- Use tokenization to protect sensitive data (http://en.wikipedia.org/wiki/Data_loss_prevention_software) by replacing it with random tokens or alias values that mean nothing to someone who gains unauthorized access to it (a short sketch appears later in this post).
- Leverage cloud database controls, where access controls are built into the database to protect the whole database.
- Harden the operating system on which the data is processed to lock down the data. The four main protection focus areas should be users, permissions, services and logging.
- Use in-line remediation to update configuration, restrict applications and devices, and restrict network access in response to non-compliance.
- Use the Knox Gateway ("Gateway" or "Knox"), which provides a single point of authentication and access for Apache Hadoop (http://hadoop.apache.org/) services in a cluster. The goal is to simplify Hadoop security for both users (who access the cluster data and execute jobs) and operators (who control access and manage the cluster). A short access sketch appears at the end of this post.

A study conducted by Voltage Security (http://www.voltage.com/technology/ibe.htm) showed that 76% of senior-level IT and security respondents are concerned about the inability to secure data across big data initiatives. The study further showed that more than half (56%) admitted that these security concerns have kept them from starting or finishing cloud or big data projects.

The built-in Apache Hadoop security still has significant gaps for enterprises to leverage as-is, and to address them, multiple vendors of Hadoop distributions (Cloudera, Hortonworks, IBM and others) have bolstered security in a few powerful ways. Cloudera's Hadoop distribution now offers Sentry, a new role-based security access control project that enables companies to set rules for data access down to the level of servers, databases, tables, views and even portions of underlying files.
Its new support for role-based authorization, fine-grained authorization, and multi-tenant administration allows Hadoop operators to:
- store more sensitive data in Hadoop,
- give more end users access to that data in Hadoop,
- create new use cases for Hadoop,
- enable multi-user applications, and
- comply with regulations (e.g., SOX, PCI, HIPAA, EAL3).

RSA NetWitness and HP ArcSight ESM now serve as weapons against advanced persistent threats that cannot be stopped by traditional defenses such as firewalls or antivirus systems.
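Returning to the SSL/TLS item in the checklist above: Hadoop wires up TLS for its daemons through its own configuration files, so the exact setup varies by distribution. Purely as a language-neutral illustration of the mutual-authentication idea, here is a minimal sketch using Python's standard ssl module; the host name, port and certificate paths are placeholders, not real cluster settings.

```python
import socket
import ssl

# Client side of a mutually authenticated TLS connection (sketch only).
# Paths and host names below are placeholders, not real cluster settings.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="/etc/security/ca-bundle.pem")
context.load_cert_chain(certfile="/etc/security/node.pem",
                        keyfile="/etc/security/node.key")
context.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid certificate

with socket.create_connection(("datanode01.example.com", 50475)) as sock:
    with context.wrap_socket(sock, server_hostname="datanode01.example.com") as tls:
        # Every byte on the wire is now encrypted and both ends are authenticated.
        print("negotiated protocol:", tls.version())
```

The point of the sketch is the checklist's warning: protection only holds if every channel between nodes and applications is wrapped this way, not just a subset of them.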

(https://thebigdatainstitute.files.wordpress.com/2013/09/big-data-securitycloudera.png) Figure 1: Cloudera Sentry Architecture

Hortonworks partner Voltage Security offers data protection solutions that protect data from any source in any format, before it enters Hadoop. Using Voltage Format-Preserving Encryption™ (FPE), structured, semi-structured or unstructured data can be encrypted at source and protected throughout the data life cycle, wherever it resides and however it is used. Protection travels with the data, eliminating security gaps in transmission into and out of Hadoop and other environments. FPE enables data de-identification (http://en.wikipedia.org/wiki/Deidentification) to provide access to sensitive data while maintaining privacy and confidentiality for data fields, such as social security numbers, that need a degree of privacy while remaining in a format useful for analytics.
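Voltage's FPE is a commercial product; as a generic illustration of the simpler tokenization approach from the checklist above (not FPE itself), the sketch below replaces sensitive values with random tokens and keeps the mapping in a separate vault. Field names and values are made up.

```python
import secrets

# Token vault: maps real values to random tokens. In practice this lives in a
# separately secured store, never alongside the data handed to analysts.
_vault = {}

def tokenize(value: str) -> str:
    """Return a stable random token for a sensitive value (e.g. an SSN)."""
    if value not in _vault:
        _vault[value] = secrets.token_hex(8)   # random, carries no information
    return _vault[value]

def detokenize(token: str) -> str:
    """Recover the original value; only privileged services should call this."""
    reverse = {tok: val for val, tok in _vault.items()}
    return reverse[token]

record = {"name": "Jane Doe", "ssn": "123-45-6789", "purchase": 42.50}
safe_record = {**record,
               "name": tokenize(record["name"]),
               "ssn": tokenize(record["ssn"])}
print(safe_record)   # tokens are consistent, so joins and counts still work
```

Because the same input always maps to the same token, joins and aggregations on the tokenized column still work, which is the property the post highlights for fields like social security numbers.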

(https://thebigdatainstitute.files.wordpress.com/2013/09/big-data-securityhortonworks.png) Figure 2: Hortonworks Security Architecture

IBM's BigInsights provides built-in security features that can be configured during the installation process. Authorization is supported in BigInsights by defining roles. InfoSphere BigInsights provides four options for authentication: no authentication, flat-file authentication, LDAP authentication and PAM authentication. In addition, the BigInsights installer provides the option to configure HTTPS to potentially provide more security when a user connects to the BigInsights web console.

(https://thebigdatainstitute.files.wordpress.com/2013/09/big-data-security-ibm-biginsights.png) Figure 3: IBM BigInsights Security Architecture

Intel, one of the latest entrants to the distribution-vendor category, came out with a wish list for Hadoop security under the name Project Rhino.

Finally, although today the focus is on technology and technical security issues around big data, and they are important, big data security is not just a technical challenge. Many other domains are also involved, such as legal, privacy, operations, and staffing. Not all big data is created equal, and depending on the data security requirements and risk appetite/profile of an organization, different security controls for big data are required.
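To close the loop on the Knox Gateway item in the checklist: Knox fronts Hadoop's REST APIs with a single authenticated HTTPS endpoint, so clients never talk to the cluster directly. Assuming a stock Knox topology named "default" that exposes WebHDFS (the host, credentials and HDFS path here are placeholders), a client call might look like this sketch using the Python requests library.

```python
import requests

# All traffic goes through the Knox gateway over HTTPS; Knox authenticates the
# user (e.g. against LDAP) and proxies the call to WebHDFS inside the cluster.
KNOX_BASE = "https://knox.example.com:8443/gateway/default"  # hypothetical host

resp = requests.get(
    f"{KNOX_BASE}/webhdfs/v1/data/weblogs",
    params={"op": "LISTSTATUS"},           # standard WebHDFS operation
    auth=("analyst", "secret"),            # placeholder credentials
    verify="/etc/security/knox-ca.pem",    # trust the gateway's certificate
)
resp.raise_for_status()
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["type"])
```

The design point is the one Forrester makes above: users and jobs authenticate at one controlled gateway close to the data, rather than relying on the network perimeter.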


Posted in Big Data, Security. Tagged Apache Hadoop, BigData, Cloudera, Data security, Google, Hadoop, IBM, MapReduce

Hadoop meets SQL
JULY 24, 2013 BY VENKUMAR

Big data technologies like Hadoop are providing enterprises a cost-effective way to store and analyze data, and enterprises are looking at using Hadoop to augment their traditional data warehouse. Compared to traditional data warehouse solutions, Hadoop can scale using commodity hardware and can be used to store both structured and unstructured data. Traditional data warehouses based on relational database technologies have been around for a long time and have mature sets of tools for querying and analysis. Business users use SQL as the query language to perform ad-hoc queries against these warehouses, and reporting tools like Cognos, Business Objects and MicroStrategy rely heavily on SQL.

The real value of Hadoop is realized when users can access and perform ad-hoc queries directly on Hadoop using tools that support SQL. Today, querying a Hadoop data store means knowing Map-Reduce programming or writing Pig and Hive scripts. Hadoop at its core consists of HDFS storage and the Map-Reduce engine. Map-Reduce programs, typically written in Java, are difficult to write and are not at all easy to use from a business user's perspective. The ability to use SQL to analyze data stored in Hadoop will help Hadoop go mainstream, and will also let business users reuse their existing SQL knowledge to analyze the data stored in Hadoop.

Various initiatives are underway, both in open source and at various companies, to solve the problem of enabling SQL on Hadoop. The following are some of the most common ones.

Hive: Facebook developed Hive as a way of bringing a SQL-like interface to querying Hadoop. A Hive warehouse needs to be created first, which provides a schema on top of the data stored in HDFS. Using HiveQL (Hive Query Language), we can use SQL-like syntax to query data stored in HDFS (a short query sketch appears after this overview). The SQL support in Hive is very limited at this point: it does not offer full ANSI SQL, it supports only ANSI join syntax and only equi-joins, and it does not support correlated subqueries, which are commonly used in most traditional warehouse queries. Hive is also not designed for low-latency queries; it launches map-reduce jobs in the background, so even for small Hive tables a query will take several minutes. It is really designed to run queries against massive amounts of data, where the query returns results in a few hours. Hive is not suited for real-time querying and analysis.

Impala: Cloudera's Impala provides a fast, real-time query capability for data stored in Hadoop using SQL. Impala is based on Google's Dremel paper. Currently Impala supports a subset of ANSI-92 SQL. There are still some issues with join table sizes in Impala: if the join results do not fit into the amount of memory available in the Hadoop cluster, the join will fail, so your queries are limited by the amount of memory you have. Impala currently supports hash joins. Cloudera does provide some recommendations on memory size for data nodes based on their beta customer experience. Impala also provides connections using JDBC and ODBC as well as a command-line tool.
Cloudera provides some interesting performance data for Impala on their site (http://blog.cloudera.com/blog/2013/05/cloudera-impala-1-0-its-here-its-real-its-already-the-standard-for-sql-on-hadoop/).

BigSql: BigSql is an enterprise-class SQL query engine from IBM and is available on IBM's Hadoop distribution, BigInsights version 2.1. BigSql provides full ANSI SQL support as well as support for correlated subqueries, and there is no memory size limitation for join tables. BigSql runs on top of a Hive warehouse or HBase. BigSql also provides an option for using adaptive Map-Reduce to improve the performance of Map-Reduce jobs; adaptive Map-Reduce comes from IBM's experience in high-performance computing clusters. BigSql supports standard as well as ANSI join, cross join and non-equi join syntax, and it provides much wider support for data types compared to Hive. Any BI and visualization tool that uses JDBC/ODBC drivers can use BigSql to connect to a BigInsights Hadoop cluster. BigSql also comes with a command-line query tool called JSqsh, which is similar to Oracle SQL*Plus or the MySQL command-line tool. IBM has announced a free download of the BigInsights QuickStart VM that comes bundled with BigSql (http://www-01.ibm.com/software/data/infosphere/biginsights/quick-start/).

Google BigQuery: Google launched BigQuery, based on their Dremel tool, to enable real-time querying of data using SQL queries. BigQuery provides both synchronous and asynchronous running of queries. However, BigQuery is available only if data is loaded into Google's cloud storage. Google also provides a set of RESTful APIs to access the queries. It supports joins, but there is a table size limitation for one of the joining tables. BigQuery is a powerful querying tool if you are using Google Cloud to store your data.

HAWQ: Greenplum announced their HAWQ query engine, which runs on top of Pivotal HD (Greenplum's Hadoop distribution) and can execute SQL queries against Hadoop. With HAWQ, users can query data stored in HBase, Hive or HDFS. HAWQ uses the same query optimizer that is used by Greenplum DB. HAWQ uses dynamic pipelining, a combination of several Greenplum technologies built for the parallel relational database; dynamic pipelining is a job scheduler for queries (distinct from the job tracker and name node used by Hadoop). This would be a good option for customers who already use Greenplum DB as their warehouse: they can run the same queries against the Greenplum warehouse as well as the Pivotal HD Hadoop cluster. HAWQ is a proprietary solution from Greenplum.

What is coming next? Other open source initiatives are under way to address the issue of providing real-time query features on Hadoop.

Apache Drill: This is a new open source initiative based on Google's Dremel paper. The aim is to provide near real-time query capabilities on Hadoop, similar to Google BigQuery.

Stinger: The Stinger Initiative is an Apache project managed by Hortonworks and Microsoft. The aim is to leverage Hadoop 2.0 and YARN to improve the performance and SQL capabilities of Hive. With Stinger, Hive queries are expected to be 100x faster than current queries, and Hive will also support subqueries and better alignment with ANSI SQL.
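As a concrete illustration of the Hive route described above, the sketch below uses the PyHive client (one of several JDBC/ODBC-style options; its availability on your cluster is an assumption) to run a HiveQL equi-join from Python. The host, database and table names are made up, and on a real cluster the statement is compiled into map-reduce jobs, so even a small result can take minutes to return.

```python
from pyhive import hive

# Connect to HiveServer2; host, user and database below are placeholders.
conn = hive.Connection(host="hadoop-edge.example.com", port=10000,
                       username="analyst", database="sales")
cursor = conn.cursor()

# HiveQL looks like SQL, but only equi-joins are supported at this point and
# correlated subqueries are not; the statement below stays within those limits.
cursor.execute("""
    SELECT c.region, SUM(o.amount) AS total_sales
    FROM orders o
    JOIN customers c ON (o.customer_id = c.customer_id)
    GROUP BY c.region
""")

for region, total in cursor.fetchall():
    print(region, total)

cursor.close()
conn.close()
```

The same DB-API pattern is what lets BI tools reuse existing SQL skills against Hadoop, which is the point this post is making.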
Key things to consider when looking for SQL capabilities on Hadoop:
- Deploying a Hadoop distribution that is open source or has open source support will be important. Hadoop has been an open source initiative and there are many contributors adding and building capabilities into the platform. Companies like IBM and EMC Greenplum are now adding enterprise-class features and enabling integration with other enterprise data stores.
- These query tools are designed to be used by a small group of users. All of the SQL-type query tools run map-reduce jobs in the background, and map-reduce inherently scales up very well but does not scale down. Enable a limited set of business users to run SQL queries against Hadoop using these tools.
- Enterprise-class SQL features are available through IBM BigSql and Greenplum HAWQ. These two query engines are both relatively new, and they are proprietary solutions tied to their own Hadoop distributions. Both aim to support full ANSI SQL and to enable enterprises to port and reuse existing queries. If you have an existing warehouse running on DB2, Teradata, etc. that you want to augment, and you want to reuse queries quickly, BigSql running on a BigInsights Hadoop cluster would be a logical choice.
- All of these query engines have limitations and are not as robust and mature as the standard SQL query tools available on traditional warehouses. Expecting business users to pick up these tools quickly is still a stretch, and they will need help from technical experts who understand Hadoop.

Posted in Uncategorized. Tagged Apache Hadoop, BigQuery, BigSql, Cloudera, Cognos, Dremel, Drill, Greenplum, HAWQ, Hive, HortonWorks, IBM, Impala, MapReduce, SQL, Warehouse

Will Hadoop replace or augment your Enterprise Data Warehouse?
MAY 7, 2013 BY VENKUMAR

Will Hadoop replace or augment your Enterprise Data Warehouse (http://en.wikipedia.org/wiki/Data_warehouse)? There is a lot of buzz about Big Data and Hadoop these days and their potential for replacing the Enterprise Data Warehouse (EDW). The promise of Hadoop has been the ability to store and process massive amounts of data using commodity hardware that scales extremely well and at very low cost. Hadoop is good for batch-oriented work and not really good at OLTP (http://en.wikipedia.org/wiki/Online_transaction_processing) workloads. The logical question, then, is whether enterprises still need the EDW. Why not simply get rid of the expensive warehouse and deploy a Hadoop cluster with HBase and Hive? After all, you never hear about Google or Facebook using data warehouse systems from Oracle or Teradata (http://www.google.com/finance?q=NYSE:TDC) or Greenplum.

Before we get into that, a little overview of how Hadoop stores data. Hadoop comprises two components: the Hadoop Distributed File System (http://hadoop.apache.org/) (HDFS) and the Map-Reduce (http://en.wikipedia.org/wiki/MapReduce) engine. HDFS enables you to store all kinds of data (structured as well as unstructured) on commodity servers. Data is divided into blocks and distributed across data nodes. The data itself is processed using Map-Reduce programs, which are typically written in Java. NoSQL (http://en.wikipedia.org/wiki/NoSQL) databases like HBase (http://hbase.apache.org/) and Hive provide a layer on top of HDFS storage that enables end users to use the SQL language (http://www.iso.org/iso/catalogue_detail.htm?csnumber=45498). In addition, BI reporting, visualization and analytical tools like Cognos, Business Objects, Tableau, SPSS, R etc. can now connect to Hadoop/Hive.

A traditional EDW stores structured data from OLTP and back-office ERP systems (http://en.wikipedia.org/wiki/Enterprise_resource_planning) in a relational database using expensive storage arrays with RAID disks. Examples of this structured data may be your customer orders, data from your financial systems, sales orders, invoices etc. Reporting tools like Cognos, Business Objects, SPSS etc. are used to run reports and perform analysis on the data.

So are we ready to dump the EDW and move to Hadoop for all our warehouse needs? There are some things the EDW does very well that Hadoop is still not very good at:
- Hadoop and HBase/Hive are still very IT focused. They need people with a lot of expertise in writing Map-Reduce programs in Java, Pig etc. Business users who actually need the data are not in a position to run ad-hoc queries and analytics easily without involving IT. Hadoop is still maturing and needs a lot of IT hand-holding to make it work.
- The EDW is well suited for many common business processes, such as monitoring sales by geography, product or channel, extracting insight from customer surveys, and cost and profitability analyses. The data is loaded into pre-defined schemas/data marts and business users can use familiar tools to perform analysis and run ad-hoc SQL queries. Most EDWs come with pre-built adaptors for various ERP systems and databases.
- Companies have built complex ETL, data marts, analytics, reports etc. on top of these warehouses. It would be extremely expensive, time consuming and risky to recode all of that into a new Hadoop environment.
- People with Hadoop/Map-Reduce expertise are not readily available and are in short supply.
Augment your EDW with Hadoop to add new capabilities and insight. For the next couple of years, as the Hadoop/Big Data landscape evolves, augment and enhance your EDW with a Hadoop/Big Data cluster as follows:
- Continue to store summary structured data from your OLTP and back-office systems in the EDW.
- Store unstructured data that does not fit nicely into "tables" in Hadoop. This means all the communication with your customers, from phone logs, customer feedback, GPS locations, photos, tweets, emails and text messages, can be stored in Hadoop, and far more cost-effectively.
- Correlate data in your EDW with the data in your Hadoop cluster to get better insight about your customers, products, equipment etc. You can now use this data for analytics that are computation intensive, like clustering and targeting.
- Run ad-hoc analytics and models against your data in Hadoop while you are still transforming and loading your EDW.
- Do not build Hadoop capabilities within your enterprise in a silo. Big Data/Hadoop technologies should work in tandem with, and extend the value of, your existing data warehouse and analytics technologies.

Data warehouse vendors are adding Hadoop and Map-Reduce capabilities to their offerings. When adding Hadoop capabilities, I would recommend going with a vendor that supports and enhances the open source Hadoop distribution. In a few years, as newer and better analytical and reporting capabilities develop on top of Hadoop, it may eventually be a good platform for all your warehousing needs. Solutions like IBM (http://www.google.com/finance?q=LON:IBM)'s BigSql and Cloudera's Impala will make it easier for business users to move more of their warehousing needs to Hadoop by improving query performance and SQL capabilities.

Posted in Big Data, Hadoop. Tagged BigData, BigSql, BusinessObjects, Cloudera, Cognos, EDW, Hadoop, IBM

Join us this Thursday, May 2nd for our next Big Data Developer Meetup!
APRIL 29, 2013 BY SUSHIL (NICK) PRAMANICK

This meetup will focus on Real-Time Location-Based Analytics and will include a presentation, demo, hands-on session (bring your laptop), and pizza! Doors open at 5:30pm for registration and networking, and the program starts at 6:30pm.

With smart phones and fully instrumented cars, the amount of data we can collect from moving objects is growing at staggering rates. In a few years, the automotive industry will be the largest producer of data after utilities, bigger than health care. And with this Big Data volume come Big Data challenges. The opportunity for applying all this data in real time to problems in transportation, congestion management, emergency response, micro-weather prediction, supply chain management, and so on is tremendous. But this requires a real-time analytics platform that can integrate GPS locations, telematics messages and sensor readings, video, and other kinds of information, and scale up to any level. Join us to learn how IBM InfoSphere Streams is addressing this Big Data challenge.

The event will kick off with a presentation and will be followed by a live demo. Bring your laptop, because you'll then have an opportunity to get hands on with InfoSphere Streams and apply this exciting new technology yourself! Pizza and beverages will be provided!

Agenda:
5:30pm: Registration & Networking
6:30pm: Presentation
7:15pm: Pizza Break
7:30pm: Demo
8:15pm: Hands On with Real-Time Analytics

To visit Big Data Developers, go here: http://www.meetup.com/BigDataDevelopers/

Posted in Uncategorized. Tagged Big Data, IBM
