Thursday, November 07, 2013

Steps to Configure Google Talk Account in Pidgin



If you’ve ever tried to set up your Google Talk account for your own domain in the Pidgin multi-protocol instant messenger client, you’ll know the settings aren’t obvious. Here’s how to do it.
Open up Pidgin and choose Accounts –> Manage Accounts.


Then click the Add button.
Then you’ll want to enter your username as the part of your email address before the @ symbol, and the Domain as the part after it. For instance, mine is abc@gmail.com, so I’m using abc as the username and gmail.com as the Domain.
Then flip over to the Advanced tab, set Connection security to "Require encryption", and enter talk.google.com as the Connect server. You can pretty much leave the other settings alone if you want.
Then flip to the Proxy tab and enter your proxy details (only if your network requires a proxy), for example:
Proxy Type: HTTP
Host : www-proxy.xyz.com
Port : 80 

Click the Save button and restart Pidgin. All your Google Talk contacts should now be visible in Pidgin, and you will be able to initiate conversations.

Wednesday, October 23, 2013

Myntra's Mistakes

“Myntra.com is India’s largest online fashion and lifestyle store for men, women, and kids. Shop online from the latest collections of apparel, footwear and accessories, featuring the best brands. We are committed to delivering the best online shopping experience imaginable.”

I agree that Myntra is one of the best Indian online shopping websites, offering products at heavily discounted prices. Many people wonder how they can deliver products so much cheaper than others. First, Myntra does not pay the rent and staff salaries that brick-and-mortar retailers shell out big bucks for. And second, it seems, by delivering old products, as much as 3 years old, at current prices.

Today I received 2 parcels from Myntra, and both were defective.

In one parcel I was charged for a quantity of 2 but received only 1 item, and in the other parcel the products were as old as 3 years but sold at the current price.

When I spoke to the customer care representative, he told me that all products undergo stringent quality and quantity tests.

Following are some images of the received products: one clearly shows a manufacturing date (MFD) of 2011, and another shows that the quantity charged was 2 while the parcel contained only 1 item.




Be careful while purchasing products online.

Tuesday, October 22, 2013

No more 'unable to find valid certification path to requested target'


Some of you may be familiar with the (not very user friendly) exception message
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
when trying to open an SSL connection to a host using JSSE. What this usually means is that the server is using a test certificate (possibly generated using keytool) rather than a certificate from a well known commercial Certification Authority such as Verisign or GoDaddy. Web browsers display warning dialogs in this case, but since JSSE cannot assume an interactive user is present it just throws an exception by default.
Certificate validation is a very important part of SSL security, but I am not writing this entry to explain the details. If you are interested, you can start by reading the Wikipedia blurb. I am writing this entry to show a simple way to talk to that host with the test certificate, if you really want to.
Basically, you want to add the server's certificate to the KeyStore with your trusted certificates. There are any number of ways to achieve that, but a simple solution is to compile and run the attached program as java InstallCert hostname, for example:
% java InstallCert ecc.fedora.redhat.com
Loading KeyStore
 /usr/jdk/instances/jdk1.5.0/jre/lib/security/cacerts...
Opening connection to ecc.fedora.redhat.com:443...
Starting SSL handshake...

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException:
  PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException:
  unable to find valid certification path to requested target
  at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:150)
  at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1476)
  at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:174)
  at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:168)
  at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:846)
  at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:106)
  at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:495)
  at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:433)
  at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:815)
  at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1025)
  at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1038)
  at InstallCert.main(InstallCert.java:63)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed:
  sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
  certification path to requested target
  at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:221)
  at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:145)
  at sun.security.validator.Validator.validate(Validator.java:203)
  at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:172)
  at InstallCert$SavingTrustManager.checkServerTrusted(InstallCert.java:158)
  at com.sun.net.ssl.internal.ssl.JsseX509TrustManager.checkServerTrusted(SSLContextImpl.java:320)
  at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:839)
  ... 7 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
  certification path to requested target
  at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:236)
  at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:194)
  at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:216)
  ... 13 more

Server sent 2 certificate(s):

 1 Subject CN=ecc.fedora.redhat.com, O=example.com, C=US
   Issuer  CN=Certificate Shack, O=example.com, C=US
   sha1    2e 7f 76 9b 52 91 09 2e 5d 8f 6b 61 39 2d 5e 06 e4 d8 e9 c7
   md5     dd d1 a8 03 d7 6c 4b 11 a7 3d 74 28 89 d0 67 54

 2 Subject CN=Certificate Shack, O=example.com, C=US
   Issuer  CN=Certificate Shack, O=example.com, C=US
   sha1    fb 58 a7 03 c4 4e 3b 0e e3 2c 40 2f 87 64 13 4d df e1 a1 a6
   md5     72 a0 95 43 7e 41 88 18 ae 2f 6d 98 01 2c 89 68

Enter certificate to add to trusted keystore or 'q' to quit: [1]
What happened was that the program opened a connection to the specified host and started an SSL handshake. It printed the exception stack trace of the error that occurred and showed you the certificates used by the server. It then prompted you for the certificate you want to add to your trusted KeyStore. You should only do this if you are sure that this is the certificate of the trusted host you want to connect to. You may want to check the MD5 and SHA1 certificate fingerprints against fingerprints generated on the server (e.g. using keytool) to make sure it is the correct certificate.
If you've changed your mind, enter 'q'. If you really want to add the certificate, enter '1'. (You could also add a CA certificate by entering a different number, but you usually don't want to do that.) Once you have made your choice, the program will print the following:
[
[
  Version: V3
  Subject: CN=ecc.fedora.redhat.com, O=example.com, C=US
  Signature Algorithm: MD5withRSA, OID = 1.2.840.113549.1.1.4

  Key:  SunPKCS11-Solaris RSA public key, 1024 bits
        (id 5158256, session object)
  modulus: 1402933022884660852748661816869706021655226675890
635441166580364941191074987345500771612454338502131694873337
233737712894815966313948609351561047977102880577818156814678
041303637255354084762814638611185951230474669455913908815827
173696651397340074281578017567044868711049821409365743953199
69584127568303024757
  public exponent: 65537
  Validity: [From: Wed Jan 18 13:16:12 PST 2006,
               To: Wed Apr 18 14:16:12 PDT 2007]
  Issuer: CN=Certificate Shack, O=example.com, C=US
  SerialNumber: [    0f]

Certificate Extensions: 2
[1]: ObjectId: 2.16.840.1.113730.1.1 Criticality=false
NetscapeCertType [
   SSL server
]

[2]: ObjectId: 2.5.29.15 Criticality=false
KeyUsage [
  Key_Encipherment
]

]
  Algorithm: [MD5withRSA]
  Signature:
0000: 6D F4 2A 63 76 2A 05 70   A2 21 0E 1E 4A 31 BE 6B  m.*cv*.p.!..J1.k
0010: 15 64 D8 BB 35 36 82 B0   0D 2A 96 FA 7A 9F A1 59  .d..56...*..z..Y
0020: CA 90 C3 28 C5 A6 9B 59   05 3B EB B2 8D C9 5E 38  ...(...Y.;....^8
0030: 62 ED 1A D7 93 DF 2A A5   D6 54 94 23 15 A2 0C E5  b.....*..T.#....
0040: 13 40 2C 3E 59 E4 2A EB   51 AC 9E 28 44 23 87 B1  .@,>Y.*.Q..(D#..
0050: 34 0B AC F3 E0 39 CA B8   35 B4 78 07 BF 28 4C C4  4....9..5.x..(L.
0060: 9A 2B A3 E9 04 26 78 19   F0 62 EA 0A B5 BB DC 0B  .+...&x..b......
0070: 90 59 E7 77 90 F8 BC 8A   1B 74 4B 4D C1 F8 3B 6C  .Y.w.....tKM..;l

]

Added certificate to keystore 'jssecacerts' using alias
'ecc.fedora.redhat.com-1'
It displayed the complete certificate and then added it to a Java KeyStore 'jssecacerts' in the current directory. To use it in your program, either configure JSSE to use it as its trust store (as explained in the documentation) or copy it into your $JAVA_HOME/jre/lib/security directory. If you want all Java applications to recognize the certificate as trusted and not just JSSE, you could also overwrite the cacerts file in that directory.
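If you keep jssecacerts as a separate file, you can also point JSSE at it from code via the standard JSSE system properties. A minimal sketch, assuming the jssecacerts file sits in the working directory and uses the default 'changeit' password:

```java
public class TrustStoreConfig {
    public static void main(String[] args) {
        // Standard JSSE system properties; these must be set before
        // the first SSL connection is made in the JVM.
        System.setProperty("javax.net.ssl.trustStore", "jssecacerts");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```

This is equivalent to passing -Djavax.net.ssl.trustStore=jssecacerts on the java command line.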
After all that, JSSE will be able to complete a handshake with the host, which you can verify by running the program again:
% java InstallCert ecc.fedora.redhat.com
Loading KeyStore jssecacerts...
Opening connection to ecc.fedora.redhat.com:443...
Starting SSL handshake...

No errors, certificate is already trusted

Server sent 2 certificate(s):

 1 Subject CN=ecc.fedora.redhat.com, O=example.com, C=US
   Issuer  CN=Certificate Shack, O=example.com, C=US
   sha1    2e 7f 76 9b 52 91 09 2e 5d 8f 6b 61 39 2d 5e 06 e4 d8 e9 c7
   md5     dd d1 a8 03 d7 6c 4b 11 a7 3d 74 28 89 d0 67 54

 2 Subject CN=Certificate Shack, O=example.com, C=US
   Issuer  CN=Certificate Shack, O=example.com, C=US
   sha1    fb 58 a7 03 c4 4e 3b 0e e3 2c 40 2f 87 64 13 4d df e1 a1 a6
   md5     72 a0 95 43 7e 41 88 18 ae 2f 6d 98 01 2c 89 68

Enter certificate to add to trusted keystore or
'q' to quit: [1]
q
KeyStore not changed
I hope that helps. For more information about the InstallCert program, have a look at the source code (2011-10-11 edit: the original link now returns a 404, so I have posted the source below). I am sure you can figure out how it works.


The source to InstallCert.java

http://blogs.oracle.com/gc/entry/unable_to_find_valid_certification


/*
 * Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *
 *   - Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *
 *   - Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *
 *   - Neither the name of Sun Microsystems nor the names of its
 *     contributors may be used to endorse or promote products derived
 *     from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
 * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
 * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
/**
 * Originally from:
 * http://blogs.sun.com/andreas/resource/InstallCert.java
 * Use:
 * java InstallCert hostname
 * Example:
 *% java InstallCert ecc.fedora.redhat.com
 */

import javax.net.ssl.*;
import java.io.*;
import java.security.KeyStore;
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

/**
 * Class used to add the server's certificate to the KeyStore
 * with your trusted certificates.
 */
public class InstallCert {

    public static void main(String[] args) throws Exception {
        String host;
        int port;
        char[] passphrase;
        if ((args.length == 1) || (args.length == 2)) {
            String[] c = args[0].split(":");
            host = c[0];
            port = (c.length == 1) ? 443 : Integer.parseInt(c[1]);
            String p = (args.length == 1) ? "changeit" : args[1];
            passphrase = p.toCharArray();
        } else {
            System.out.println("Usage: java InstallCert <host>[:port] [passphrase]");
            return;
        }

        File file = new File("jssecacerts");
        if (file.isFile() == false) {
            char SEP = File.separatorChar;
            File dir = new File(System.getProperty("java.home") + SEP
                    + "lib" + SEP + "security");
            file = new File(dir, "jssecacerts");
            if (file.isFile() == false) {
                file = new File(dir, "cacerts");
            }
        }
        System.out.println("Loading KeyStore " + file + "...");
        InputStream in = new FileInputStream(file);
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(in, passphrase);
        in.close();

        SSLContext context = SSLContext.getInstance("TLS");
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        X509TrustManager defaultTrustManager = (X509TrustManager) tmf.getTrustManagers()[0];
        SavingTrustManager tm = new SavingTrustManager(defaultTrustManager);
        context.init(null, new TrustManager[]{tm}, null);
        SSLSocketFactory factory = context.getSocketFactory();

        System.out.println("Opening connection to " + host + ":" + port + "...");
        SSLSocket socket = (SSLSocket) factory.createSocket(host, port);
        socket.setSoTimeout(10000);
        try {
            System.out.println("Starting SSL handshake...");
            socket.startHandshake();
            socket.close();
            System.out.println();
            System.out.println("No errors, certificate is already trusted");
        } catch (SSLException e) {
            System.out.println();
            e.printStackTrace(System.out);
        }

        X509Certificate[] chain = tm.chain;
        if (chain == null) {
            System.out.println("Could not obtain server certificate chain");
            return;
        }

        BufferedReader reader =
                new BufferedReader(new InputStreamReader(System.in));

        System.out.println();
        System.out.println("Server sent " + chain.length + " certificate(s):");
        System.out.println();
        MessageDigest sha1 = MessageDigest.getInstance("SHA1");
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (int i = 0; i < chain.length; i++) {
            X509Certificate cert = chain[i];
            System.out.println
                    (" " + (i + 1) + " Subject " + cert.getSubjectDN());
            System.out.println("   Issuer  " + cert.getIssuerDN());
            sha1.update(cert.getEncoded());
            System.out.println("   sha1    " + toHexString(sha1.digest()));
            md5.update(cert.getEncoded());
            System.out.println("   md5     " + toHexString(md5.digest()));
            System.out.println();
        }

        System.out.println("Enter certificate to add to trusted keystore or 'q' to quit: [1]");
        String line = reader.readLine().trim();
        int k;
        try {
            k = (line.length() == 0) ? 0 : Integer.parseInt(line) - 1;
        } catch (NumberFormatException e) {
            System.out.println("KeyStore not changed");
            return;
        }

        X509Certificate cert = chain[k];
        String alias = host + "-" + (k + 1);
        ks.setCertificateEntry(alias, cert);

        OutputStream out = new FileOutputStream("jssecacerts");
        ks.store(out, passphrase);
        out.close();

        System.out.println();
        System.out.println(cert);
        System.out.println();
        System.out.println
                ("Added certificate to keystore 'jssecacerts' using alias '"
                        + alias + "'");
    }

    private static final char[] HEXDIGITS = "0123456789abcdef".toCharArray();

    private static String toHexString(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 3);
        for (int b : bytes) {
            b &= 0xff;
            sb.append(HEXDIGITS[b >> 4]);
            sb.append(HEXDIGITS[b & 15]);
            sb.append(' ');
        }
        return sb.toString();
    }

    private static class SavingTrustManager implements X509TrustManager {

        private final X509TrustManager tm;
        private X509Certificate[] chain;

        SavingTrustManager(X509TrustManager tm) {
            this.tm = tm;
        }

        public X509Certificate[] getAcceptedIssuers() {
            throw new UnsupportedOperationException();
        }

        public void checkClientTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
            throw new UnsupportedOperationException();
        }

        public void checkServerTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
            this.chain = chain;
            tm.checkServerTrusted(chain, authType);
        }
    }
}

Wednesday, September 18, 2013

How to Prepare Your iDevice for iOS 7

Apple’s Craig Federighi introduces new features of iOS 7 at WWDC in June. Photo: Alex Washburn/WIRED
Apple’s iOS 7, the biggest change to iOS since its debut, launches Wednesday. While you may be itching to get your fingers on the new operating system, you’ll want to take some time to make sure your device is 100 percent ready for this major software update.
First, make sure all the media and memories you’ve stored on your iDevice are backed up. Of course, you’ll also need to verify your device is able to upgrade to iOS 7 in the first place. Here’s what you need to do before you tap that download button.

Check Compatibility

Not every iOS device is capable of being upgraded to iOS 7*. On the iPhone front, only Retina display handsets can support the new OS. That’s the iPhone 4, 4s, and 5, and obviously the iPhone 5c and iPhone 5s starting Friday. As for iPads, anything second gen or higher will be able to support iOS 7. That includes the iPad 2, 3, 4, and iPad mini. The fifth generation iPod touch is also iOS 7 upgradeable.

Back It Up

Next, you’ll want to back up all your photos and videos, if you don’t already do this regularly. Plug your iDevice into your computer; then, if you run OS X, use Image Capture or Preview to import all of your memories (or selectively import only the media you really want to keep). On Windows, use Windows Explorer to view your photos and copy them to your machine.

House Cleaning

Before you upgrade, why not do a bit of app house cleaning? Delete apps you never use, and update ones that need updating. Do you really still need those 10 flashlight apps and the Army of Darkness soundboard? Probably not. This is your chance for a fresh start with a brand new OS.
If you’ve done some major reorganizing, you’ll want to sync and backup to iTunes and iCloud (again, if you don’t regularly do this already). Your transition from iOS 6 to iOS 7 will likely be smooth and problem-free, but if something does go awry, you’ll be kicking yourself if your device isn’t fully backed up.
After this, everything should be primped and primed for your much-anticipated download of iOS 7.
*If you have an older device like an iPhone 4 or iPad 2, it may be worth your while to wait before downloading iOS 7. Sometimes older devices can have performance issues with the latest version of iOS because it’s almost always optimized for Apple’s latest hardware. I found this to be the case with iOS 5 and the iPhone 3GS, but had no problems with iOS 6 on the iPhone 4. If you wait a week before updating, you can avoid any negatives associated with updating.

Thursday, July 25, 2013

Cloudera Sentry Overview | Cloudera Sentry Basics


Cloudera has introduced Sentry, a new Apache licensed open source project that provides what it calls the first "fine-grained authorization framework" for Hadoop.

Sentry is an independent security module that integrates with open source SQL query engines Apache Hive and Cloudera Impala, providing advanced authorization controls to enable multi-user applications and cross-functional processes for enterprise datasets.
Cloudera says this level of granular control is essential to meet enterprise Role Based Access Control requirements of highly regulated industries, such as healthcare, financial services and government.
Sentry alleviates the security concerns that have prevented some organizations from opening Hadoop data systems to a more diverse set of users, extending the power of Hadoop and making it suitable for new industries, organizations and enterprise use cases.
The company says it plans to submit the Sentry security module to the Apache Incubator at the Apache Software Foundation later this year.
For data safeguards to be deemed compliant with standard data regulatory requirements, there are four functional areas of information security that must be achieved, including perimeter, data, access, and visibility.
• Perimeter: guarding access to the cluster itself through network security, firewalls and, ultimately, authentication to confirm user identities.
• Data: protecting the data in the cluster from unauthorized visibility through masking and encryption, both at rest and in transit.
• Access: defining what authenticated users and applications can do with the data in the cluster through file system ACLs and fine-grained authorization.
• Visibility: reporting on the origins of data and on data usage through centralized auditing and lineage capabilities.
Recent developments by the Hadoop community, as well as integration with solution providers, have addressed the perimeter and data elements through authentication, encryption and masking.
The release of Cloudera Navigator earlier this year brought Visibility to Hadoop with centralized auditing for files, records and metadata.
As a fine-grained authorization solution for Apache Hadoop, Sentry gives database administrators holistic, granular user access control that addresses the limitations of previous solutions.
Features of the Sentry security module include:
• Secure authorization, which enables administrators to prevent authenticated users from accessing data and/or having privileges on data.
• Fine-grained authorization, which gives Hadoop administrators comprehensive and precise control to specify user access rights to subsets of data within a database.
• Role-based authorization, which simplifies permissions management by allowing administrators to create and assign templatized privileges based on functional roles.
• Multi-tenant administration, which empowers central administrators to deputize individual administrators to manage security settings for each separate database or schema.
Cloudera has worked closely with the open source community to expand Hadoop’s security capabilities, including the improved security features in a new HiveServer2 release, which delivers concurrency and Kerberos-based authentication for Hadoop.

Prerequisites
Sentry depends on an underlying authentication framework to reliably identify the requesting user. It requires:
• CDH4.3.0 or later.
• HiveServer2 with strong authentication (Kerberos or LDAP).
• A secure Hadoop cluster.
This is to prevent a user bypassing the authorization and gaining direct access to the underlying data.
In addition, make sure that the following are true:
• The Hive warehouse directory (/user/hive/warehouse, or any path you specify as hive.metastore.warehouse.dir in your hive-site.xml) must be owned by the Hive user.
• Permissions on the warehouse directory must be set as follows:
  – 777 on the directory itself (for example, /user/hive/warehouse)
  – 750 on all subdirectories (for example, /user/hive/warehouse/mysubdir)
For example:
$ sudo -u hdfs hdfs dfs -chmod 777 /user/hive/warehouse
$ sudo -u hdfs hdfs dfs -chmod 750 /user/hive/warehouse/*
Important: These instructions override the recommendations in the Hive section of the CDH4
Installation Guide.
• If you used Cloudera Manager to set up HiveServer2, turn off HiveServer2 impersonation in Cloudera Manager.
Note: You should not need HiveServer2 impersonation because Sentry provides fine-grained access control. But if you still want to use HiveServer2 impersonation for some reason, you can do so by setting the following property manually in the Sentry configuration file, sentry-site.xml:

<property>
  <name>sentry.allow.hive.impersonation</name>
  <value>true</value>
</property>
• The Hive user must be able to submit MapReduce jobs. You can ensure that this is true by setting the
minimum user ID for job submission to 0. Set this value in Cloudera Manager under MapReduce Properties,
or (if you are not using Cloudera Manager) edit the taskcontroller.cfg file and set min.user.id=0.

Roles and Privileges
Sentry uses a role-based privilege model. A role is a collection of rules for accessing a given Hive object. The
objects supported in the current release are server, database, table, and URI. Access to each object is governed
by privileges: Select, Insert, or All.
Note: All is not supported explicitly in the table scope; you have to specify Select and Insert
explicitly.
For example, a rule for the Select privilege on the customer table in the sales database would be formulated as
follows:
server=server1->db=sales->table=customer->action=Select
Each object must be specified as a hierarchy of the containing objects, from server to table, followed by the
privilege granted for that object. A role can contain multiple such rules, separated by commas. For example a
role might contain the Select privilege for the customer and items tables in the sales database, and the
Insert privilege for the sales_insights table in the reports database. You would specify this as follows:
sales_reporting = \
    server=server1->db=sales->table=customer->action=Select, \
    server=server1->db=sales->table=items->action=Select, \
    server=server1->db=reports->table=sales_insights->action=Insert
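Sentry parses these rules itself; purely as an illustration of the grammar, here is a hypothetical Java sketch (parseRule is my own helper, not part of Sentry) that splits one rule into its object hierarchy and action:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SentryRuleDemo {

    // Split one rule of the form key=value->key=value->... into an ordered map.
    static Map<String, String> parseRule(String rule) {
        Map<String, String> parts = new LinkedHashMap<>();
        for (String pair : rule.split("->")) {
            String[] kv = pair.split("=", 2);
            parts.put(kv[0].trim(), kv[1].trim());
        }
        return parts;
    }

    public static void main(String[] args) {
        Map<String, String> parts =
                parseRule("server=server1->db=sales->table=customer->action=Select");
        // Hierarchy runs from server down to table, then the granted action.
        System.out.println(parts.get("db") + "/" + parts.get("table")
                + " -> " + parts.get("action"));
    }
}
```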

Users and Groups
• A user is an entity that is permitted by the authentication subsystem to access the Hive service. This entity
can be a Kerberos principal, an LDAP userid, or an artifact of some other pluggable authentication system
supported by HiveServer2.
• A group connects the authentication system with the authorization system. It is a collection of one or more
users who have been granted one or more authorization roles. Sentry allows a set of roles to be configured
for a group.
• A configured group provider determines a user’s affiliation with a group. The current release supports
HDFS-backed groups and locally configured groups. For example,
analyst = sales_reporting, data_export, audit_report
Here the group analyst is granted the roles sales_reporting, data_export, and audit_report. The members
of this group can run the HiveQL statements that are allowed by these roles. If this is an HDFS-backed group,
then all the users belonging to the HDFS group analyst can run such queries.
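The resolution described above (user belongs to groups, groups carry roles) can be sketched as follows. This is an illustrative toy using the analyst example from the text, not Sentry's actual implementation:

```java
import java.util.*;

public class GroupRolesDemo {
    public static void main(String[] args) {
        // Group-to-roles mapping, mirroring the policy-file line:
        //   analyst = sales_reporting, data_export, audit_report
        Map<String, List<String>> groupRoles = new HashMap<>();
        groupRoles.put("analyst",
                Arrays.asList("sales_reporting", "data_export", "audit_report"));

        // A user holds the union of the roles of all groups they belong to.
        List<String> userGroups = Arrays.asList("analyst");
        Set<String> roles = new TreeSet<>();
        for (String g : userGroups) {
            roles.addAll(groupRoles.getOrDefault(g, Collections.emptyList()));
        }
        System.out.println(roles);
    }
}
```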
User to Group Mapping
You can configure Sentry to use either Hadoop groups or groups defined in the policy file.
Important: You can use either Hadoop groups or local groups, but not both at the same time.
To configure Hadoop groups:
Set the sentry.provider property in sentry-site.xml to
org.apache.sentry.provider.file.HadoopGroupResourceAuthorizationProvider.
OR
To configure local groups:
Define local groups in a [users] section of the Sentry configuration file, sentry-site.xml. For
example:
[users]
user1 = group1, group2, group3

user2 = group2, group3
Installing Sentry
1. To download Sentry, go to the Sentry Version and Download Information page.
2. Install Sentry as follows, depending on your operating system:
• On Red Hat and similar systems:
$ sudo yum install sentry
• On SLES systems:
$ sudo zypper install sentry
• On Ubuntu and Debian systems:

$ sudo apt-get update; sudo apt-get install sentry

Friday, July 19, 2013

PATCH : HTTP Method RFC 5789


"In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version."

The HTTP PATCH method can be used to update a JSON resource efficiently, by sending only the changes rather than the whole representation.

JSON Patch

JSON Example :

{
  "name": "abc123",
  "colour": "blue",
  "count": 4
}

Suppose you want to update the “count” member’s value to 5.
Now, you could just PUT the entire thing back with the updated value, but that requires a recent GET of its state and can get heavyweight (especially for mobile clients).

For these and other reasons, many APIs define a convention for POSTing to resources that allows partial updates. E.g.

POST /widgets/abc123?action=incrementCount

PATCH /widgets/abc123 HTTP/1.1
Host: api.example.com
Content-Length: ...
Content-Type: application/json-patch+json

[
  {"op": "replace", "path": "/count", "value": 5}
]

Easy to understand, and even write by hand. If it succeeds, the response can be as simple as:

HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close


Your patch succeeded. Yay! 
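From Java 11 onward, such a request can be built with java.net.http.HttpClient. A sketch: the URL is the hypothetical one from the example above, and the request is only constructed here, not sent:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PatchRequestDemo {
    public static void main(String[] args) {
        String patch = "[{\"op\":\"replace\",\"path\":\"/count\",\"value\":5}]";
        // HttpURLConnection rejects PATCH, but HttpClient's generic
        // method(...) builder accepts any verb.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/widgets/abc123"))
                .header("Content-Type", "application/json-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(patch))
                .build();
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending it is then a matter of httpClient.send(request, HttpResponse.BodyHandlers.ofString()).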

Friday, July 05, 2013

OPEN LDAP STEP BY STEP INSTALLATION ON LINUX

PREREQUISITES:
Download Berkeley DB (db-4.8.30.NC.tar.gz) from the following link.

$ sudo su
root$ mkdir /usr/local/BerkelyDB4.8
root$ cd /usr/local/BerkelyDB4.8/
root$ chown -R rjuluri:dba /usr/local/BerkelyDB4.8/
root$ tar xvf db-4.8.30.NC.tar.gz
root$ cd db-4.8.30.NC
root$ cd build_unix

INSTALL BERKELEY DB:


$ ../dist/configure
$ make
$ make install

Exit the root shell.

INSTALLATION OF BERKELEY DB IS COMPLETED; NOW INSTALL OPENLDAP

Get the software
You can obtain a copy of the software by following the instructions on the OpenLDAP download page (http://www.openldap.org/software/download/). It is recommended that new users start with the latest release.

tar xvf openldap*.gz

cd /scratch/rjuluri/openldap-2.4.35/

CPPFLAGS="-I/usr/local/include -I/usr/local/BerkeleyDB.4.8/include" LDFLAGS="-L/usr/local/lib -L/usr/local/BerkeleyDB.4.8/lib -R/usr/local/lib -R/usr/local/BerkeleyDB.4.8/lib -R/usr/local/ssl/lib" LD_LIBRARY_PATH="/usr/local/BerkeleyDB.4.8/lib" ./configure --prefix=/scratch/rjuluri/openldap-2.4.35
make depend
make
make test
sudo su (root)
make install
Add these lines to /scratch/rjuluri/openldap-2.4.35/etc/openldap/slapd.conf:
include         /scratch/rjuluri/openldap-2.4.35/etc/openldap/schema/cosine.schema
include         /scratch/rjuluri/openldap-2.4.35/etc/openldap/schema/inetorgperson.schema
include         /scratch/rjuluri/openldap-2.4.35/etc/openldap/schema/nis.schema

Edit the configuration file.
Use your favorite editor to edit the provided slapd.conf(5) example (usually installed as /usr/local/etc/openldap/slapd.conf) to contain a BDB database definition of the form:
database bdb
suffix "dc=<MY-DOMAIN>,dc=<COM>"
rootdn "cn=Manager,dc=<MY-DOMAIN>,dc=<COM>"
rootpw secret
directory /usr/local/var/openldap-data

Be sure to replace <MY-DOMAIN> and <COM> with the appropriate domain components of your domain name. For example, for example.com, use:
database bdb
suffix "dc=example,dc=com"
rootdn "cn=Manager,dc=example,dc=com"
rootpw secret
directory /usr/local/var/openldap-data


START OPENLDAP:
You are now ready to start the stand-alone LDAP server, slapd(8), by running the command:
su root -c /scratch/rjuluri/openldap-2.4.35/libexec/slapd

To check that the server is running and configured correctly, you can run a search against it with ldapsearch(1). By default, ldapsearch is installed as /scratch/rjuluri/openldap-2.4.35/bin/ldapsearch:
ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts

Note the use of single quotes around command parameters to prevent special characters from being interpreted by the shell. This should return:
dn:
namingContexts: dc=example,dc=com

vi example.ldif

## DEFINE DIT ROOT/BASE/SUFFIX ####
## uses RFC 2377 format
## replace example and com as necessary below
## or for experimentation leave as is

## dcObject is an AUXILIARY objectclass and MUST
## have a STRUCTURAL objectclass (organization in this case)
# this is an ENTRY sequence and is preceded by a BLANK line

dn: dc=example,dc=com
dc: example
description: My wonderful company as much text as you want to place
 in this line up to 32K continuation data for the line above must
 have <CR> or <CR><LF> i.e. ENTER works
 on both Windows and *nix system - new line MUST begin with ONE SPACE
objectClass: dcObject
objectClass: organization
o: Example, Inc.

## FIRST Level hierarchy - people
## uses mixed upper and lower case for objectclass
# this is an ENTRY sequence and is preceded by a BLANK line

dn: ou=people, dc=example,dc=com
ou: people
description: All people in organisation
objectclass: organizationalunit

## SECOND Level hierarchy
## ADD a single entry under FIRST (people) level
# this is an ENTRY sequence and is preceded by a BLANK line
# the ou: Human Resources is the department name

dn: cn=Robert Smith,ou=people,dc=example,dc=com
objectclass: inetOrgPerson
cn: Robert Smith
cn: Robert J Smith
cn: bob  smith
sn: smith
uid: rjsmith
userpassword: rJsmitH
carlicense: HISCAR 123
homephone: 555-111-2222
mail: r.smith@example.com
mail: rsmith@example.com
mail: bob.smith@example.com
description: swell guy
ou: Human Resources

#######################################################################

./ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f example.ldif
./ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
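A quirk worth noting in the LDIF above: a continuation line must begin with exactly one space, and that space is stripped when the value is unfolded. A small Python sketch of the unfolding rule from RFC 2849, for illustration only:

```python
def unfold_ldif(text):
    """Unfold LDIF continuation lines: a line beginning with a single
    space continues the previous line, and that one space is stripped."""
    lines = []
    for raw in text.splitlines():
        if raw.startswith(" ") and lines:
            lines[-1] += raw[1:]  # drop the one leading space, then join
        else:
            lines.append(raw)
    return lines

ldif = "dn: dc=example,dc=com\ndescription: My wonderful comp\n any"
print(unfold_ldif(ldif))
# ['dn: dc=example,dc=com', 'description: My wonderful company']
```

Because only the first space is removed, a word split across the fold must resume immediately after that space, otherwise the joined value gains or loses characters.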

Friday, June 28, 2013

Oracle NoSQL Database with Cloudera Distribution for Hadoop

Using Oracle NoSQL Database with Cloudera Distribution for Hadoop

By Deepak Vohra
Get a test project up and running to explore the basic principles involved.
Introduced in 2011, Oracle NoSQL Database is a highly available, highly scalable, key/value storage based (nonrelational) database that provides support for CRUD operations via a Java API. A related technology, the Hadoop MapReduce framework, provides a distributed environment for developing applications that process large quantities of data in parallel on large clusters.
In this article we discuss integrating Oracle NoSQL Database with Cloudera Distribution for Hadoop (CDH) on Windows OS via an Oracle JDeveloper project (download). We will also demonstrate processing the NoSQL Database data in Hadoop using a MapReduce job.

Setup

The following software is required for this project. Download and install anything on the list you don’t already have according to the respective instructions.
Install Java 1.7 in a directory (without spaces in its name) in the directory path. Set the JAVA_HOME environment variable.

Configuring Oracle NoSQL Database in Oracle JDeveloper

First, we’ll need to configure the NoSQL database server as an external tool in JDeveloper. Select Tools>External Tools. In the External Tools window select New. In the Create External Tool wizard select Tool Type: External Program and click Next. In Program Options, specify the following program options.
Program Executable : C:\JDK7\Java\jdk1.7.0_05\bin\java.exe
Arguments          : -jar ./lib/kvstore-1.2.123.jar kvlite
Run Directory      : C:\OracleNoSQL\kv-1.2.123

Click Finish in Create External Tools:
nosql-hadoop-f1 
Oracle NoSQL Database is now configured as an external tool; the external tool name may vary based on whether other tools requiring the same program executable are also configured.  Click on OK in External Tools.
Next, select Tools>Java 1. The Oracle NoSQL Database server starts up and a key-value (KV) store is created. 
nosql-hadoop-f2
The NoSQL Database store has the following args by default:

-root  kvroot
-store kvstore
-host  localhost
-port  5000
-admin 5001

On subsequent runs of the external tool for the NoSQL Database server the existing KV store is opened with the same configuration with which it was created: 

nosql-hadoop-f3


Running the HelloBigDataWorld Example

The NoSQL Database package includes some examples in the C:\OracleNoSQL\kv-1.2.123\examples directory. We will run the following examples in this article:
  • hello.HelloBigDataWorld
  • hadoop.CountMinorKeys 
The HelloBigDataWorld example can be run using an external tool configuration or as a Java application.
Using as an External Tool
To run HelloBigDataWorld as an external tool select Tools>External Tools and create a new external tool configuration with the same procedure as with the NoSQL Database server. We need to create two configurations, one for compiling the HelloBigDataWorld file and another for running the compiled application. Specify the following program options for compiling HelloBigDataWorld.
Program Executable : C:\JDK7\Java\jdk1.7.0_05\bin\javac.exe
Arguments          : -cp ./examples;./lib/kvclient-1.2.123.jar examples/hello/HelloBigDataWorld.java
Run Directory      : C:/OracleNoSQL/kv-1.2.123

The program options for compiling the hello/HelloBigDataWorld.java file are shown below. Click Finish.
 nosql-hadoop-f4
An external tool Javac gets created. Select Tools>Javac to compile the hello/HelloBigDataWorld.java class. Next, create an external tool for running the hello.HelloBigDataWorld class file using the following configuration.
Program Executable : C:\JDK7\Java\jdk1.7.0_05\bin\java.exe
Arguments          : -cp ./examples;./lib/kvclient-1.2.123.jar hello.HelloBigDataWorld
Run Directory      : C:/OracleNoSQL/kv-1.2.123

The classpath should include the kvclient-1.2.123.jar file. Click Finish.
nosql-hadoop-f5
To run the hello.HelloBigDataWorld class select Tools>Java. The hello.HelloBigDataWorld application runs and a short message is written.
 nosql-hadoop-f6
Running in a Java Application
Next, we will run the hello.HelloBigDataWorld application as a Java application in an Oracle JDeveloper project. To create a new application:
  • Select Java Desktop Application in New Gallery.
  • Specify an Application Name (e.g., NoSQLDB) and select the default directory. Click Next.
  • Specify a Project Name (e.g., NoSQLDB) and click Finish.
Next, create a Java class in the project.
  • Select Java Class in New Gallery and click OK.
  • In Create Java Class specify class name as “HelloBigDataWorld” and package as “hello”. Click OK. The hello.HelloBigDataWorld class is added to the application.
  • Copy the hello/HelloBigDataWorld.java file from the C:\OracleNoSQL\kv-1.2.123\examples directory to the class file in Oracle JDeveloper.
In the example application, a new oracle.kv.KVStore is created using the KVStoreFactory class:

store = KVStoreFactory.getStore(new KVStoreConfig(storeName, hostName + ":" + hostPort));

A key/value pair is created and stored in the KV store:

final String keyString = "Hello";
final String valueString = "Big Data World!";
store.put(Key.createKey(keyString), Value.createValue(valueString.getBytes()));

The key/value pair is retrieved from the store and output, and the KV store is then closed:

final ValueVersion valueVersion = store.get(Key.createKey(keyString));
System.out.println(keyString + " " + new String(valueVersion.getValue().getValue()) + "\n");
store.close();
The hello.HelloBigDataWorld class is shown below.
 nosql-hadoop-f7
To run the HelloBigDataWorld class add the C:\OracleNoSQL\kv-1.2.123\lib\kvclient-1.2.123.jar file to the Libraries and Classpath.
 nosql-hadoop-f8
To run the application right-click on the class and select Run. The hello.HelloBigDataWorld class runs and one line of output is generated.  The example application creates only one key/value pair.
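Conceptually, the store used above behaves like a persistent map from keys to byte arrays. A toy in-memory Python sketch of the put/get/close cycle (this is an illustration of the semantics, not the Oracle NoSQL API):

```python
class ToyKVStore:
    """In-memory stand-in for a key/value store's put/get/close cycle."""
    def __init__(self):
        self._data = {}
        self._open = True

    def put(self, key, value):
        assert self._open, "store is closed"
        self._data[key] = value        # last write wins, like a KV put

    def get(self, key):
        assert self._open, "store is closed"
        return self._data.get(key)     # None when the key is absent

    def close(self):
        self._open = False

store = ToyKVStore()
store.put("Hello", b"Big Data World!")
print(store.get("Hello").decode())  # Big Data World!
store.close()
```

The real store adds persistence, replication, and versioned values (ValueVersion), but the CRUD surface used by HelloBigDataWorld is this simple.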
In the next section we will run the hadoop.CountMinorKeys.java example. To prepare for that, rerun the HelloBigDataWorld example to create additional key/value pairs in the KV store:
 nosql-hadoop-f9 

Processing NoSQL Database Data in Hadoop

Next, we will run the Hadoop example in C:\OracleNoSQL\kv-1.2.123\examples\hadoop\CountMinorKeys.java. Create a Java class called hadoop/CountMinorKeys.java and copy the \examples\hadoop\CountMinorKeys.java file to that class.
 nosql-hadoop-f10
Add the CDH jar file to the project.
 nosql-hadoop-f11
Configuring the Hadoop Cluster
Next, we will configure the Hadoop cluster. In CDH2 there are three configuration files: core-site.xml, mapred-site.xml, and hdfs-site.xml. In conf/core-site.xml specify the fs.default.name parameter, which is the URI of the NameNode.

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
  </property>
</configuration>
The core-site.xml is shown below.
 nosql-hadoop-f12
In conf/mapred-site.xml specify the mapred.job.tracker parameter, the host or IP and port of the JobTracker. Specify host as localhost and port as 9101.

<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9101</value>
  </property>
</configuration>
The conf/mapred-site.xml is shown below.
 nosql-hadoop-f13
Specify the dfs.replication parameter in the conf/hdfs-site.xml configuration file. The dfs.replication parameter specifies how many machines a single file should be replicated to before becoming available. The value should not exceed the number of DataNodes. (We use one DataNode in this example.)

<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
The conf/hdfs-site.xml is shown below.
nosql-hadoop-f14

Having configured a Hadoop cluster, we now start the cluster. But, first, we need to create a Hadoop Distributed File System (HDFS) for the files used in processing the Hadoop data. Run the following command in Cygwin.
>cd hadoop-0.20.1+169.127
>bin/hadoop namenode -format
A storage directory, \tmp\hadoop-dvohra\dfs, is created.
nosql-hadoop-f15
 
  • We also need to create a deployment profile for the hadoop.CountMinorKeys application. Select the project node in Application Navigator and select File>New.
  • In New Gallery select Deployment Profiles JAR File and click OK.
  • In Create Deployment Profile, specify Deployment Profile Name (hadoop) and click OK.
  • In Edit JAR Deployment Profile Properties, select the default settings and click OK.
  • A new deployment profile is created. Click OK.
To deploy the deployment profile right-click on the NoSQL project and select Deploy>hadoop.
 nosql-hadoop-f16

In Deployment Action, select Deploy to JAR file and click Next. Click Finish in Summary. The hadoop.jar gets deployed to the deploy directory in the JDeveloper project. Copy the hadoop.jar to the C:\cygwin\home\dvohra\hadoop-0.20.1+169.127 directory as the application shall be run from the hadoop-0.20.1+169.127 directory in Cygwin.
Starting the Hadoop Cluster
Typically a multi-node Hadoop cluster consists of the following nodes.

NameNode (master)   : HDFS storage layer management. We formatted the NameNode to create a storage layer in the previous section.
JobTracker (master) : MapReduce data processing management; assigns tasks.
DataNode (slave)    : Stores filesystem data; HDFS storage layer processing.
TaskTracker (slave) : MapReduce processing.
Secondary NameNode  : Stores modifications to the filesystem and periodically merges the changes with the current HDFS state.

Next, we shall start the nodes in the cluster. To start the NameNode run the following commands in Cygwin.
> cd hadoop-0.20.1+169.127
> bin/hadoop namenode
 nosql-hadoop-f17
Start the Secondary NameNode with the following commands:
> cd hadoop-0.20.1+169.127
> bin/hadoop secondarynamenode
nosql-hadoop-f18

Start the DataNode:
> cd hadoop-0.20.1+169.127
> bin/hadoop datanode
nosql-hadoop-f19

Start the JobTracker :
> cd hadoop-0.20.1+169.127
> bin/hadoop jobtracker
nosql-hadoop-f20

Start the TaskTracker:
> cd hadoop-0.20.1+169.127
> bin/hadoop tasktracker
 nosql-hadoop-f21


Running a MapReduce Job

Next, we shall run the hadoop.CountMinorKeys application, for which we created the hadoop.jar file. The hadoop.CountMinorKeys application runs a MapReduce job on the Oracle NoSQL Database data in the KV store and generates output in the Hadoop HDFS. The NoSQL Database server Java API is in the kvclient-1.2.123.jar file. Copy kvclient-1.2.123.jar from the C:\OracleNoSQL\kv-1.2.123\lib directory to the C:\cygwin\home\dvohra\hadoop-0.20.1+169.127\lib directory, which is in the classpath of Hadoop. Run the hadoop.jar with the following commands in Cygwin.
> cd hadoop-0.20.1+169.127
> bin/hadoop jar hadoop.jar hadoop.CountMinorKeys kvstore dvohra-PC:5000 hdfs://localhost:9100/tmp/hadoop/output/
The MapReduce job runs and the output is generated in the hdfs://localhost:9100/tmp/hadoop/output/ directory.
 nosql-hadoop-f22
List the files in the tmp/hadoop/output directory with the following command.
> bin/hadoop dfs -ls hdfs://localhost:9100/tmp/hadoop/output
The MapReduce job output is generated in the part-r-00000 file, which gets listed with the previous command.
nosql-hadoop-f23

Get the part-r-00000 file to the local filesystem with the command:
bin/hadoop dfs -get hdfs://localhost:9100/tmp/hadoop/output/part-r-00000  part-r-00000
The MapReduce job output is shown in Oracle JDeveloper; the output lists the number of records for each major key in the KV store, which was created with the first example application, hello.HelloBigDataWorld.
 nosql-hadoop-f24
Congratulations, your project is complete! 
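The shape of the MapReduce job can be sketched outside Hadoop: the mapper emits a (major key, 1) pair for each record in the store, and the reducer sums the counts per key. A pure-Python simulation of that logic (the record layout here is a hypothetical illustration, not the actual CountMinorKeys source):

```python
from collections import defaultdict

def map_phase(records):
    """Emit (major_key, 1) for every key/value record, like the mapper."""
    for major_key, _minor_key in records:
        yield major_key, 1

def reduce_phase(pairs):
    """Sum the emitted counts per major key, like the reducer."""
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return dict(counts)

records = [("Hello", "a"), ("Hello", "b"), ("World", "a")]
print(reduce_phase(map_phase(records)))  # {'Hello': 2, 'World': 1}
```

Hadoop distributes exactly this map and reduce work across TaskTrackers and writes the per-key totals to the part-r-00000 file shown above.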
