Thursday, June 21, 2012

How to Backup/Restore Oracle Beehive Email,Calendar Data - Part II ( Using ThunderBird )


In my previous article I demonstrated the step-by-step process of backing up and restoring Oracle Beehive data (email, calendar, contacts) using Microsoft Outlook 2007. In this blog I demonstrate the step-by-step process for the Mozilla Thunderbird client.

There are two ways to back up and restore Oracle Beehive data using Thunderbird:

1) Using the ImportExportTools Add-on

ImportExportTools is an add-on for Mozilla Thunderbird.

a) Install the add-on

In Mozilla Thunderbird, go to Tools -> Add-ons, search for the ImportExportTools add-on, then download and install it.

After installation, the tools are available under Tools -> ImportExportTools.

Export:

Choose one of the options, "Export all the folders" or "Export all the folders with structure", then select a destination folder to which all these folders will be exported.

You can also select a single folder (for example, Inbox) instead of all folders.

Email messages can be exported in multiple formats (EML, HTML, etc.); then select the folder to which the mails should be exported.



The image above shows the mails exported in EML format.
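As a quick sanity check after the export, you can count the exported messages from a shell. This is a minimal sketch; BACKUP_DIR is a placeholder for whatever folder you chose in the add-on:

```shell
#!/bin/sh
# Count the .eml files under the export folder to confirm the backup ran.
# BACKUP_DIR is an example path; replace it with the folder you exported to.
BACKUP_DIR="${1:-$HOME/beehive-backup}"
find "$BACKUP_DIR" -type f -name '*.eml' | wc -l
```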

Import:

Select Tools -> ImportExportTools, then choose one of the options: import messages, import mbox file, or import all messages from a directory.

Select "import all messages from a directory" -> "also from its subdirectories".

Select the location of the backup folder to which you exported your mails.

Thunderbird then imports all the EML messages into its Inbox.

At the bottom of the Thunderbird window you can check the status of the import.

2) Copying the Thunderbird profile folder

Export:

a) Stop Thunderbird.
b) Go to the Thunderbird profile directory (for example, C:\Users\rjuluri\AppData\Roaming\Thunderbird\Profiles\drkctipd.default).
c) Copy the Mail, ImapMail, and calendar-data folders to an external disk.
These folders contain files such as INBOX and Sent (mbox files in which the mails are stored) along with their index files (INBOX.msf, Sent.msf, etc.).

Import:

a) Stop Thunderbird on the machine to which you want to import.
b) From the external disk, copy the Mail, ImapMail, and calendar-data folders into the Thunderbird profile directory of the other machine.
c) Start Mozilla Thunderbird; all the archived mails will be displayed.
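On Linux, the profile-copy steps above can be sketched as a small script. The profile and destination paths below are examples; on Windows the same folders can simply be copied in Explorer:

```shell
#!/bin/sh
# Back up Thunderbird mail and calendar data by copying the profile folders.
# Usage: backup-profile.sh <profile-dir> <destination-dir>
# The defaults are example paths; adjust them for your machine.
PROFILE="${1:-$HOME/.thunderbird/drkctipd.default}"
DEST="${2:-$HOME/thunderbird-backup}"

mkdir -p "$DEST"
# Thunderbird must not be running while these folders are copied.
for d in Mail ImapMail calendar-data; do
    if [ -d "$PROFILE/$d" ]; then
        cp -r "$PROFILE/$d" "$DEST/"
    fi
done
```

To restore, run the same copy in the other direction into the target machine's profile directory before starting Thunderbird.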








HADOOP INSTALLATION ON LINUX



In pioneer days they used oxen for heavy pulling, and when one ox couldn’t budge a log, they didn’t try to grow a larger ox. We shouldn’t be trying for bigger computers, but for more systems of computers.
  
                                                                             —Grace Hopper


We live in the data age. It’s not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the “digital universe” at 0.18 zettabytes in 2006 and forecast a tenfold growth by 2011, to 1.8 zettabytes. A zettabyte is 10^21 bytes, or equivalently one thousand exabytes, one million petabytes, or one billion terabytes. That’s roughly the same order of magnitude as one disk drive for every person in the world.


Hadoop was created by Doug Cutting, the creator of Apache Lucene, the widely used text search library. Hadoop has its origins in Apache Nutch, an open source web search engine, itself a part of the Lucene project.




The Hadoop projects that are covered in this book are described briefly here:

Common
A set of components and interfaces for distributed filesystems and general I/O
(serialization, Java RPC, persistent data structures).


Avro
A serialization system for efficient, cross-language RPC and persistent data
storage.

MapReduce
A distributed data processing model and execution environment that runs on large clusters of commodity machines.

HDFS
A distributed filesystem that runs on large clusters of commodity machines.

Pig
A data flow language and execution environment for exploring very large datasets. Pig runs on HDFS and MapReduce clusters.

Hive
A distributed data warehouse. Hive manages data stored in HDFS and provides a query language based on SQL (and which is translated by the runtime engine to MapReduce jobs) for querying the data.

HBase
A distributed, column-oriented database. HBase uses HDFS for its underlying
storage, and supports both batch-style computations using MapReduce and point queries (random reads).

ZooKeeper
A distributed, highly available coordination service. ZooKeeper provides primitives such as distributed locks that can be used for building distributed applications.


Sqoop
A tool for efficient bulk transfer of data between structured data stores (such as relational databases) and HDFS.

Oozie
A service for running and scheduling workflows of Hadoop jobs (including MapReduce, Pig, Hive, and Sqoop jobs).

Download a stable release, which is packaged as a gzipped tar file, from one of the Apache download mirrors (http://www.apache.org/dyn/closer.cgi/hadoop/common/).

Hadoop 2.0.0 is the latest version (hadoop-2.0.0-alpha.tar.gz); download it from http://apache.mirrors.lucidnetworks.net/hadoop/common/hadoop-2.0.0-alpha/

Unpack this file 

% tar xzf hadoop-2.0.0-alpha.tar.gz

Set the JAVA_HOME and HADOOP_INSTALL environment variables; as Hadoop is written in Java, it needs the location of a Java installation. Note that in shell assignments there must be no spaces around the = sign:

% export HADOOP_INSTALL=/usr/rjuluri/HADOOP/hadoop-2.0.0-alpha

% export JAVA_HOME=/usr/rjuluri/middleware/Jdev11.1.3/jdk160_18

% export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
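These exports last only for the current shell session. A minimal sketch of making them permanent by appending them to ~/.bashrc (the paths are the example locations used above):

```shell
#!/bin/sh
# Append the Hadoop environment variables to ~/.bashrc so that every
# new shell picks them up. Paths are the example locations from this post.
cat >> "$HOME/.bashrc" <<'EOF'
export HADOOP_INSTALL=/usr/rjuluri/HADOOP/hadoop-2.0.0-alpha
export JAVA_HOME=/usr/rjuluri/middleware/Jdev11.1.3/jdk160_18
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
EOF
```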

To verify the installation, run the following command

% hadoop version

Hadoop 2.0.0-alpha
Subversion http://svn.apache.org/repos/asf/hadoop/common/branches/branch-2.0.0-alpha/hadoop-common-project/hadoop-common -r 1338348
Compiled by hortonmu on Wed May 16 01:28:50 UTC 2012
From source with checksum 954e3f6c91d058b06b1e81a02813303f

Hadoop can be run in one of three modes:

Standalone (or local) mode
There are no daemons running and everything runs in a single JVM. Standalone
mode is suitable for running MapReduce programs during development, since it
is easy to test and debug them.

Common   fs.default.name                file:/// (default)
HDFS     dfs.replication                N/A
YARN     yarn.resourcemanager.address   N/A

In standalone mode, there is no further action to take, since the default properties are set for standalone mode and there are no daemons to run.

Pseudodistributed mode
The Hadoop daemons run on the local machine, thus simulating a cluster on a
small scale.

Common   fs.default.name                hdfs://localhost/
HDFS     dfs.replication                1
YARN     yarn.resourcemanager.address   localhost:8032

Modify the config files under the etc/hadoop directory of the Hadoop installation.

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>

If you are running YARN, use the yarn-site.xml file:

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
</configuration>

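As a sketch, the pseudo-distributed fs.default.name setting above can also be written into core-site.xml from the shell with a heredoc, assuming $HADOOP_INSTALL points at the unpacked release as set earlier (Hadoop 2.x keeps its configuration under etc/hadoop):

```shell
#!/bin/sh
# Write the pseudo-distributed core-site.xml described above.
# The HADOOP_INSTALL default is an example path from this post.
CONF_DIR="${HADOOP_INSTALL:-$HOME/hadoop-2.0.0-alpha}/etc/hadoop"
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>
</configuration>
EOF
```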

Make sure that SSH is installed and that an SSH server is running.
Then, to enable password-less login, generate a new SSH key with an empty passphrase:

% ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
% cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Test this with:

% ssh localhost

Before starting HDFS for the first time, format a new filesystem:

% hdfs namenode -format

Then, to start the HDFS and YARN daemons, type:

% start-dfs.sh
% start-yarn.sh

These commands start the HDFS daemons and, for YARN, a resource manager and a node manager. The resource manager web UI is at http://localhost:8088/

You can stop the daemons with:

% stop-dfs.sh
% stop-yarn.sh



Fully distributed mode
The Hadoop daemons run on a cluster of machines.

Common   fs.default.name                hdfs://namenode/
HDFS     dfs.replication                3 (default)
YARN     yarn.resourcemanager.address   resourcemanager:8032




Friday, June 01, 2012

How to Backup/Restore Oracle Beehive Email,Calendar Data - Part I ( Using Microsoft Outlook )




In this blog I will demonstrate the step-by-step process of backing up and restoring your Oracle Beehive data (emails, contacts, calendars, etc.) using Microsoft Outlook 2007. The procedure for both importing and exporting is similar in Outlook 2010.


To launch the Import/Export wizard, select the File menu, navigate to Open, and click Import and Export (as shown in the screenshot below).

Exporting:

In the Import and Export Wizard, select the "Export to a file" option and hit Next.

Under the "Export to a File" panel, select Personal Folder File (.pst) and hit Next.

Now make sure "Include subfolders" is checked; this ensures that all data, including emails, calendar, contacts, drafts, etc., is exported. To export only a single folder, choose that folder and hit Next.

Now give the backup file a name, choose the destination where you want it to be stored, and hit Finish.


Export Outlook File

Lastly, enter a password to secure the backup file. Since this .pst file can also be imported into someone else's Outlook, set a password on it for security reasons.


Create Outlook Data File Password

Once done, it will ask you to confirm the password. Enter the same password again, and it will begin exporting your data to the file.


Outlook Data File

This completes the export of your Oracle Beehive data.

Importing:

In the Import and Export wizard, select the "Import from another program or file" option and hit Next.


Export and Import Wizard


Under "File Type to import from", select Personal Folder File (.pst) and hit Next.

Now choose the file to import, select the options, and hit Next.


Import Outlook Data File

It will then ask you to enter the password for the backup file, which you set during the export.


Enter password to import the file

Finally, select the folder to import from and make sure "Include subfolders" is checked. You can either import into the current folder or choose another folder.


Import Outlook File


This completes the import of Oracle Beehive data into your Microsoft Outlook.
