Hadoop makes it practical to solve problems involving huge volumes of data in many business applications. Thanks to Freelancer.com, Hadoop experts can now find many related jobs online to earn extra income.

Hadoop is an open-source software framework released under the Apache License, and one of the most popular such frameworks today. It enables applications to process data sets that can reach petabytes in size by distributing the work across clusters of machines. Hadoop jobs solve complicated big-data problems involving data that can be unstructured, structured, or a combination of both. These jobs require a deep understanding of analytics, particularly clustering and targeting, and the skills apply in fields beyond computing as well.
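To make the framework's core idea concrete, here is a minimal, single-process sketch of the map/shuffle/reduce flow that Hadoop's MapReduce model popularized. All names and the sample input are illustrative; Hadoop itself distributes these phases across a cluster, which this sketch does not attempt.

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a mapper would.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word, as a reducer would.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "hadoop handles big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

The value of the real framework is that each phase runs in parallel on many nodes over data far too large for one machine, while the programmer writes only the map and reduce functions.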

If you are a Hadoop expert seeking work online, then Freelancer.com is right for you. It is a job-posting website that matches freelancers with jobs in their particular professions. The site offers a wide range of Hadoop jobs, and as with other categories, these come with several benefits. Perhaps the greatest is the competitive pay. With hundreds of Hadoop jobs posted on Freelancer.com around the clock, the hiring process is also quick and easy.

Hire Hadoop Consultants

    145 jobs found, pricing in USD

    We are looking for a trainer to deliver a workshop on Big Data and Hadoop in the third week of this month. The trainer should have experience delivering workshops on Big Data and Hadoop. Only experienced trainers should bid.

    $9 - $24
    0 bids
    Project for Huy N. 6 days left
    VERIFIED

    Hello Huy, I looked at your profile and it seems to match our project. Are you interested in working full-time? If so, we can discuss the details of the job.

    $650 (Avg Bid)
    1 bid
    BIG DATA HADOOP 2 days left

    Please find details about the training/consulting requirement. Contents:
    • Read Kafka data and load it into HDFS using Scala and Spark Streaming
    • Read MySQL data and load it into HDFS using Spark and Scala
    • Hadoop production resource allocation
    • Druid
    • Oozie scheduler and Java API/framework integration with the Hadoop cluster
    More details:
    1. Is it real-time data processing or batch processing? There is no real-time data processing here.
    • Spark Streaming job: reads data from Kafka and loads it into HDFS using Spark and Scala
    • Spark batch job: reads data from MySQL and loads it into HDFS using Spark and Scala
    • Oozie: used for scheduling both kinds of jobs
    2. More detail on the data ingestion part: after the above jobs load data into HDFS, a Druid server performs the indexing.
    3. Audience experience in Big Data: none of the audience has Big Data experience, apart from a couple of team members with a Java programming background.

    $178 (Avg Bid)
    15 bids
    Make Data Job 2 days left

    I am looking for a Hadoop Big Data specialist. Please apply only if you are a Data Science and Machine Learning expert. You should have experience with clusters, and you must have your own cluster to run my experiment. Experience in R and Python is required.

    $97 (Avg Bid)
    9 bids
    Need Java Programmers 1 day left
    VERIFIED

    Need Java Programmers for a small task

    $24 (Avg Bid)
    28 bids

    I have worked with Big Data technologies and Scala for 2 years, and I am looking for someone who can help me with issues on the project.

    $136 / hr (Avg Bid)
    36 bids

    For a full-time job at a startup inside an enterprise company, we need a full-stack developer. The ideal candidate will be responsible for conceptualizing and executing clear, quality code to develop the best software. Requirements:
    • Strong debugging and troubleshooting skills
    • 3+ years of development experience
    • Development of complex business-logic engines from scratch
    • UX and graphic design expertise
    • Optimizing flows down to the minimum possible number of inputs
    • Vast experience with external API integrations
    • Angular 4, [url removed, login to view], Hadoop, C#, WCF, SQL Server, microservices
    • Live knowledge and prolonged use of Agile methodology in previous roles
    • A go-getter with a can-do attitude, a team player, attentive to detail
    Please publish your best complex work (flows, designs, etc.) for us to review.

    $20 / hr (Avg Bid)
    54 bids

    It is a 7-month project in Gurgaon. We are looking for a senior profile in Hadoop and machine learning.

    $127 (Avg Bid)
    6 bids

    Need to consolidate RDBMS and unstructured data in Hadoop for statistical analysis.

    $34 / hr (Avg Bid)
    14 bids

    Need to consolidate RDBMS and unstructured data in Hadoop for statistical analysis.

    $45 / hr (Avg Bid)
    13 bids

    Hi, we are working on a reporting tool. At the moment we can query only one table from a single source at a time. We plan to use Apache Spark for data fusion across multiple data tables from multiple sources; for example, one table may be in PostgreSQL and another in MySQL. We need to join data efficiently without moving big result sets over the network (approaches such as MapReduce or something similar can be considered).

    $123 (Avg Bid)
    17 bids
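The "data fusion" idea in the posting above can be sketched as a join performed in the application layer rather than inside either database. This is only an illustration: two in-memory SQLite databases stand in for the PostgreSQL and MySQL sources, and the table and column names are made up. Spark would perform an analogous hash join across its data sources, in parallel and at much larger scale.

```python
import sqlite3

pg = sqlite3.connect(":memory:")   # stands in for the PostgreSQL source
my = sqlite3.connect(":memory:")   # stands in for the MySQL source

pg.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
pg.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Ana"), (2, "Bo")])

my.execute("CREATE TABLE orders (customer_id INTEGER, total REAL)")
my.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 30.0), (1, 12.5), (2, 8.0)])

# Build a hash table on the smaller side, then stream the larger side
# past it -- the core of a hash join, which avoids shipping one
# database's full result set into the other over the network.
names = dict(pg.execute("SELECT id, name FROM customers"))
totals = {}
for customer_id, total in my.execute("SELECT customer_id, total FROM orders"):
    name = names[customer_id]
    totals[name] = totals.get(name, 0.0) + total

print(totals)  # {'Ana': 42.5, 'Bo': 8.0}
```

Only the join keys and the needed measure columns cross source boundaries here, which is the efficiency property the posting is after.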

    We are looking for a freelance trainer in data analysis using R, machine learning, Big Data, and Hadoop. The trainer should have good communication skills.

    $154 (Avg Bid)
    8 bids

    Seeking an HDFS (Hadoop) architect on a contract basis.

    $20 (Avg Bid)
    9 bids

    I have a requirement for a Hadoop Big Data trainer with Hive, MapReduce, Pig, Big SQL, Sqoop, Flume, Spark, and Scala/Python.

    $356 (Avg Bid)
    11 bids

    I need support for a Hadoop analytics or cloud developer role.

    $489 (Avg Bid)
    26 bids

    Please read this post in its entirety before bidding.
    1. DO NOT RESPOND if you are not an expert with PostgreSQL.
    2. DO NOT RESPOND if your profile does not mention architecting or supporting a PostgreSQL database.
    3. I will not be able to send you anything from the database; everything will be done via screen sharing in a collaborative session with my engineers. If this is not acceptable, DO NOT RESPOND.
    I need a PostgreSQL expert to help us troubleshoot an issue with a database upgrade and to help with performance tuning of the database and server. The system we need assistance on runs CentOS Linux release 7.2.1511 (Core) with psql (PostgreSQL) 9.4.9.
    The issue we are experiencing: when trying to create a foreign key constraint from a newly created table (FOO) to a certain existing table (BAR), the process goes into waiting. We can successfully create foreign keys to any other table; for example, table FOO with a foreign key to BUU works. Reindexing BAR also goes into waiting. I believe table BAR is corrupt, although I can still select from it successfully.

    $18 / hr (Avg Bid)
    5 bids

    Datasource: [url removed, login to view] / [url removed, login to view]. Download the monthly *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies:
    • Time: month < quarterly < year
    • ASN: AS pool > Allocated ASs > Advertised ASs
    • People: users < population
    • GDP
    • Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}
    The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data is provided for cross-reference when you download the data from the given URL.

    $155 (Avg Bid)
    6 bids
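The roll-up that operation 1 in the posting above describes can be sketched in a few lines. The rows here are made up (the real inputs are the downloaded monthly files), and the columns are reduced to two measures for brevity: each row is (year, quarter, country, users, allocated ASNs), and the cube aggregates both measures at the quarterly level for country = India.

```python
from collections import defaultdict

# Hypothetical sample rows standing in for the downloaded data:
# (year, quarter, country, users, allocated_asns)
rows = [
    (2009, "Q1", "India", 50, 300),
    (2009, "Q1", "India", 20, 100),
    (2009, "Q2", "India", 70, 250),
    (2009, "Q1", "Brazil", 40, 500),
]

# Slice on country == "India", then roll up along the Time hierarchy to
# the quarter level, summing each measure into its cube cell.
cube = defaultdict(lambda: [0, 0])
for year, quarter, country, users, asns in rows:
    if country == "India":
        cell = cube[(year, quarter)]
        cell[0] += users   # People measure
        cell[1] += asns    # ASN measure

print(cube[(2009, "Q1")])  # [70, 400]
```

Operation 2 is the same pattern with the slice on Region instead of country and the roll-up carried one level further, from quarter to year.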

    Datasource: [url removed, login to view] / [url removed, login to view]. Download the monthly *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies:
    • Time: month < quarterly < year
    • ASN: AS pool > Allocated ASs > Advertised ASs
    • People: users < population
    • GDP
    • Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}
    The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data is provided for cross-reference when you download the data from the given URL.

    $831 (Avg Bid)
    2 bids

    We need a part-time resource on Hadoop and Spark to give online support to US clients on weekday mornings for around 90 minutes, 6:00 am to 8:00 am IST. We will pay 20000 per month. Only candidates with a minimum of 4 years of experience are eligible to bid.

    $443 (Avg Bid)
    21 bids

    Datasource: [url removed, login to view] / [url removed, login to view]. Download the monthly *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies:
    • Time: month < quarterly < year
    • ASN: AS pool > Allocated ASs > Advertised ASs
    • People: users < population
    • GDP
    • Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}
    The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data is provided for cross-reference when you download the data from the given URL.

    $33 (Avg Bid)
    2 bids

    Top Hadoop Community Articles