    8 jobs found, pricing in USD

    We are a Digital Out Of Home provider processing large amounts of data. We require an AWS specialist to assist in creating a listener to process extremely large volumes of webhook data coming from Wi-Fi infrastructure. We need to retrieve and queue this information to be processed by Redshift. Once the information is stored, we need further analysis to generate reports.

    $40 / hr (Avg Bid)
    14 bids
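    A minimal, stdlib-only sketch of the buffer-and-batch design the posting above implies (all names here are illustrative assumptions, not from the posting; a real deployment would sit behind an HTTP endpoint and push full batches to a queue service ahead of a Redshift bulk load):

    ```python
    import json
    import queue

    # Hypothetical sketch: buffer incoming webhook payloads and emit
    # fixed-size batches suitable for a bulk COPY into Redshift.
    BATCH_SIZE = 3  # tiny for illustration; real loads batch thousands of rows


    class WebhookBuffer:
        """Accumulates webhook events and yields full batches for loading."""

        def __init__(self, batch_size=BATCH_SIZE):
            self.batch_size = batch_size
            self._events = queue.Queue()

        def receive(self, payload: str):
            """Called by the HTTP listener for each incoming webhook body."""
            self._events.put(json.loads(payload))

        def drain_batch(self):
            """Return up to batch_size events, or an empty list if none queued."""
            batch = []
            while len(batch) < self.batch_size and not self._events.empty():
                batch.append(self._events.get())
            return batch


    buf = WebhookBuffer()
    for i in range(4):
        buf.receive(json.dumps({"device": f"ap-{i}", "seen": i}))

    first = buf.drain_batch()   # three buffered events
    second = buf.drain_batch()  # the one remaining event
    print(len(first), len(second))
    ```

    Batching matters here because Redshift ingests efficiently via bulk COPY from staged files, not row-by-row inserts.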

    From SQL, Oracle, and flat files to Redshift

    $897 (Avg Bid)
    4 bids

    I would like to build a live or close-to-live data pipeline from an Amazon RDS (Postgres) instance to Amazon Redshift and would like help from someone who has already DONE IT. I am not looking for help with general Amazon RDS or Redshift; I am only interested in applicants who have actually done such an integration in real life.

    $46 / hr (Avg Bid)
    7 bids

    Description - Looking for an experienced candidate who can migrate applications to the cloud. Please mention your experience with AWS and how many applications you have successfully migrated. Skills: VPC, EC2, Elastic LB, Redshift, Route 53, Glacier.

    $9 - $23
    0 bids

    Looking for someone who is an expert in Python and familiar with databases such as Redshift, to build a few Python packages for a very small, basic ETL solution: read data from S3 files in Amazon AWS, apply some transformations, and load the data into Redshift. Time is very critical and I need this project delivered within 5 days. If you need more details on this project, contact me directly.

    $352 (Avg Bid)
    8 bids
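    A sketch of the transform step this posting describes: read CSV rows (as they might arrive from S3 files), apply a simple normalisation, and write clean CSV back out for staging. The column names and the transformation itself are assumptions for illustration:

    ```python
    import csv
    import io

    def transform(raw_csv: str) -> str:
        """Normalise a raw CSV extract: trim whitespace, uppercase country codes.

        Columns ("id", "country") are hypothetical; swap in the real schema.
        """
        reader = csv.DictReader(io.StringIO(raw_csv))
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row = {k: v.strip() for k, v in row.items()}
            row["country"] = row["country"].upper()
            writer.writerow(row)
        return out.getvalue()

    raw = "id,country\n1, us \n2, de\n"
    print(transform(raw))
    ```

    In a real pipeline the cleaned file would be written back to S3 and loaded with Redshift's `COPY ... FORMAT CSV` command rather than inserted row by row.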

    1) Please provide one (small/medium) use case of your Hadoop ETL work in detail.
    2) If there are some X number of customers and they made some purchases, can you write SQL to find the top 5 customers who made the most purchases?
    3) What did you use MapReduce for? What did you use Pig for? What did you use Hive for? Where does the data get transformed? / While performing transformations, where is the data?
    4) Please provide one (small/medium) use case of your work with Spark in detail.
    5) Please provide one (small/medium) use case of your work with Redshift in detail.
    6) Did you perform the ETL work? Where does the data get transformed? / While performing transformations, where is the data? Or did you simply load the data from source to Redshift?
    7) What are your data sources? How did you get the data from source to the AWS environment?
    8) Did you use Python? If yes, what did you use Python for? Explain in detail.

    $57 / hr (Avg Bid)
    19 bids
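    One possible answer to the top-5-customers question in the posting above, demonstrated with an in-memory SQLite database (the table and column names are assumed; the same GROUP BY / ORDER BY / LIMIT pattern works on Redshift):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE purchases (customer_id INTEGER, amount REAL)")
    # Toy data: customer c spends c * 10 in total, so 7 is the biggest spender.
    conn.executemany(
        "INSERT INTO purchases VALUES (?, ?)",
        [(c, c * 10.0) for c in range(1, 8)],
    )

    top5 = conn.execute(
        """
        SELECT customer_id, SUM(amount) AS total
        FROM purchases
        GROUP BY customer_id
        ORDER BY total DESC
        LIMIT 5
        """
    ).fetchall()
    print(top5)
    ```

    The query aggregates per customer, sorts by total spend descending, and keeps the first five rows.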

    I am looking for a Redshift DWH developer in or around Berlin who can help me with a local Redshift DWH implementation. Must be able to travel to Berlin occasionally.

    $32 / hr (Avg Bid)
    Local Featured Urgent
    1 bid

    [login to view URL] seo; [login to view URL] deals; [login to view URL] traffic;

    $544 (Avg Bid)
    17 bids