    70 jobs found, pricing in USD

    We are looking to develop an Apache Spark (or equivalent) pipeline to replace our hosted ML platform and host the server ourselves. The current hosted platform performs quite well, with ROC AUC, precision, accuracy, recall, and F1 all above 0.98 (mostly above 0.99), but at 300,000+ production API transactions it is getting quite expensive, so we are looking for alternatives. Our current three classification models (in an ensemble configuration) use roughly 300 features and seem to perform best with decision-tree models, but we would like to explore other model types to see which perform optimally. We have a training data set available in an MSSQL database that we can give you access to, and we will need to develop a streaming API so we can submit samples for analysis in real time and then upload the results (along with the extracted features) to a different table in the MSSQL database. Our desktop software first checks the database to see whether an identical sample has already been submitted and analyzed; if it is not in the database, the software submits the features to the hosted ML platform for analysis and then uploads the results and extracted features to the database.

    $616 (Avg Bid)
    10 bids
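
    A minimal, hedged sketch of how the dedup-then-score part of the posting above could look in Spark with Scala. The JDBC URL, table names (dbo.PendingSamples, dbo.AnalysisResults), the SampleHash column, and the saved-model path are illustrative assumptions, not details from the posting; the real-time streaming API the posting asks for would sit on top of the same model but depends on details not given here.

```scala
// Hypothetical sketch only: score pending samples with a pre-trained
// decision-tree pipeline, skipping any sample already analyzed.
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.PipelineModel

object ScoreSamples {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("sample-scoring").getOrCreate()

    val jdbcUrl = "jdbc:sqlserver://dbhost:1433;databaseName=ml"  // assumed MSSQL endpoint
    val props = new java.util.Properties()
    props.setProperty("user", sys.env("DB_USER"))
    props.setProperty("password", sys.env("DB_PASS"))

    // Samples awaiting analysis and results already stored (assumed table names).
    val pending  = spark.read.jdbc(jdbcUrl, "dbo.PendingSamples", props)
    val existing = spark.read.jdbc(jdbcUrl, "dbo.AnalysisResults", props).select("SampleHash")

    // Keep only samples whose hash is not already in the results table.
    val fresh = pending.join(existing, Seq("SampleHash"), "left_anti")

    // Pre-trained ensemble saved as a Spark ML pipeline (assumed path).
    val model  = PipelineModel.load("hdfs:///models/ensemble")
    val scored = model.transform(fresh)

    // Write scalar outputs back to the results table (assumed schema).
    scored.select("SampleHash", "prediction")
      .write.mode("append").jdbc(jdbcUrl, "dbo.AnalysisResults", props)

    spark.stop()
  }
}
```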
    Kafka Experts Needed 4 days left
    VERIFIED

    Kafka expert needed with knowledge of Kafka Streams and Kafka Connect [url removed, login to view]

    $148 (Avg Bid)
    7 bids

    Hi Alexander V., I noticed your profile and would like to offer you my project. We can discuss any details over chat.

    $20 / hr (Avg Bid)
    3 bids

    Explain the project life cycle, documentation, day-to-day project activities, and real-time issues faced in the project, with code if possible.

    $32 (Avg Bid)
    6 bids
    Lead Engineer - Big Data 20h left
    VERIFIED

    Tavant is a digital products and solutions company that delivers cutting-edge products and solutions to customers across a wide range of industries, such as Consumer Lending, Aftermarket, Media & Entertainment, and Retail, in North America, Europe, and Asia-Pacific. We are executing a large enterprise big data project for the world's largest credit bureau and are looking for people who can join our team and do some awesome work.

    $23 - $196
    0 bids

    I am looking for a Scala/Spark developer who can teach me Spark with Scala in a couple of days.

    $122 (Avg Bid)
    2 bids

    I have worked with big data technologies and Scala for 2 years, and I am looking for someone who can help me with project issues.

    $137 / hr (Avg Bid)
    35 bids

    Hi, we are working on a reporting tool. At the moment we can query only one table from a single source at a time. We plan to use Apache Spark to do data fusion across multiple data tables from multiple sources; for example, one table may be in PostgreSQL and another in MySQL. We should be able to perform joins efficiently without having to move big result sets over the network (ideas along the lines of MapReduce or similar can be considered).

    $123 (Avg Bid)
    17 bids
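
    A hedged sketch of the idea in the posting above, using Spark's built-in JDBC source: each table is read through its own connector and the join runs inside Spark, with simple filters pushed down so less data is pulled from each database. All hostnames, table names, columns, and the join key are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession

object CrossSourceJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("cross-source-join").getOrCreate()

    // Orders live in PostgreSQL (assumed host, database, and table).
    val orders = spark.read.format("jdbc")
      .option("url", "jdbc:postgresql://pg-host:5432/sales")
      .option("dbtable", "public.orders")
      .option("user", sys.env("PG_USER"))
      .option("password", sys.env("PG_PASS"))
      .load()

    // Customers live in MySQL (assumed host, database, and table).
    val customers = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://mysql-host:3306/crm")
      .option("dbtable", "customers")
      .option("user", sys.env("MYSQL_USER"))
      .option("password", sys.env("MYSQL_PASS"))
      .load()

    // The join itself runs inside Spark; the date filter can be pushed down
    // to PostgreSQL so less data crosses the network (assumed column names).
    val report = orders
      .filter("order_date >= '2018-01-01'")
      .join(customers, "customer_id")
      .groupBy("country")
      .count()

    report.show()
    spark.stop()
  }
}
```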
    Sparkstreaming Ended
    VERIFIED

    Design a Spark Streaming mechanism.

    $187 (Avg Bid)
    6 bids
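
    The posting gives no further detail, so this is only a generic Spark Structured Streaming sketch in Scala: read lines from a socket, count words, and print each micro-batch to the console. The host and port are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("streaming-wordcount").getOrCreate()
    import spark.implicits._

    // Read a stream of lines from a local test socket (placeholder source).
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    // Split each line into words and keep a running count per word.
    val counts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    // Print the full updated counts after every micro-batch.
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```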

    I need support for a Hadoop analytics or cloud developer role.

    $489 (Avg Bid)
    26 bids

    Data source: [url removed, login to view] / [url removed, login to view]. Download month-wise *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies: Time: month < quarterly < year; ASN: AS pool > Allocated ASs > Advertised ASs; People: users < population; GDP; Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}. The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data for cross-reference is available when you download the data from the given URL.

    $155 (Avg Bid)
    6 bids
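
    The posting does not name a tool, but since this page is Spark-centric, here is a hedged sketch of how the first datacube might be materialized with Spark SQL's cube operator, treating users, population, and GDP as measures. The input path and column names (quarter, as_pool, allocated_asn, advertised_asn, users, population, gdp, country) are assumptions about how the downloaded data could be parsed.

```scala
// Hypothetical sketch: build the quarterly, India-only datacube with Spark's
// `cube` operator; schema and paths are assumed, not taken from the posting.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

object AsnDatacube {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("asn-datacube").getOrCreate()

    // Monthly records parsed from the downloaded files (assumed CSV layout).
    val facts = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/asn_monthly.csv")

    // Datacube 1: Time = quarterly, ASN hierarchy levels, Location = India.
    val indiaCube = facts
      .filter("country = 'India'")
      .cube("quarter", "as_pool", "allocated_asn", "advertised_asn")
      .agg(
        sum("users").as("users"),
        sum("population").as("population"),
        sum("gdp").as("gdp"))

    indiaCube.write.mode("overwrite").parquet("out/cube_india_quarterly")
    spark.stop()
  }
}
```

    The second, yearly, region-level cube would follow the same pattern with a year column and a filter on the five regions.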

    Data source: [url removed, login to view] / [url removed, login to view]. Download month-wise *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies: Time: month < quarterly < year; ASN: AS pool > Allocated ASs > Advertised ASs; People: users < population; GDP; Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}. The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data for cross-reference is available when you download the data from the given URL.

    $831 (Avg Bid)
    2 bids

    This is an urgent requirement for a stealth-mode start-up needing developers with experience in big data, data analytics, and related fields. This will be a contract engagement for a minimum of 3 months, based out of Bangalore. We're looking for candidates with 4-6 years of experience in Python, Apache Kylin, Apache Spark, and related technologies.

    $944 (Avg Bid)
    17 bids

    Data source: [url removed, login to view] / [url removed, login to view]. Download month-wise *[url removed, login to view] files for the years 2009 to 2017.
    Dimensions: Time, ASN, People, GDP, Location.
    Concept hierarchies: Time: month < quarterly < year; ASN: AS pool > Allocated ASs > Advertised ASs; People: users < population; GDP; Location: Region > Country, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}. The list of countries in each region can be found in [url removed, login to view].
    Perform the following OLAP operations:
    1. Construct a datacube using the dimensions Time = quarterly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = country, where country = India.
    2. Construct a datacube using the dimensions Time = yearly, ASN = {AS pool, Allocated ASs, Advertised ASs}, People = {users, population}, GDP, and Location = Region, where Region = {AFRINIC, APNIC, ARIN, RIPENCC, LACNIC}.
    Sample data for cross-reference is available when you download the data from the given URL.

    $33 (Avg Bid)
    2 bids

    I'd like to find someone with experience in Hadoop, Apache Spark, Scala, and Kafka who could ask me as many questions as possible based on my CV and help me answer them in a more native-English way. Each session lasts about 2-3 hours. I will pay NZD 10-15/hour. If you are interested, please contact me; I would appreciate your help. Thanks.

    $11 / hr (Avg Bid)
    9 bids

    I need someone who can work on IoT, big data, Flink, Spark, and Hadoop.

    $579 (Avg Bid)
    25 bids

    Experience: 6 to 10 years. Job description in brief: develop on a big data architecture and the Hadoop stack, including HDFS clusters, MapReduce, Pig, Hive, Spark, and YARN resource management. Hands-on programming experience in any of Python, Scala, R, or Java. Assist and support proofs of concept as big data technology evolves. Spark SQL, Spark Core, Spark Streaming, Kafka, and Scala experience is a must. Able to work with the leadership team and define the learning and unlearning metrics. Understanding of and experience with a NoSQL database (preferred: Cassandra or MongoDB).

    $447 (Avg Bid)
    17 bids

    Looking for a big data architect / data scientist. We have multiple requirements around distributed processing: we need to manage huge volumes of data and perform CRUD operations on them. Your responsibilities are designing and implementing the business use cases and taking care of the end-to-end process. You must be proficient in big data technologies and tools. Java web application development is preferred but not mandatory. This is a long-term project, so we are looking for a freelancer interested in a long-term relationship. We have a set of specifications, which we will explain during the interview.

    $24 / hr (Avg Bid)
    9 bids

    Video training on Big Data Hadoop. It would be a screen recording with voice-over. The recording will be approximately 10 hours.

    $306 (Avg Bid)
    7 bids
