Hadoop / HBase / Nutch jobs

    2,000 Hadoop / HBase / Nutch jobs found, pricing in USD

    I am in urgent need of a Hadoop/Spark developer who is proficient in both Scala and Python for a data processing task. I have a huge volume of unstructured data that needs to be processed and analyzed swiftly and accurately. Key Project Responsibilities: - Scrubbing and cleaning the unstructured data to detect and correct errors. - Designing algorithms using Scala and Python to process data in Hadoop/Spark. - Ensuring effective data processing and overall system performance. The ideal fit for this role is a professional who has: - Expertise in the Hadoop and Spark frameworks. - Proven experience in processing unstructured data. - Proficient coding skills in both Scala and Python. - A deep understanding of data structures and algorithms. - Familiarity with data ...

    $25 / hr (Avg Bid)
    38 bids
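The scrubbing step described in the posting above can be sketched in a few lines; the rules below (collapsing whitespace, dropping stray control bytes, normalising null markers) are illustrative assumptions, since the real rules depend on the data:

```python
import re

def scrub(record: str) -> str:
    """Minimal cleaning pass for one raw text record (illustrative only)."""
    record = record.replace("\x00", "")                  # drop stray NUL bytes
    record = re.sub(r"\s+", " ", record).strip()         # collapse whitespace
    record = re.sub(r"(?i)^(n/?a|null|-)$", "", record)  # normalise null markers
    return record

print(scrub("  Hadoop\t\tSpark \x00 job  "))  # -> "Hadoop Spark job"
```

In a real Hadoop/Spark job a function like this would run inside the map stage, once per record.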

    ...and natural language processing 3. Strong proficiency in programming languages such as Python, Java, and C++, as well as web development frameworks like Node.js and React 4. Experience with cloud computing platforms such as AWS, Azure, or Google Cloud, and containerization technologies like Docker and Kubernetes 5. Familiarity with data engineering and analytics tools and techniques, such as Hadoop, Spark, and SQL 6. Excellent problem-solving and analytical skills, with the ability to break down complex technical challenges into manageable components and solutions 7. Strong project management and communication skills, with the ability to collaborate effectively with both technical and non-technical stakeholders 8. Familiarity with agile development methodologies and best pr...

    $1618 (Avg Bid)
    NDA
    98 bids

    We are looking for an Informatica BDM developer with 7+ yrs of experience who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer Experience: 5+ Yrs 100% Remote Contract: Long term Timings: 10:30 am - 07:30 pm IST Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    $1301 (Avg Bid)
    5 bids

    ...which will include parameters such as patient age ranges, geographical regions, social conditions, and specific types of cardiovascular diseases. Key responsibilities: - Process distributed data using Hadoop/MapReduce or Apache Spark - Developing an RNN model (preferably Python) - Analyzing the complex CSV data (5000+ records) - Identifying and predicting future trends based on age, region, types of diseases and other factors - Properly visualizing results in digestible diagrams Ideal candidates should have: - Experience in data analysis with Python - Solid understanding of Hadoop/MapReduce or Apache Spark - Proven ability in working with Recurrent Neural Networks - Excellent visualization skills to represent complex data in static or dynamic dashboards - Experien...

    $488 (Avg Bid)
    84 bids

    I am looking for an experienced Senior Data Engineer for interview training. Your primary responsibilities would be data cleaning and preprocessing, designing and optimizing databases, and implementing ETL processes. Key responsibilities include: - Clean and preprocess data to ensure its quality and efficiency. - Design and optimize databases, aiming for both flexibility and speed. - Implement ETL (Extract, Transform, Load) processes to facilitate the effective and secure movement of data. Skills and Experience: - Proficient in Python, SQL, and Hadoop. - Expertise in handling medium-sized databases (1GB-1TB). - Proven track record in handling ETL processes. Your expertise in these areas will be crucial to the successful completion of thi...

    $53 / hr (Avg Bid)
    17 bids

    I have encountered a problem with my Hadoop project and need assistance. My system is showing ": HADOOP_HOME and are unset", and I am not certain whether I've set the HADOOP_HOME and related variables correctly. This happens when creating a pipeline release in DevOps. In this project, I am looking for someone who: - Has extensive knowledge of Hadoop and its environment variables - Can determine whether I have set the HADOOP_HOME and related variables correctly, and resolve any issues regarding the same - Is able to figure out the version of Hadoop installed on my system and solve any compatibility issues I will pay for the solution immediately.

    $22 / hr (Avg Bid)
    15 bids
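The error in the posting above usually means Hadoop's launch scripts could not find the environment variables they require. A minimal preflight check for a pipeline step, assuming the standard variable names (how the poster's DevOps release is wired is unknown):

```python
import os

def missing_hadoop_env(env=None):
    """Return the names of required Hadoop env vars that are unset or empty.
    HADOOP_HOME and JAVA_HOME are the standard names Hadoop's scripts expect."""
    env = os.environ if env is None else env
    required = ("HADOOP_HOME", "JAVA_HOME")
    return [name for name in required if not env.get(name)]

# Fail fast in CI with a readable message instead of Hadoop's terse error.
problems = missing_hadoop_env()
if problems:
    print("unset variables: " + ", ".join(problems))
```

Running this as the first step of the release stage makes the misconfiguration obvious before Hadoop is ever invoked.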

    *Title: Freelance Data Engineer* *Description:* We are seeking a talented freelance data engineer to join our team on a project basis. The ideal candidate will have a strong background in data engineering, with expertise in designing, implementing, and maintaining data pipelines and infrastructure. You will work closely with our data scientists and analysts to ensure the smooth flow of data from various sources to our data warehouse, and to support the development of analytics and machine learning solutions. This is a remote position with flexible hours. *Responsibilities:* - Design, build, and maintain scalable and efficient data pipelines to collect, process, and store large volumes of data from diverse sources. - Collaborate with data scientists and analysts to understand data require...

    $84 (Avg Bid)
    3 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $171 (Avg Bid)
    14 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $181 (Avg Bid)
    26 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MR. The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processes - Strong teaching abilities for beginners - Demonstrated experience in data processing and analysis.

    $21 (Avg Bid)
    11 bids

    ...commonly used packages, especially with GCP. Hands-on experience in data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $13 / hr (Avg Bid)
    11 bids

    I am looking for a skilled professional who can efficiently set up a big data cluster. REQUIREMENTS: • Proficiency in Elasticsearch, Hadoop, Spark, and Cassandra • Experience in working with large-scale data storage (10+ terabytes). • Able to structure data effectively. SPECIFIC TASKS INCLUDE: - Setting up the Elasticsearch, Hadoop, Spark, and Cassandra big data cluster. - Ensuring the data to be stored is structured. - Preparing for the ability to handle more than 10 terabytes of data. The ideal candidate will have substantial experience with large data structures and a deep understanding of big data database technology. I encourage experts in big data management and those well-versed in big data best practices to bid for this project.

    $30 / hr (Avg Bid)
    3 bids

    We are looking for an Informatica BDM developer with 7+ yrs of experience who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer Experience: 5+ Yrs 100% Remote Contract: Long term Timings: 10:30 am - 07:30 pm IST Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    $1218 (Avg Bid)
    3 bids

    I am seeking a skilled professional proficient in managing big data tasks with Hadoop, Hive, and PySpark. The primary aim of this project involves processing and analyzing structured data. Key Tasks: - Implementing Hadoop, Hive, and PySpark for my project to analyze large volumes of structured data. - Using Hive and PySpark for sophisticated data analysis and processing techniques. Ideal Skills: - Proficiency in the Hadoop ecosystem - Experience with Hive and PySpark - Strong background in working with structured data - Expertise in big data processing and data analysis - Excellent problem-solving and communication skills Deliverables: - Converting raw data into useful information using Hive, and visualizing query results as graphical representations. - C...

    $17 / hr (Avg Bid)
    15 bids

    ...currently seeking a Hadoop professional with strong expertise in PySpark for a multi-faceted project. Your responsibilities will include, but not be limited to: - Data analysis: You'll be working with diverse datasets including customer data, sales data and sensor data. Your role will involve deciphering this data, identifying key patterns and drawing out impactful insights. - Data processing: A major part of this role will be processing the mentioned datasets and preparing them effectively for analysis. - Performance optimization: The ultimate aim is to enhance our customer targeting, boost sales revenue and identify patterns in sensor data. Utilizing your skills to optimize performance in these sectors will be highly appreciated. The ideal candidate will be skilled in ...

    $463 (Avg Bid)
    25 bids

    ...R), and other BI essentials, join us for global projects. What We're Looking For: Business Intelligence Experts with Training Skills: Data analysis, visualization, and SQL Programming (Python, R) Business acumen and problem-solving Effective communication and domain expertise Data warehousing and modeling ETL processes and OLAP Statistical analysis and machine learning Big data technologies (Hadoop, Spark) Agile methodologies and data-driven decision-making Cloud technologies (AWS, Azure) and data security NoSQL databases and web scraping Natural Language Processing (NLP) and sentiment analysis API integration and data architecture Why Work With Us: Global Opportunities: Collaborate worldwide across diverse industries. Impactful Work: Empower businesses through data-drive...

    $21 / hr (Avg Bid)
    24 bids

    I'm launching an extensive project that needs a proficient expert in Google Cloud Platform (including BigQuery, GCS, Airflow/Composer), Hadoop, Java, Python, and Splunk. The selected candidate should display exemplary skills in these tools, and offer long-term support. Key Responsibilities: - Data analysis and reporting - Application development - Log monitoring and analysis Skills Requirements: - Google Cloud Platform (BigQuery, GCS, Airflow/Composer) - Hadoop - Java - Python - Splunk The data size is unknown at the moment, but proficiency in managing large datasets will be advantageous. Please place your bid taking into account all these factors. Your prior experience handling similar projects will be a plus. I look forward to working with a dedicated and know...

    $488 (Avg Bid)
    54 bids

    ...commonly used packages, especially with GCP. Hands-on experience in data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud DataProc, Cloud Storage, Cloud DataPrep, Cloud PubSub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $14 / hr (Avg Bid)
    6 bids

    As an ecommerce platform looking to optimize our data management, I require assistance with several key aspects of my AWS big data project, including: - Data lake setup and configuration - Development of AWS Glue jobs - Deployment of Hadoop and Spark clusters - Kafka data streaming The freelancer hired for this project must possess expertise in AWS, Kafka, and Hadoop. Strong experience with AWS Glue is essential given the heavy utilization planned for the tool throughout the project. Your suggestions and recommendations regarding these tools and technologies will be heartily welcomed, but keep in mind specific tools are needed to successfully complete this project.

    $844 (Avg Bid)
    20 bids

    ...Queries: Write a SQL query to find the second highest salary. Design a database schema for a given problem statement. Optimize a given SQL query. Solution Design: Design a parking lot system using object-oriented principles. Propose a data model for an e-commerce platform. Outline an approach to scale a given algorithm for large datasets. Big Data Technologies (if applicable): Basic questions on Hadoop, Spark, or other big data tools. How to handle large datasets efficiently. Writing map-reduce jobs (if relevant to the role). Statistical Analysis and Data Processing: Write a program to calculate statistical measures like mean, median, mode. Implement data normalization or standardization techniques. Process and analyze large datasets using Python libraries like Pandas. Rememb...

    $8 / hr (Avg Bid)
    36 bids
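The "second highest salary" item in the interview list above has a standard subquery answer; a quick check with SQLite, using a made-up employees table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
con.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("ana", 100), ("bo", 300), ("cy", 200), ("di", 300)],  # invented rows
)

# Second highest *distinct* salary: the max of everything below the max.
second = con.execute(
    "SELECT MAX(salary) FROM employees "
    "WHERE salary < (SELECT MAX(salary) FROM employees)"
).fetchone()[0]
print(second)  # -> 200
```

The subquery form handles duplicate top salaries correctly, which a naive `ORDER BY ... LIMIT 1 OFFSET 1` does not.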

    ...customer-centric software products · Analyze existing software implementations to identify areas of improvement and provide deadline estimates for implementing new features · Develop software applications using technologies that include, but are not limited to, core Java (11+), Kafka or another messaging system, web frameworks like Struts / Spring, relational (Oracle) and non-relational databases (SQL, MongoDB, Hadoop, etc.), with a RESTful microservice architecture · Implement security and data protection features · Update and maintain documentation for team processes, best practices, and software runbooks · Collaborate with git in a multi-developer team · Appreciation for clean and well documented code · Contribution to database design ...

    $1399 (Avg Bid)
    50 bids

    A data analysis/data engineering project involving big data needs to be done. The candidate must have command of big data solutions like Hadoop.

    $11 / hr (Avg Bid)
    8 bids

    Project Title: Advanced Hadoop Administrator Description: - We are seeking an advanced Hadoop administrator for an in-house Hadoop setup project. - The ideal candidate should have extensive experience and expertise in Hadoop administration. - The main tasks of the Hadoop administrator will include data processing, data storage, and data analysis. - The project is expected to be completed in less than a month. - The Hadoop administrator will be responsible for ensuring the smooth functioning of the Hadoop system and optimizing its performance. - The candidate should have a deep understanding of Hadoop architecture, configuration, and troubleshooting. - Experience in managing large-scale data processing and storage environments is requi...

    $310 (Avg Bid)
    3 bids

    I am looking for a freelancer to help me with a Proof of Concept (POC) project focusing on Hadoop. Requirement: We drop a file in HDFS, which is then pushed to Spark or Kafka, which pushes the final output/results into a database. The objective is to show we can handle millions of records as input and land them in the destination. The POC should be completed within 3-4 days and should have a simple level of complexity. Skills and experience required: - Strong knowledge of and experience with Hadoop - Familiarity with HDFS and Kafka/Spark - Ability to quickly understand and implement a simple POC project - Good problem-solving skills and attention to detail

    $169 (Avg Bid)
    9 bids
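A rough sketch of the flow the POC above asks for, with a local file standing in for the HDFS drop zone and SQLite standing in for the destination database (Spark/Kafka are deliberately omitted; this only shows the shape of the pipeline, not the real infrastructure):

```python
import csv
import io
import sqlite3

# Stand-in for the dropped file; in the real POC this would sit on HDFS
# and be read by a Spark job or streamed through Kafka.
raw = io.StringIO("id,value\n1,10\n2,20\n3,30\n")

con = sqlite3.connect(":memory:")  # stand-in destination database
con.execute("CREATE TABLE results (id INTEGER, value INTEGER)")

# "Pipeline": read records, transform (here: double the value), load in bulk.
rows = ((int(r["id"]), int(r["value"]) * 2) for r in csv.DictReader(raw))
con.executemany("INSERT INTO results VALUES (?, ?)", rows)

print(con.execute("SELECT COUNT(*), SUM(value) FROM results").fetchone())  # -> (3, 120)
```

The same read → transform → bulk-load shape scales to millions of records once the stand-ins are replaced by HDFS, Spark/Kafka, and a JDBC sink.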

    ...of DataNode 3: Mike
    Set the last two digits of the IP address of each DataNode:
    IP address of DataNode 1:
    IP address of DataNode 2:
    IP address of DataNode 3:
    Submission Requirements: Submit the following screenshots:
    - Use commands to create three directories on HDFS, named after the first name of each team member.
    - Use commands to upload the Hadoop package to HDFS.
    - Use commands to show the IP addresses of all DataNodes.
    - Provide detailed information (ls -l) of the blocks on each DataNode.
    - Provide detailed information (ls -l) of the fsimage file and edit log file.
    - Include screenshots of the Overview module, Startup Process module, DataNodes module, and Browse Directory module on the Web UI of HDFS.
    MapReduce Temperature Analysis: You are

    $15 (Avg Bid)
    2 bids

    A big data project in Java needs to be done in 24 hrs. The person needs to be experienced in Spark and Hadoop.

    $132 (Avg Bid)
    10 bids

    Looking for a Hadoop specialist to design the query optimisation. Currently, the search freezes when the user tries to run more than one search at a time. Need to implement a solution. This is a remote project. Share your ideas first if you have done any such work. Here the UI is in React and the backend is in Node.js.

    $16 / hr (Avg Bid)
    38 bids

    # Your code goes here
    # The import targets were blank in the original paste; the standard HBase
    # client classes are assumed here.
    import 'org.apache.hadoop.hbase.client.HTable'
    import 'org.apache.hadoop.hbase.client.Put'

    def jbytes(*args)
      args.map { |arg| arg.to_s.to_java_bytes }
    end

    def put_many(table_name, row, column_values)
      table = HTable.new(@hbase.configuration, table_name)
      p = Put.new(*jbytes(row))
      column_values.each do |column, value|
        family, qualifier = column.split(':')
        p.add(*jbytes(family, qualifier, value))
      end
      table.put(p)
    end

    # Call the put_many function with sample data
    put_many 'wiki', 'DevOps', {
      "text:" => "What DevOps IaC do you use?",
      "revision:author" => "Frayad Gebrehana",
      "revision:comment" => "Terraform" }

    # Get data from the 'wiki' table
    get 'wiki', 'DevOps'

    # Do not remove the exit call below
    exit

    $60 (Avg Bid)
    7 bids

    I am in need of assistance with Hadoop for the installation and setup of the platform. Skills and experience required: - Proficiency in Hadoop installation and setup - Knowledge of different versions of Hadoop (Hadoop 1.x and Hadoop 2.x) - Ability to work within a tight timeline (project needs to be completed within 7 hours) Please note that there is no specific preference for the version of Hadoop to be used.

    $13 (Avg Bid)
    2 bids

    ...Visualization of JanusGraph with Elasticsearch Integration for Relationship Analysis in Banking" Requirements Analysis: a. Conduct stakeholder interviews to gather system requirements b. Document use cases and user stories c. Define data schema and relationship mapping for JanusGraph d. Assess technical constraints and system integrations Planning and Design: a. Select the datastore (HBase or Cassandra) after analysing performance and scalability b. Define the JanusGraph schema, data model, and query patterns c. Plan data migration strategy and sequence from Elasticsearch to JanusGraph d. Design the algorithm for relationship creation between Main party and Other party e. Evaluate visualization libraries and choose the most appropriate for the Link Analysis cha...

    $300 (Avg Bid)
    1 bid

    ...Visualization of JanusGraph with Elasticsearch Integration for Relationship Analysis in Banking" Requirements Analysis: a. Conduct stakeholder interviews to gather system requirements b. Document use cases and user stories c. Define data schema and relationship mapping for JanusGraph d. Assess technical constraints and system integrations Planning and Design: a. Select the datastore (HBase or Cassandra) after analysing performance and scalability b. Define the JanusGraph schema, data model, and query patterns c. Plan data migration strategy and sequence from Elasticsearch to JanusGraph d. Design the algorithm for relationship creation between Main party and Other party e. Evaluate visualization libraries and choose the most appropriate for the Link Analysis cha...

    $84 (Avg Bid)
    1 bid

    WordPress black theme. Design as in the photo. Images can be taken from Udemy. Content: Coupon Code: 90OFFOCT23 (subscribe by 7 Oct '23 or till stock lasts) Data Engineering Career Path: Big Data Hadoop and Spark with Scala: Scala Programming In-Depth: Apache Spark In-Depth (Spark with Scala): DP-900: Microsoft Azure Data Fundamentals: Data Science Career Path: Data Analysis In-Depth (With Python): https://www

    $7 (Avg Bid)
    Guaranteed
    4 entries

    Seeking an expert in both Hadoop and Spark to assist with various big data projects. The ideal candidate should have intermediate level expertise in both Hadoop and Spark. Skills and experience needed for the job: - Proficiency in Hadoop and Spark - Intermediate level expertise in Hadoop and Spark - Strong understanding of big data concepts and tools - Experience working on big data projects - Familiarity with data processing and analysis using Hadoop and Spark - Ability to troubleshoot and optimize big data tools - Strong problem-solving skills and attention to detail

    $22 / hr (Avg Bid)
    12 bids

    I am looking for a freelancer to compare the performance metrics of Hadoop, Spark, and Kafka using the data that I will provide. Skills and experience required: - Strong knowledge of big data processing architectures, specifically Hadoop, Spark, and Kafka - Proficiency in analyzing and comparing performance metrics - Ability to present findings through written analysis, graphs and charts, and tables and figures The comparison should focus on key performance metrics such as processing speed, scalability, fault tolerance, throughput, and latency. The freelancer should be able to provide a comprehensive analysis of these metrics and present them in a clear and visually appealing manner. I will explain more about the data

    $157 (Avg Bid)
    23 bids
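Whatever stack is measured, the metrics the comparison above names (throughput, latency) reduce to a few arithmetic summaries over per-record timings; a minimal helper with invented numbers:

```python
def summarize(latencies_ms):
    """Derive simple throughput/latency figures from per-record latencies (ms).
    Assumes records were processed sequentially; a real benchmark also needs
    warm-up runs, repetition, and percentile latencies."""
    total_s = sum(latencies_ms) / 1000.0
    return {
        "records": len(latencies_ms),
        "throughput_rps": len(latencies_ms) / total_s,   # records per second
        "avg_latency_ms": sum(latencies_ms) / len(latencies_ms),
        "max_latency_ms": max(latencies_ms),
    }

stats = summarize([2.0, 4.0, 4.0])  # invented timings
print(stats)
```

Collecting the same dictionary per framework gives directly comparable rows for the requested tables and charts.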

    Looking for Hadoop Hive Experts I am seeking experienced Hadoop Hive experts for a personal project. Requirements: - Advanced level of expertise in Hadoop Hive - Strong understanding of big data processing and analysis - Proficient in Hive query language (HQL) - Experience with data warehousing and ETL processes - Familiarity with Apache Hadoop ecosystem tools (e.g., HDFS, MapReduce) - Ability to optimize and tune Hadoop Hive queries for performance If you have a deep understanding of Hadoop Hive and can effectively analyze and process big data, then this project is for you. Please provide examples of your previous work in Hadoop Hive and any relevant certifications or qualifications. I am flexible with the timeframe for completing the...

    $20 (Avg Bid)
    2 bids

    I am looking for a Kafka admin who can assist me with the following tasks: - Onboarding the Kafka cluster - Managing Kafka topics and partitions - It is already available in the company and we need to onboard it for our project - Should be able to size and scope - We will start with small data ingestion from the Hadoop data lake - Should be willing to work on a remote machine The ideal candidate should have experience in: - Setting up and configuring Kafka clusters - Managing Kafka topics and partitions - Troubleshooting Kafka performance issues The client already has all the necessary hardware and software for the Kafka cluster setup.

    $18 / hr (Avg Bid)
    10 bids

    Over the past years, I have devoted myself to a project involving Algorithmic Trading. My system leverages only pricing and volume data at market closing. It studies technical indicators for every stock in the S&P 500 from its IPO date, testing all possible indicator 'settings', as I prefer to call them. This process uncovers microscopic signals that suggest beneficial buying at market close and selling at the next day's close. Any signal with a p-value below 0.01 is added to my portfolio. Following this, the system removes correlated signals to prevent duplication. A Bayesian ranking of signals is calculated, and correlated signals with a lower rank are eliminated. The result is a daily optimized portfolio of buy/sell signals. This system, primarily built with numpy...

    $38 / hr (Avg Bid)
    NDA
    13 bids

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    $247 (Avg Bid)
    4 bids

    I am looking for a Hadoop developer with a strong background in data analysis. The scope of the project involves analyzing and interpreting data using Hadoop. The ideal candidate should have experience in Hadoop data analysis and be able to work on the project within a timeline of less than 1 month.

    $12 (Avg Bid)
    3 bids

    1: model and implement efficient big data solutions for various application areas using appropriately selected algorithms and data structures. 2: analyse methods and algorithms, to compare and evaluate them with respect to time and space requirements, and make appropriate design choices when solving real-world problems. 3: motivate and explain trade-offs in big data processing technique design and analysis in written and oral form. 4: explain the Big Data fundamentals, including the evolution of Big Data, the characteristics of Big Data and the challenges introduced. 6: apply the novel architectures and platforms introduced for Big Data, i.e., Hadoop, MapReduce and Spark, to complex problems on the Hadoop execution pl...

    $129 (Avg Bid)
    9 bids

    I am looking for a freelancer who can help me with an issue I am facing with launching Apache Gobblin in YARN. Here are the details of the project: Error Message: NoClassDefFoundError (Please note that this question was skipped, so the error message may not be accurate) Apache Gobblin Version: 2.0.0 YARN Configuration: Not sure Skills and Experience: - Strong knowledge of and experience with Apache Gobblin - Expertise in Hadoop/YARN configuration and troubleshooting - Familiarity with interrupt exceptions and related issues - Ability to diagnose and resolve issues in a timely manner - Excellent communication skills to effectively collaborate with me and understand the problem If you have the required skills and experience, please bid on thi...

    $25 / hr (Avg Bid)
    10 bids

    Write MapReduce programs that give you a chance to develop an understanding of principles when solving complex problems on the Hadoop execution platform.

    $25 (Avg Bid)
    9 bids
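The MapReduce model the assignment above refers to can be illustrated without a Hadoop cluster; a pure-Python word count showing the map, shuffle, and reduce phases (this mimics the model only, not the Hadoop API):

```python
from collections import defaultdict

def map_phase(line):
    # map: emit (word, 1) for every word in the input split
    for word in line.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    # reduce: sum all counts grouped under one key
    return word, sum(counts)

def word_count(lines):
    shuffled = defaultdict(list)          # shuffle: group values by key
    for line in lines:
        for word, one in map_phase(line):
            shuffled[word].append(one)
    return dict(reduce_phase(w, c) for w, c in shuffled.items())

print(word_count(["Hadoop runs MapReduce", "MapReduce runs on Hadoop"]))
```

On a real cluster the same mapper and reducer would be expressed through Hadoop's `Mapper`/`Reducer` classes, with the framework performing the shuffle across nodes.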

    It's a Java Hadoop MapReduce task. The program should run on the Windows OS. An algorithm must be devised and implemented that can recognize the language of a given text. Thank you.

    $33 (Avg Bid)
    8 bids

    Looking for a freelancer to help with a simple Hadoop/Spark task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and Spark - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write the steps of the Spark & Hadoop setup with some screenshots. 2. Import libraries and set the work background (steps + screenshots) 3. Load and discover the data (steps + screenshots + code) 4. Data cleaning and preprocessing (steps + screenshots + code) 5. Data analysis - Simple analysis (explanation, screenshots, code) - Moderate analysis (explanation

    $33 (Avg Bid)
    8 bids

    Looking for a freelancer to help with a simple Hadoop/Spark task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and Spark - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: Write a report that contains the following steps: 1. Write the steps of the Spark & Hadoop setup with some screenshots. 2. Import libraries and set the work background (steps + screenshots) 3. Load and discover the data (steps + screenshots + code) 4. Data cleaning and preprocessing (steps + screenshots + code) 5. Data analysis - Simple analysis (explanation, screenshots, code) - Moderate analysis (explanation

    $30 (Avg Bid)
    6 bids

    I am looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner’s individual needs, making sure that the classes are engaging and fun. The traine...

    $14 / hr (Avg Bid)
    1 bid

    I am looking for a freelancer with some experience in working with Hadoop and Spark, specifically in setting up a logging platform. I need full assistance in setting up the platform and answering analytical questions using log files within Hadoop. Ideal skills and experience for this project include: - Experience working with Hadoop and Spark - Knowledge of setting up logging platforms - Analytical skills to answer questions using log files

    $41 (Avg Bid)
    4 bids

    I am looking for a freelancer who can provide assistance through WebEx meetings. Here are the project requirements: Specific Azure topics: - Azure Networking Assistance type: - Virtual Assistance Preferred meeting type: - WebEx Meeting AZURE: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Services (ADLS), Azure Blob Services, Azure SQL DB, Azure Active Directory (AAD), Azure DevOps. Languages: Scala, Core Java, Python Databases: Hive, HBase Data Ingestion: Sqoop, Kafka, Spark Streaming Data Visualization: Tableau and Azure ADF Databricks Skills and experience: - Strong understanding of Azure Networking - Experience in providing virtual assistance - Proficiency in conducting WebEx meetings If you have the required skills and experience, please bid on this project.

    $482 (Avg Bid)
    4 bids

    I am looking for a freelancer who can develop a search engine (Apache Nutch 1.0) crawler system with an integrated AI backend. The project requires the following functionalities: - Optimized crawling and indexing - Advanced crawling and indexing with custom plugins - Crawling, indexing, and AI-driven data analysis - A single-page search engine The data analysis should be integrated with an existing database system. Additionally, the ideal candidate should have experience in working with neural networks as the AI algorithm.

    $264 (Avg Bid)
    7 bids

    Looking for a freelancer to help with a simple Hadoop/Spark task focusing on data visualization. The ideal candidate should have experience in: - Hadoop and Spark - Data visualization tools and techniques - Ability to work quickly and deliver results as soon as possible. The task is: Use the following link to get the Dataset: 1- Using the Hadoop/Spark software, execute three examples: simple, moderate, and advanced over the chosen DS. 2- For each case, write the code and a screenshot of the output. 3- Visualize the results of each example with an appropriate method.

    $24 (Avg Bid)
    $24 Avg Bid
    5 bids