
Help with MapReduce with Hadoop

$30-250 AUD

Closed
Posted over 1 year ago

Paid on delivery
Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Please stay away, auto bidders. Thank you
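As a rough illustration of the pattern the client describes (not part of the brief), here is a minimal single-machine sketch of MapReduce's map, shuffle, and reduce phases in Python, using the classic word-count example. The function names are illustrative, not from any specific framework:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # Reduce phase: sum all the counts emitted for a single word.
    return (word, sum(counts))

def run_job(lines):
    # Shuffle phase: sort intermediate pairs so equal keys are adjacent,
    # then group by key and hand each group to the reducer. On a real
    # Hadoop cluster, the map and reduce calls would run in parallel on
    # different machines; this sketch only shows the data flow.
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return [reducer(k, (c for _, c in g))
            for k, g in groupby(pairs, key=itemgetter(0))]
```

For example, `run_job(["big data big", "data"])` returns `[("big", 2), ("data", 2)]`: each input line is mapped independently, and the shuffle step is what lets all counts for the same word reach the same reducer.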
Project ID: 35225956

About the project

3 proposals
Remote project
Active 1 year ago

3 freelancers are bidding on average $144 AUD for this job
Hello, Client. I have read your job description carefully. I have a deep understanding of, and experience with, the MapReduce/Hadoop stack you mentioned. I have previously worked on many projects for other employers. Here is my profile URL: https://www.freelancer.com/u/LongVuDinh. Check out my past reviews and skills. I would like to discuss the specifics with you so I can deliver successful results. Thank you, Dinh Long.
$200 AUD in 7 days
5.0 (6 reviews)
4.2
Hello, I am ready to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. Please send me a message to discuss this project in chat. Thank you
$133.33 AUD in 2 days
5.0 (8 reviews)
4.0
Hi! Hope you are well! I think it's better to use Apache Spark with Java/Scala. Apache Spark is the new generation of Apache Hadoop, and it is fully compatible with HDFS and the other Hadoop components. You could check whether it fits your requirements and let me know. You could also use AWS cloud services such as EC2 on multiple free-tier accounts; I can set that up as well if you want. Note that Spark requires memory-intensive servers/clusters, since it processes data in RAM rather than on disk as Hadoop does. I have plenty of experience in the field; I have been a Data Engineer for 3+ years. Also, since I'm new to Freelancer, I'll do it for a minimal amount if you give me a good rating. Cheers!
$100 AUD in 7 days
0.0 (0 reviews)
0.0

About this client

Karachi, Pakistan
4.8
108
Member since Dec 10, 2018

Client verification

Freelancer ® is a registered Trademark of Freelancer Technology Pty Limited (ACN 142 189 759)
Copyright © 2024 Freelancer Technology Pty Limited (ACN 142 189 759)