Golang Stream Data to S3

  • Status: Closed
  • Prize: $1000
  • Proposals received: 6
  • Winner: jorissuppers

Contest summary

We are looking to contract several Golang developers to help build out our backend microservices. To do so, we are offering a contest to find the best Golang developers in the world. We welcome applications from both independent freelancers and outsourcing companies. This is your chance to show off your skills and work on a very meaningful project.

Recommended skills

Top entries in this contest


Public clarification board

  • TwoHat
    Contest Holder
    • 4 years ago

    It should have only one writer per file. If you generate a GUID and append it to the filename, you'll know it is yours.

    • 4 years ago
    1. yadavgajender087
      • 4 years ago

      Usage of ./s3_uploader:
      -acl="bucket-owner-full-control": ACL for new object
      -bucket="": S3 bucket name (required)
      -chunk_size=50MB: multipart upload chunk size (bytes, understands standard suffixes like "KB", "MB", "MiB", etc.)
      -expected_size=0: expected input size (fail if out of bounds)
      -key="": S3 key name (required; use / notation for folders)
      -mime_type="binary/octet-stream": Content-type (MIME type)
      -region="us-west-2": AWS S3 region
      -retries=4: number of retry attempts per chunk upload
      -sse=false: use server side encryption

      • 4 years ago
  • TwoHat
    Contest Holder
    • 4 years ago
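Given the flags listed above, a typical invocation might pipe a slow producer straight into the uploader. This is only an illustrative sketch: the database name, bucket, and key below are made up, and `s3_uploader` is the entrant's binary.

```shell
# Hypothetical example: stream a database dump to S3 without staging
# it on local disk. Bucket, key, and database names are placeholders.
pg_dump mydb | ./s3_uploader \
  -bucket=my-backups \
  -key=dumps/mydb.sql \
  -chunk_size=64MB \
  -region=us-west-2 \
  -retries=4
```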

    At least 10,000 QPS using at most an 8-core machine.

    • 4 years ago
    1. yadavgajender087
      • 4 years ago

      Is this right? S3 has a maximum multipart part count of 10,000, so the number of parts is total_input_size / chunk_size (rounded up), and chunk_size must therefore be at least total_input_size / 10000.

      • 4 years ago
  • yadavgajender087
    • 4 years ago
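The arithmetic in the comment above can be made concrete. This is a minimal sketch, assuming S3's documented limits of 10,000 parts per multipart upload and a 5 MiB minimum part size (for all parts except the last); the function name is ours, not from any entry.

```go
package main

import "fmt"

const (
	maxParts    = 10000           // S3 hard limit on multipart part count
	minPartSize = 5 * 1024 * 1024 // S3 minimum part size (5 MiB), except the last part
)

// minChunkSize returns the smallest part size that fits totalSize
// within S3's 10,000-part limit, clamped to the 5 MiB minimum.
func minChunkSize(totalSize int64) int64 {
	size := (totalSize + maxParts - 1) / maxParts // ceiling division
	if size < minPartSize {
		size = minPartSize
	}
	return size
}

func main() {
	// A 1 TiB input needs parts of at least ~105 MiB.
	fmt.Println(minChunkSize(1 << 40))
	// A small input is clamped to the 5 MiB floor.
	fmt.Println(minChunkSize(1 << 30))
}
```

So for very large inputs the chunk size must scale with the total size, which is presumably why the tool exposes an -expected_size flag.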

    Stream to S3 from stdin using concurrent, multipart uploading.
    Intended for use with sources that stream data fairly slowly (like RDS dumps), where obtaining the initial data is the dominant bottleneck. It is also useful for uploading large files as quickly as possible via concurrent multipart uploading.

    • 4 years ago
  • TwoHat
    Contest Holder
    • 4 years ago

    Contest closes tomorrow. Looking forward to all the great submissions.

    • 4 years ago
  • ankurs13
    • 4 years ago

    Is there any QPS expectation for this service (under what constraints)? Also, what should happen if the file corresponding to the message already exists in S3 (when the program starts)? Do we overwrite the file or append to it? Will there be multiple writers to the same log file? Do we need to handle that situation?

    • 4 years ago

