
Efficient Big Data Stack



As long-time users of the big data stack, we all know that most big data software requires a lot of resources and is costly. Most people blame the JVM for this, and so many companies are now creating rewrites of open-source big data software.


Redpanda rewrote Kafka. ScyllaDB rewrote Cassandra. And the list goes on. However, if we think about it clearly, we also have great, efficient software written in Java and other managed languages. It might be a small subset, though, and it is usually found in financial markets (the LMAX Disruptor is a well-known example).


So it's not the JVM we should blame. It's developers like us who write the applications. We need to be more aware of the resources we consume when we write code and when we choose a platform or stack.


We need to have mechanical sympathy for what the hardware can give us, and to accept that there is no free lunch. Even as hardware keeps getting faster and cheaper, that is useless if we cannot utilize it better, or even make it work harder, because our code is inefficient.
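As a minimal sketch of what mechanical sympathy means on the JVM (the class and method names here are my own illustration, not from any particular big data project), the two loops below compute the same sum. The boxed version allocates an Integer object per element and chases pointers across the heap, while the primitive version stays contiguous, cache-friendly, and allocation-free:

```java
import java.util.ArrayList;
import java.util.List;

public class MechanicalSympathy {

    // Allocation-heavy: every element is a boxed Integer object on the heap,
    // so the loop chases pointers and produces garbage for the GC to collect.
    static long sumBoxed(List<Integer> values) {
        long sum = 0;
        for (Integer v : values) {
            sum += v; // auto-unboxing on every iteration
        }
        return sum;
    }

    // Allocation-free: a primitive array is one contiguous block of memory,
    // so the loop streams through the cache and creates no garbage at all.
    static long sumPrimitive(int[] values) {
        long sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        List<Integer> boxed = new ArrayList<>(n);
        int[] primitive = new int[n];
        for (int i = 0; i < n; i++) {
            boxed.add(i);
            primitive[i] = i;
        }
        System.out.println(sumBoxed(boxed));          // same answer,
        System.out.println(sumPrimitive(primitive)); // far less GC pressure
    }
}
```

On large inputs the primitive loop typically runs several times faster and generates no garbage at all. Differences like this, multiplied across a whole big data pipeline, are where the "JVM is slow" reputation often actually comes from.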


So what we should do is deepen our understanding of the software ecosystem and stack that lie beneath the code we write: understand the tools, and learn the best way to leverage their true power and potential.


We can save real cost this way, and all of it depends on us doing our job as software engineering leaders: educating the people around us, wherever we are working right now.


With great power comes great responsibility.


And you might consider porting to ARM processors, as we are doing here:


https://devpost.com/software/porting-big-data-platform-on-graviton2
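If you are wondering what such a port involves on the JVM, here is a minimal sketch (my own illustration, not code from the linked project). Java bytecode is architecture-neutral, so the real porting work is usually in native dependencies; the first step is simply knowing which architecture you are on:

```java
public class ArchCheck {
    public static void main(String[] args) {
        // The JVM reports the CPU architecture it was built for;
        // on AWS Graviton2 instances this is typically "aarch64".
        String arch = System.getProperty("os.arch");
        System.out.println("Running on: " + arch);

        // Pure-Java code runs unchanged. The porting checklist is mostly:
        // do all native dependencies (JNI libraries and bundled binaries,
        // e.g. rocksdb or zstd in many big data stacks) ship aarch64 builds?
        if ("aarch64".equals(arch)) {
            System.out.println("ARM64 detected: verify native dependencies have aarch64 builds.");
        }
    }
}
```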


Happy porting!
