
Comparison of NATS, RabbitMQ, NSQ, and Kafka

  • By Gcore
  • June 20, 2024
  • 11 min read

Messaging platforms handle asynchronous message management and process large amounts of data with high throughput, greatly improving data delivery reliability. However, these platforms come in two different types, message queuing (point-to-point) and streaming (pub-sub), which differ in who receives a message and how it is handled. With so many messaging platforms popping up, choosing one can be a challenge.

In this roundup, you’ll learn the difference between the two types and compare four leading options: NATS, RabbitMQ, NSQ, and Kafka.

Messaging Platforms: What Are They?

Messaging platforms provide a way for microservices to communicate and exchange data reliably and efficiently. Reliable communication between services is crucial in a microservice architecture, where each service is responsible for managing one specific function.

First, let’s look at the two main types of messaging platforms.

Message Queuing

Message queuing, also known as point-to-point communication, involves sending messages to a queue, where they are stored until the intended recipient processes them. The sender and receiver do not need to be connected at the same time, facilitating asynchronous communication. Once a message has been delivered and processed, it is generally deleted from the queue.

Generally, queuing is appropriate for use cases where there’s no requirement for real-time delivery or processing, such as batch processing, order fulfillment, or offline data processing.

Streaming

Streaming, or the publish-subscribe (pub-sub) model, routes messages to multiple queues, where they can be consumed by multiple consumers that perform various operations based on the content of the messages. Message creators are called publishers or producers, message recipients are called subscribers or consumers, and messages are stored in log files or topics.

Consumers can subscribe to a log file or topic to receive a continuous stream of messages stored within. These messages can be accessed in real time or from any point in history, unlike queues, where messages are removed. Streaming is commonly used in real-time applications such as IoT and stock market order processing.
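To make the distinction concrete, here is a minimal sketch in Go using plain channels rather than any real broker: the first half shows point-to-point semantics (each message goes to exactly one competing worker), and the second half shows pub-sub fan-out (every subscriber gets its own copy). All names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Point-to-point: one queue, two competing workers.
	// Each message is consumed by exactly one worker.
	queue := make(chan string)
	for w := 1; w <= 2; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for msg := range queue {
				fmt.Printf("worker %d processed %q\n", id, msg)
			}
		}(w)
	}
	for _, m := range []string{"order-1", "order-2", "order-3"} {
		queue <- m
	}
	close(queue)
	wg.Wait()

	// Pub-sub: each subscriber owns a channel and receives a copy
	// of every published message.
	subs := []chan string{make(chan string, 2), make(chan string, 2)}
	for i, s := range subs {
		wg.Add(1)
		go func(id int, ch chan string) {
			defer wg.Done()
			for msg := range ch {
				fmt.Printf("subscriber %d received %q\n", id, msg)
			}
		}(i+1, s)
	}
	for _, m := range []string{"tick-1", "tick-2"} {
		for _, s := range subs {
			s <- m // fan out: every subscriber gets a copy
		}
	}
	for _, s := range subs {
		close(s)
	}
	wg.Wait()
}
```

Real messaging platforms layer durability, acknowledgments, and network transport on top of these basic semantics, which is exactly where the four platforms below differ.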

How Are These Messaging Platforms Compared?

This article compares four major messaging platforms while focusing on the following five characteristics:

  • Message queuing model: How are messages passed between two parties? Is it via a stream or a queue?
  • Delivery guarantee: Are messages always delivered at least once, or is this not always the case?
  • Ordering guarantee: Are messages delivered in the order they were sent?
  • Throughput and latency: How many messages can the platform handle, and how fast is the communication? Keep in mind that all these systems can scale to handle increased throughput, and that results will vary based on your system configuration.
  • Persistence and replayability: Does the platform store messages and allow for reprocessing if they were missed the first time?

These factors will determine how the platform can be used in your workflow and whether it’s suitable for your use case.

NATS

NATS, or Neural Autonomic Transport System, is a cloud-native messaging platform that was first released in 2011. It was originally written in Ruby but has since been rewritten in Go to support increased throughput. With a minimal footprint of under 10 MB, it’s a lightweight solution with a very straightforward setup process. It’s suitable for edge deployments and can be deployed anywhere, including in VMs and containers on a device.

Message Queuing Model

NATS follows a pub-sub model with extended support for models like request-reply and wildcard subscriptions.
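As an illustration, here is a minimal sketch using the official Go client (github.com/nats-io/nats.go). It assumes a nats-server running locally on the default port, and the subject names are made up for the example:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// nats.DefaultURL is nats://127.0.0.1:4222.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Plain pub-sub with a wildcard subscription: receives
	// telemetry.cpu, telemetry.mem, and so on.
	nc.Subscribe("telemetry.*", func(m *nats.Msg) {
		fmt.Printf("got %s: %s\n", m.Subject, m.Data)
	})
	nc.Publish("telemetry.cpu", []byte("42%"))

	// Request-reply: a responder answers on a subject, and Request
	// blocks until a reply arrives or the timeout fires.
	nc.Subscribe("time.now", func(m *nats.Msg) {
		m.Respond([]byte(time.Now().Format(time.RFC3339)))
	})
	reply, err := nc.Request("time.now", nil, time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server time:", string(reply.Data))

	time.Sleep(100 * time.Millisecond) // let the async handler print
}
```

Request-reply is how NATS layers synchronous interactions on top of its pub-sub core: the requester publishes to a subject and waits on an automatically generated reply subject.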

Delivery Assurance

The messaging platform offers at-most-once delivery, which makes it suitable for real-time, high-speed communication between microservices or other applications but not for long-term event storage. The at-most-once guarantee means each message is delivered no more than once: if delivery fails, the message is lost rather than redelivered.

This model is suitable for use cases where the loss of a few messages is acceptable as long as the overall system continues to operate correctly. It works particularly well in use cases that prioritize real-time communication over long-term data storage and persistence. For example, in a game like World of Warcraft, which has over 2 million players daily, it’s important that players know where they are in the world, but not necessarily where they were a few seconds ago. NATS is a good fit for this kind of per-second data volume, so it’s unsurprising that the game’s developer, Activision Blizzard, uses it.

Ordering Assurance

NATS supports ordering assurance. Messages published over a single connection are received by the server and delivered to all active subscriptions in the same order. With concurrent publishers, the order across those publishers is not guaranteed, but end-to-end ordering through a single connection is. A subsystem called JetStream adds a stronger delivery guarantee, specifically at-least-once.

JetStream stores messages received by publishers in an immutable order. As a result, these messages will be consumed and replayed in their original order.

Throughput and Latency

NATS is known for its high performance, low latency, and emphasis on simplicity. Its rewrite in Go increased throughput compared to the original Ruby implementation and makes NATS an ideal choice for demanding, real-time applications.

NATS has been shown in many benchmarks to be faster and more efficient than other messaging systems, with the ability to manage 3 million messages per second.

Persistence and Replayability

NATS’s persistence engine, JetStream, provides persistence and replayability through two primitives called streams and consumers. Messages published to subjects bound to a stream are persisted and support a variety of retention models. One or more consumers can be created to manage the delivery of messages to clients when they are connected. Consumers support several acknowledgment and redelivery policies to accommodate a wide variety of use cases. Together, these provide an at-least-once delivery guarantee and enable an exactly-once processing model on top of it, similar to Kafka’s.
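The sketch below, again using the nats.go client against a local server, shows the idea: a stream bound to a subject hierarchy, an acknowledged publish, and a durable consumer that replays from the beginning with explicit acks. Stream, subject, and consumer names are invented for the example:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// A stream persists every message published to its bound subjects.
	js.AddStream(&nats.StreamConfig{
		Name:     "EVENTS",
		Subjects: []string{"events.>"},
	})

	// JetStream publishes are acknowledged by the server,
	// giving at-least-once delivery.
	js.Publish("events.signup", []byte(`{"user":"u-123"}`))

	// A durable consumer replays the stream from the beginning and
	// requires explicit acks; unacked messages are redelivered.
	js.Subscribe("events.>", func(m *nats.Msg) {
		fmt.Printf("replayed: %s\n", m.Data)
		m.Ack()
	}, nats.Durable("processor"), nats.DeliverAll(), nats.ManualAck())

	time.Sleep(time.Second) // give the async handler time to run
}
```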

Limitations

Even though NATS supports batch publishing, it doesn’t support atomic batch publishing. This means messages published in the same batch are not guaranteed to be treated atomically and succeed or fail in an all-or-nothing fashion. Core NATS, with no persistence, can achieve high throughput in batch publishing, but with the persistence layer (a replicated, file-backed stream), throughput caps out at around 200,000 messages per second.

For use cases that require massive throughput and ingestion of, for example, clickstream data or events, a Kafka cluster can currently still scale better than a NATS cluster. But this is what Kafka was originally designed for, whereas NATS has evolved to address a richer set of use cases beyond lightweight messaging. Considering NATS wasn’t created with huge throughput capacity in mind, its ability to handle substantial loads is decent and continues to improve.

RabbitMQ

RabbitMQ is a popular open-source messaging broker written in Erlang that has been available since 2007. It implements the AMQP 0-9-1 model, a technology-agnostic protocol that supports interoperability between different systems.

It operates on a push-based message delivery system, where the broker sends messages to consumers rather than requiring the consumers to actively retrieve them. The platform also offers a rich user interface with fine-grained admin and monitoring capabilities.

Message Queuing Model

RabbitMQ is a unique messaging platform with an architecture that supports both point-to-point and pub-sub messaging patterns. It operates through the use of exchanges, to which producers publish messages. Consumers subscribe to the corresponding queues to receive the messages.

This architecture provides flexibility in terms of message delivery, allowing producers and consumers to use the pattern that best fits their needs. However, the main objective is for it to act as a pub-sub system.
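A minimal producer-side sketch with the maintained Go client (github.com/rabbitmq/amqp091-go) might look like the following; the broker URL, exchange, queue, and routing keys are placeholders. It declares a topic exchange, binds a durable queue to it, and publishes a persistent message:

```go
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Default credentials on a local broker; adjust for your setup.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Producers publish to an exchange; a topic exchange routes
	// on the routing key pattern.
	ch.ExchangeDeclare("orders", "topic", true, false, false, false, nil)

	// A durable queue bound to the exchange receives matching messages.
	q, _ := ch.QueueDeclare("orders.eu", true, false, false, false, nil)
	ch.QueueBind(q.Name, "order.eu.*", "orders", false, nil)

	// DeliveryMode: amqp.Persistent asks the broker to write the message
	// to disk so it survives a restart (the queue must be durable too).
	err = ch.PublishWithContext(context.Background(), "orders", "order.eu.created",
		false, false, amqp.Publishing{
			ContentType:  "text/plain",
			DeliveryMode: amqp.Persistent,
			Body:         []byte("order 42"),
		})
	if err != nil {
		log.Fatal(err)
	}
}
```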

Delivery Guarantee

RabbitMQ does not guarantee exactly-once delivery by default. Achieving exactly-once delivery is a complex task because it requires sending an acknowledgment to remove a message from the queue. If the processing system encounters an issue after receiving the message but before sending the acknowledgment, the message may be processed again, potentially resulting in duplicate processing.

However, RabbitMQ provides several mechanisms for delivering messages reliably, such as transactions and publisher confirms.

At-least-once delivery is supported, which gives RabbitMQ dependable delivery guarantees. One of its key strengths is support for complex message routing, where messages can be routed based on rules and conditions, making it a very flexible and extensible platform.
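Publisher confirms are one such mechanism. Continuing the sketch above (same imports; the channel and exchange come from the earlier block), the channel is put into confirm mode and the broker reports back whether each publish was handled, so the producer can republish on failure:

```go
// publishConfirmed reuses a channel from the sketch above, puts it into
// confirm mode, and waits for the broker's verdict before returning.
func publishConfirmed(ch *amqp.Channel, body []byte) error {
	if err := ch.Confirm(false); err != nil {
		return err
	}
	confirms := ch.NotifyPublish(make(chan amqp.Confirmation, 1))

	err := ch.PublishWithContext(context.Background(), "orders", "order.eu.created",
		false, false, amqp.Publishing{Body: body})
	if err != nil {
		return err
	}

	// An ack means the broker has taken responsibility for the message;
	// a nack means it was lost and should be republished, which is what
	// gives at-least-once behavior from the producer's side.
	if c := <-confirms; !c.Ack {
		log.Println("publish nacked; republish for at-least-once delivery")
	}
	return nil
}
```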

Ordering Guarantee

In RabbitMQ, messages are stored in queues and can be consumed by multiple consumers, and the consumption order is guaranteed. As mentioned above, RabbitMQ requires an acknowledgment before removing a message from the queue, so if a consumer fails, the message returns to the head of the queue for the next consumer to process.

In cases where you want certain consumers to only process certain types of messages, an exchange can be configured with specific filters. Your consumer can then connect to this exchange to ensure that certain message types are always processed in a specified order.
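On the consuming side, at-least-once behavior hinges on manual acknowledgments. Here is a sketch continuing the example above, draining the queue declared earlier; process is a hypothetical handler supplied by the application:

```go
// consumeWithAcks processes messages one by one, acking only on success.
func consumeWithAcks(ch *amqp.Channel, process func([]byte) error) error {
	// autoAck=false: the broker keeps each message until it is explicitly
	// acknowledged, so a crashed consumer's messages are redelivered.
	msgs, err := ch.Consume("orders.eu", "", false, false, false, false, nil)
	if err != nil {
		return err
	}
	for d := range msgs {
		if err := process(d.Body); err != nil {
			d.Nack(false, true) // requeue so another consumer can retry
			continue
		}
		d.Ack(false) // remove the message from the queue
	}
	return nil
}
```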

Throughput and Latency

RabbitMQ offers strong guarantees for message delivery and persistence. When messages are produced, they are confirmed only after they have been replicated to a majority of nodes and written to disk with fsync. This provides strong guarantees but can result in slower performance: RabbitMQ can process upwards of 60,000 messages per second, with higher latency than its counterparts.

On the other hand, when messages are consumed immediately, reads are served from memory before the messages reach disk, which improves performance.

Persistence and Replayability

RabbitMQ follows a standard store-and-forward pattern, allowing messages to be stored in RAM, on disk, or both. To ensure persistence, the producer can mark messages as persistent and publish them to a durable queue, so they are written to disk. This achieves message retention even after a restart or failure of the RabbitMQ server.

However, the system doesn’t allow replayability of messages, as they are removed as soon as they are acknowledged.

Limitations

RabbitMQ faces difficulties in scaling horizontally, as it’s unable to resolve conflicts in queues arising from split-brain scenarios, which can occur during network failures and high-load situations and cause inconsistencies between the split groups. Vertical scaling is available as an alternative, but it involves preplanning for capacity, which is not always feasible.

NSQ

NSQ is a popular messaging platform and the successor to Bitly’s simplequeue, valued for its ease of use and efficient handling of high-volume, real-time data streams.

NSQ consists of two main components: nsqd and nsqlookupd. The nsqd (NSQ daemon) is responsible for accepting, storing, and dispatching messages, while nsqlookupd manages topology information and helps to maintain network connections. Additionally, NSQ includes an administrative user interface called nsqadmin for web-based data visualization and tasks.

Message Queuing Model

NSQ consists of topics, each of which contains channels. Every channel receives a duplicate of the messages for a specific topic, making it a pub-sub model for message delivery.
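Here is a minimal sketch with the official Go client (github.com/nsqio/go-nsq), assuming nsqd and nsqlookupd running locally on their default ports; topic and channel names are invented. Each additional channel (say, "billing" alongside "analytics") would receive its own copy of every message:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nsqio/go-nsq"
)

func main() {
	cfg := nsq.NewConfig()

	// A consumer binds a topic to a named channel; every channel gets
	// a duplicate of each message published to the topic.
	consumer, err := nsq.NewConsumer("clicks", "analytics", cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Handlers must be added before connecting.
	consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
		fmt.Printf("analytics got: %s\n", m.Body)
		return nil // nil sends FIN; returning an error requeues the message
	}))
	// Discover nsqd nodes through nsqlookupd.
	if err := consumer.ConnectToNSQLookupd("127.0.0.1:4161"); err != nil {
		log.Fatal(err)
	}

	// Producers publish directly to an nsqd instance.
	producer, err := nsq.NewProducer("127.0.0.1:4150", cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := producer.Publish("clicks", []byte("page=/pricing")); err != nil {
		log.Fatal(err)
	}

	time.Sleep(time.Second) // let the handler fire before exiting
}
```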

Delivery Guarantee

NSQ is designed with a distributed architecture around the concept of topics, which allows messages to be organized and distributed across the cluster. To ensure reliable delivery, nsqd holds each message until the consumer explicitly finishes it; if no acknowledgment arrives within the timeout, the message is requeued and delivered again. This gives NSQ an at-least-once guarantee: even if a consumer fails or there’s a disruption in the network, the message will eventually reach a recipient.

Ordering Guarantee

NSQ provides a message delivery guarantee; however, it does not guarantee the order of messages that are published to a topic and channel. This means that the order in which messages are received by a consumer may not match the order in which they were published.

The absence of a strict message ordering requirement in NSQ reduces the necessity of establishing interconnections between all instances of nsqd to synchronize and sort messages, leading to improved performance.

Throughput and Latency

NSQ is a high-performance messaging platform that has been used by companies like Twilio to process up to 800,000 messages per second with low latency. While it is a popular choice for many use cases, its benchmarks are relatively low compared to the highest-throughput options, so it might not be the best fit where maximum throughput is the top requirement.

However, depending on your requirements, if a feature-rich built-in dashboard combined with that level of throughput is what you need, NSQ can be quite a good choice.

Persistence and Replayability

NSQ has limitations in terms of persistence: it only writes messages to disk when memory runs low or when messages are archived by consumers. In the event of node failure, messages may be lost, as NSQ deletes them immediately upon receiving the finish signal from the consumer, and there is no way to recover them.

Archiving messages is possible using the built-in nsq_to_file utility, but it does not provide any built-in replay functionality.

Limitations

NSQ lacks replication and clustering capabilities, which means it cannot keep multiple copies of data across different nodes in a network. Moreover, it uses a heartbeat mechanism to detect whether consumers are alive or dead, and because timed-out messages are redelivered, consumers must be prepared to handle duplicates; idempotent processing is not ensured.

Kafka

Kafka is one of the leading open-source streaming platforms. It’s governed by the Apache Software Foundation and written in Scala and Java. Created inside LinkedIn as a solution for handling real-time data streams and processing them in near-real time, it saw its first public release in January 2011.

Message Queuing Model

The platform is built around a partitioned, distributed commit log and follows the publish-subscribe model. Records are stored in topics with support for multiple subscribers, allowing a single topic to be consumed by zero or many subscribers.

Delivery Guarantee

Kafka allows consumers to subscribe to topics and receive records from their partitions. It provides different delivery guarantees for messages, including at-most-once, at-least-once, and exactly-once. The first two are standard across the industry, but exactly-once is a more advanced guarantee achieved through the idempotent producer and the transactions API.

Exactly-once delivery provides a higher level of reliability, but it requires a more complex setup and careful management to ensure messages are not processed multiple times in the event of transaction failures. On the consumer side, setting isolation.level to read_committed ensures that only records from committed transactions are read.
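As a sketch of what this looks like in Go, here is an outline using the confluent-kafka-go client (which wraps librdkafka); the broker address, topic, and IDs are placeholders, and error handling is abbreviated:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/confluentinc/confluent-kafka-go/v2/kafka"
)

func main() {
	ctx := context.Background()
	topic := "payments"

	// Setting transactional.id enables the idempotent producer
	// and the transactions API.
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"transactional.id":  "payment-processor-1",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	if err := p.InitTransactions(ctx); err != nil {
		log.Fatal(err)
	}
	if err := p.BeginTransaction(); err != nil {
		log.Fatal(err)
	}
	p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte(`{"amount": 100}`),
	}, nil)
	// Records become visible to read_committed consumers only on commit.
	if err := p.CommitTransaction(ctx); err != nil {
		log.Fatal(err)
	}

	// read_committed hides records from aborted or in-flight transactions.
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "billing",
		"isolation.level":   "read_committed",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	c.SubscribeTopics([]string{topic}, nil)
	if msg, err := c.ReadMessage(10 * time.Second); err == nil {
		log.Printf("committed record: %s", msg.Value)
	}
}
```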

Ordering Guarantee

Kafka is highly recommended for applications where maintaining order is a critical requirement, for example, in log aggregation, complex event processing (CEP), and staged event-driven architecture (SEDA).

In these use cases, Kafka provides an ordering guarantee for messages within a partition, ensuring that messages are delivered to consumers in the same order they were produced. This is achieved by maintaining an ordered, append-only sequence of records within each partition.
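In practice, producers control this by keying messages: all records with the same key hash to the same partition and are therefore consumed in produce order. A sketch with the segmentio/kafka-go client, where the broker address and topic are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// The Hash balancer routes each message by its key, so all events
	// for user-42 land on the same partition and keep their order.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "user-events",
		Balancer: &kafka.Hash{},
	}
	defer w.Close()

	err := w.WriteMessages(context.Background(),
		kafka.Message{Key: []byte("user-42"), Value: []byte("login")},
		kafka.Message{Key: []byte("user-42"), Value: []byte("add-to-cart")},
		kafka.Message{Key: []byte("user-42"), Value: []byte("checkout")},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```

Note that ordering holds only within a partition; records with different keys may be interleaved across partitions.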

Throughput and Latency

A Kafka topic is designed to support multiple partitions, which enables concurrency and, in turn, increases the throughput of the platform. The partitions inside the topic store messages on disk, and consumers attached to the topic read them in chronological order. Because Kafka leans on the much faster OS page cache for I/O, consumers that read messages soon after they are produced are served from memory rather than disk, yielding a significant increase in throughput.

The Kafka project doesn’t publish an official throughput figure, though some tests indicate it can handle at least 350,000 messages per second. It is, however, highly scalable, and a recent podcast by Confluent describes one company handling 2 million messages per second.

Persistence and Replayability

Kafka supports persistence and replayability out of the box. The data persistence is maintained by replicating messages across multiple nodes in the cluster, which provides high availability and fault tolerance. The retention period for messages can be configured, and messages within the retention period can be replayed, making it useful for scenarios such as debugging or data recovery. For example, if the period is set for a week, replay scenarios should work for any messages up to a week old.
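A sketch of replay with the segmentio/kafka-go client: reading a single partition outside a consumer group allows seeking back to the first retained offset (broker and topic are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Without a GroupID, the reader is bound to one partition and can
	// seek anywhere within the retained log.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers:   []string{"localhost:9092"},
		Topic:     "user-events",
		Partition: 0,
	})
	defer r.Close()

	// Rewind to the earliest retained record and replay forward.
	if err := r.SetOffset(kafka.FirstOffset); err != nil {
		log.Fatal(err)
	}
	for i := 0; i < 3; i++ {
		msg, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("offset %d: %s\n", msg.Offset, msg.Value)
	}
}
```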

Message durability can be increased by raising a topic’s replication factor, requiring producer acknowledgments from all in-sync replicas (acks=all), and configuring min.insync.replicas, which together ensure that messages are successfully written to multiple brokers before being confirmed.

Kafka’s durability and persistence, including its ability to retain data even in case of node failures, make it a dependable platform and a popular choice for storing and processing real-time data.

Limitations

One of the biggest drawbacks of Apache Kafka is the very architecture that makes it so efficient. The combination of brokers and ZooKeeper nodes, along with numerous configurable options, can make it difficult for new teams to set up and manage without encountering performance issues or data loss. However, since version 3.3.1, Kafka can run without ZooKeeper by using KRaft, which simplifies operations and improves performance.

Additionally, Kafka’s architecture is not suitable for use with remote procedure calls (RPCs): a synchronous RPC in the middle of your pipeline slows processing while you wait for a response. Kafka is designed for fast (or at least evenly performant) consumers and is not optimized for scenarios where individual consumers have widely varying processing speeds.

These complexities can make it challenging for teams that are new to Kafka to get the most out of the platform, and you may require specialized expertise to set it up and manage it effectively.

Summary: A Comparison Table of NATS, RabbitMQ, NSQ, and Kafka

The table below summarizes how NATS, RabbitMQ, NSQ, and Kafka compare across the parameters discussed above:

|  | NATS | RabbitMQ | NSQ | Kafka |
| --- | --- | --- | --- | --- |
| Message Queuing Model | Pub-sub (with request-reply) | Point-to-point and pub-sub | Pub-sub | Pub-sub |
| Delivery Guarantee | At-most-once, at-least-once, exactly-once | At-least-once | At-least-once | At-most-once, at-least-once, exactly-once |
| Ordering Guarantee | Yes | Yes | No | Yes |
| Throughput | Up to 6 million messages per second | Up to 60,000 messages per second | Up to 800,000 messages per second | Up to 2 million messages per second |
| Persistence and Replayability | Yes | Persistent, but lacks replayability | No | Yes |
| Limitations | No atomic batch publish | Limited scalability, no replayability | Limited scalability and persistence, no replayability | Complex setup and management, not suitable for RPCs |

Final Thoughts

In this roundup, we explained what different messaging platforms have to offer and how they differ from one another. However, the most important factor in deciding your go-to platform should be your requirements and how experienced your infrastructure team is at maintaining the platform and adapting it for your infrastructure.

At the end of the day, the number one factor in determining the platform that best meets your needs is understanding your requirements. Start with what you want from the platform and work back to the things that are okay for you to sacrifice for your use case.

Kafka has emerged as an industry leader for event streaming use cases, so choosing it can be beneficial for you because of the ecosystem. However, if you are more interested in cloud-native technologies and want a simple solution, NATS can be quite useful. RabbitMQ is suitable for long-lasting tasks and integrating services with a very easy setup, and NSQ can be used for simplicity with some sacrifices on ordering.

Written by Hrittik Roy.
