Security’s Waste Management: Is Ignoring Data the Key to Reducing Log Storage Costs?

Date Oct 28, 2019

Cloud log management vendors know that their costs are high, and as a result they cannot deliver the value and efficiency the cloud promises. The core problem is inefficient database technology and the high costs that come with it. Because building a cloud-native streaming database from the ground up is cost prohibitive, most vendors are left with two options: reduce how long logs are stored, or reduce what is stored.

Reducing how long data is stored can push a company out of compliance, so what about reducing what is stored? That approach clearly degrades threat detection, increases dwell time, and therefore dramatically increases overall business risk.

Security’s Waste Management

Log management is an essential part of security operations. When done right, it provides visibility into the business infrastructure, allowing validation of prevention controls, detection of issues, and the data needed to produce deep insight when issues must be resolved. A key cost of log management is directly related to how much data is stored each day and how long it is retained; log data is quickly becoming the single largest dataset companies are required to manage.

This need to store log data creates three categories for how log management companies handle pricing: reduce incoming data volume, reduce retention of stored data, or improve database efficiency. Fluency’s unique approach was to create an innovative, market-leading database, LavaDB, which significantly improved database efficiency and led to a substantial reduction in overall log storage cost. To address this efficiency need, Fluency spent six years developing a proprietary database to handle both the scaling and retention demands of log management. Fluency’s database not only ingests at petabyte scale but has been tested at 12 million events per second (EPS). Its search speeds have tested faster than Vertica or Elastic. It dynamically and efficiently uses resources only when needed, taking advantage of less expensive storage until someone starts a search.

The most common approach to reducing cost is to shorten log retention. The immediate issue is compliance: PCI DSS requires 90 days of hot storage and one year of total storage, NYCRR requires three years of total log retention, and HIPAA requires six years. The PCI DSS retention length is considered a low bar in the compliance world, yet most log management companies provide only 14 to 30 days of storage. The greater issue is that 30 days is simply not enough. Dwell time, the period between the occurrence of an attack and its recognition, averages 197 days to detection and another 69 days to containment according to IBM, and those figures have been consistent for over three years. This means investigating a possible breach will require logs outside the search window. Between compliance and investigative needs, reducing the search window is not an option for effective security management and risk mitigation.
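To make the retention arithmetic concrete, here is a minimal sketch that checks whether a retention window would still hold the earliest logs of an average breach at containment time. The figures come from the text above; the function and dictionary names are illustrative, not from any product:

```python
# Illustrative comparison of log retention windows against breach dwell time.
# Figures are those cited in the text (PCI DSS, NYCRR, HIPAA, IBM study).

RETENTION_REQUIREMENTS_DAYS = {
    "PCI DSS (total)": 365,   # plus 90 days of "hot" storage
    "NYCRR 500": 3 * 365,
    "HIPAA": 6 * 365,
}

# 197 days to detection + 69 days to containment (IBM figures cited above).
DWELL_TIME_DAYS = 197 + 69

def covers_dwell_time(retention_days: int) -> bool:
    """True if a retention window is long enough to still hold the earliest
    logs of an average breach when it is finally contained."""
    return retention_days >= DWELL_TIME_DAYS

for window in (14, 30, 90, 365):
    print(f"{window:>4}-day retention covers average dwell time: "
          f"{covers_dwell_time(window)}")
```

Only the one-year window clears the 266-day average; the 14- and 30-day windows typical of cost-cutting vendors do not come close.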

Reducing what is collected is the last option, and it seems incredibly logical: the majority of data collected will never be used. In one respect, log data can be compared to ordinary trash. The odds that any particular piece of trash you threw away will be needed are close to zero, so the worth of each individual piece of trash is also very low. It seems sensible to keep only the pieces we might want someday, and here lies the issue: how do you know what is worthless and what isn’t, especially when the pieces don’t seem related to each other?

Let’s look at the most common logs that are not collected or saved and consider the potential impact. For the last three years, Fluency has been installed and running side-by-side with another vendor’s professionally tuned SIEM. This Gartner Magic Quadrant SIEM has built-in flow collection that is retained for seven (7) days. It reads almost all the same feeds as Fluency, but it does not keep application-level exchange data beyond the first 256 bytes, and it stores that only in hex format. Compared to other reduction approaches, its choices seem arbitrary, yet as we point out, it still fails to meet comprehensive analysis needs.

Flow Data

Flow data remains the favorite chopping block, a holdover from the bygone era of SIEM tools. Most SIEM tools do not handle flow data at all, and those that do keep mostly the NetFlow records and a little host information. The application-level exchange above it is not normally kept.

We can clearly state that flow data is likely the most important data in determining impact – especially when correlated and fused with other data that might seem like trivial, unrelated trash.

OpenDNS Kicks In

Fluency is a fan of OpenDNS, now Cisco Umbrella. It works great, but it is not perfect. A couple of months back, a spear-phishing attack (a targeted phishing attack) hit our company. For the first 24 hours of the attack, OpenDNS did not see the domain as malicious; then, all of a sudden, it did. The security team saw the OpenDNS block and assumed it had kicked in on time. By the time the breach was reported, the relevant traffic was past the SIEM’s network storage window.

Enter Fluency. Using the host name from OpenDNS and Fluency’s patented fusion engines, Fluency produced a list of users and systems that had exchanged data with the malicious site before the OpenDNS block. The search took seconds. Fluency fuses firewall user data, DHCP data, and LDAP user data with network flow data without having to be programmed. The immutable flow data was the only definitive record of the attack. Because companies often have a prevention mindset, their log data will not contain alerts for possible attacks; for many attacks that are not prevented, the only record is in the flow logs and system logs, outside what is usually thought of as security logs.
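As a rough illustration of the kind of fusion described here, the sketch below joins flow records with DHCP lease data to list which hosts reached a domain before a reputation block took effect. All field names, sample data, and the `hosts_contacting` helper are hypothetical; this is not Fluency’s actual schema or engine:

```python
# Hypothetical sketch: join flow records with DHCP leases to find the
# hosts that exchanged data with a malicious domain before it was blocked.
from datetime import datetime

dhcp_leases = [  # IP-to-hostname mappings recovered from DHCP logs
    {"ip": "10.0.0.5", "host": "alice-laptop"},
    {"ip": "10.0.0.9", "host": "bob-desktop"},
]

flows = [  # flow records carrying the server name from the application layer
    {"src_ip": "10.0.0.5", "domain": "evil.example",   "ts": datetime(2019, 9, 1, 10, 0)},
    {"src_ip": "10.0.0.9", "domain": "benign.example", "ts": datetime(2019, 9, 1, 11, 0)},
    {"src_ip": "10.0.0.9", "domain": "evil.example",   "ts": datetime(2019, 9, 2, 9, 0)},
]

block_time = datetime(2019, 9, 2, 0, 0)  # when the reputation block began

def hosts_contacting(domain: str) -> list:
    """Return hostnames that exchanged data with `domain` before the block."""
    ip_to_host = {lease["ip"]: lease["host"] for lease in dhcp_leases}
    return sorted({ip_to_host.get(f["src_ip"], f["src_ip"])
                   for f in flows
                   if f["domain"] == domain and f["ts"] < block_time})

print(hosts_contacting("evil.example"))  # -> ['alice-laptop']
```

The point of the example is the join itself: neither the DHCP log nor the flow log alone answers the question "who talked to this site before it was blocked?" — only the fused view does.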

Reputation Scare

Reputation is an important aspect of modern prevention and detection systems. In this next case, a reputation company posted an alert for a series of sites associated with remote system communication. The known issue was a piece of malware categorized as a remote administration tool (RAT) calling out to an external service. That external service then allowed attackers to connect and tunnel into the infected machine, controlling it remotely.

The security team started receiving notifications that a known malicious site was being connected to from inside their network. The number of systems connecting to this site was growing at a fast rate, but the only sign of compromise was what appeared to be a beaconing callback. Updating OpenDNS with the outgoing sites blocked the outgoing connections, but the number of systems trying to connect continued to increase. The legacy commercial SIEM continued to alert.

Fluency’s RiskScore did not see any supporting anomalies that would validate the alert. We were asked to review the data, which we often do for our customers at no cost when there appears to be an issue.

Here, Fluency was capturing the full URL data (web messaging). It was clear that the data exchange was related to advertising: the UUIDs used in ad tracking were clearly being exchanged, and the ad itself could be located. What had happened was that a company placed an ad on a site. The ad was posted on a developer’s system that used a remote-access feature to make the ad available on the Internet. This is normal for testing an ad, but not for production. So the ad was being served from a system that allowed tunneling of remote connections, and that site had the bad reputation.

Here, the needed log data was simple application-level metadata. Most SIEM providers treat this invaluable data as even more useless than flow data, but as this example shows, it is critical for investigating web traffic. Log data considered unimportant (trash) by one tool can clearly have value in the world Fluency operates in.


The first two examples showed why detail matters in logs. Logs often have fields that change very little and seem redundant; one such field is the SSL/TLS version field. In this case, the need was to determine which systems were communicating over SSL/TLS and whether clients and servers negotiated up to TLS 1.2. The company was preparing for TLS 1.3 and validating current operational use.

Using the SIEM, the analyst searched for port 443 (HTTPS) traffic and then manually read the certificate from the network application layer in hex. This manual process was labor intensive, and it still covered only a sample of the data.

Fluency parses and stores application-level data, such as SSL certificate data, in JSON format. The search took seconds to identify all web services inside the company. Then, rotating through each system, a list of the highest and lowest negotiated versions was built. Finally, a list of misconfigured servers and outdated user systems was produced.

In this case the data was there, just not parsed and searchable. Most SIEMs do not even have that type of data to perform this level of analytics.
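To illustrate why parsed, searchable JSON matters here, the sketch below runs the same kind of TLS-version audit over a handful of hypothetical parsed handshake records. The field names and the version-ranking table are assumptions for illustration, not Fluency’s actual schema:

```python
# Hypothetical sketch of a TLS-version audit, assuming handshake metadata
# has already been parsed into JSON records (illustrative field names).
import json

raw_lines = [
    '{"server": "app1.corp",   "client": "alice-laptop", "tls_version": "TLSv1.2"}',
    '{"server": "app1.corp",   "client": "bob-desktop",  "tls_version": "TLSv1.0"}',
    '{"server": "legacy.corp", "client": "alice-laptop", "tls_version": "TLSv1.0"}',
]
records = [json.loads(line) for line in raw_lines]

# Explicit ordering of protocol versions, oldest to newest.
VERSION_RANK = {"TLSv1.0": 0, "TLSv1.1": 1, "TLSv1.2": 2, "TLSv1.3": 3}
TARGET = "TLSv1.2"

# Group the negotiated versions observed per server.
by_server = {}
for r in records:
    by_server.setdefault(r["server"], set()).add(r["tls_version"])

# A server that ever negotiated below the target is flagged for review.
misconfigured = sorted(
    server for server, versions in by_server.items()
    if min(VERSION_RANK[v] for v in versions) < VERSION_RANK[TARGET]
)
print(misconfigured)  # -> ['app1.corp', 'legacy.corp']
```

With the metadata in hex blobs, this query means reading certificates by hand; with it parsed into fields, it is a one-pass group-and-filter.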


I finished this blog after returning from the 2019 Wild West Hackin’ Fest in Deadwood, South Dakota. Jonathan Ham presented a threat-hunt scenario in which he was given incoming traffic from an enterprise and needed to find an infected system beaconing out. This is hard work, as evidenced by Jonathan making light of the hours of digging through traffic data needed to complete the analysis. At the end, he showed a failed attempt to find the channel in the DNS data and a successful one in the SSL certificates. The key is that both of these data types are what log companies often dismiss as useless, the trash we noted above. The truth is that there is no such thing as useless data. You will not know what data you need until you start trying to use the data you have. This means that ground truth, the ability to collect and relate everything, is the only way to collect data. It is this ground-truth approach that delivers the deep insight Fluency provides and that clients need now and in the future.

While it is true that most of the data collected will never be used, it is also true that most of the data you will need falls into that category. Log management is like waste management: there is value in all that trash, you just shouldn’t be hauling it around in a high-end automobile. Waste management needs a dump truck, not a luxury car. Fluency is simply the fastest dump truck ever made.

About Fluency

Fluency’s objective is to advance technology and change the economics of the log analytics industry. Fluency doesn’t just want to lower cost; it wants to lower cost to the point where it is practical to ingest and analyze everything. We aim for this because we know that an incomplete audit trail is among the most common findings in corporate breaches. The inability to see is the inability to implement security. Fluency’s customers routinely praise Fluency for making data useful while offering simple, cost-effective pricing that includes by-default 90-day and 365-day storage options. Founded in 2013 by former McAfee threat intelligence executives, Fluency is headquartered in Greenbelt, Maryland.