Most common myths about Splunk - Damn it!!

After digging into the nitty gritty of the Cisco security portfolio, it's time to make our information-gathering tools and SIEM solution more efficient for incident response use cases. Let's jump into the world of Splunk and look at some of the lies you may have heard from Splunk newbies.
So after bugging the entire IT department (through friends and peers) and interrogating as many business teams as possible to grant you, the security guy, access to their data, you are finally in the process of developing your dream use cases. Lucky you! I was doing many of the things I'll reference below. Many of them work, but they aren't exactly the most efficient approach, and because they do work, they've grown into fake news over the years!
I hope this article will help you avoid some of the growing pains other Splunk folks have been through. If you do run into anything listed below, hopefully it will give you an idea of how to better approach your own environment. And if all else fails, please don't hesitate to reach out to us for Splunk professional services help.
Lie #1: Always Place Your Config Files in etc/system/local
Splunk's ability to scale is what attracts so many people to the product; it scales better than virtually anything else in the market. You're probably wondering what the system/local directory has to do with scalability, and believe it or not, it has a lot to do with it. When it comes to configuration precedence within Splunk, the system/local directory always wins. The reason this hurts scalability is that the local directory CANNOT be remotely managed.

For example, if you’ve ever manually installed a forwarder on a Windows server, you’ve probably noticed that the install instructions ask you to set the deployment client and forwarding server during the install.
These values are written to the etc/system/local directory during the install and cannot be changed remotely afterwards. Instead, do not enter any of these values when installing on a Windows server. Leave them blank and use the following approach to set the appropriate configurations.
The best approach is to create custom apps that contain your configuration files. If you manage your config files via apps, you can use your deployment server to remotely change configurations on hundreds or even thousands of forwarders in a matter of minutes. Say you just stood up a new indexer and now have to update outputs.conf on all of your Universal Forwarders: with this approach, your week-long project just became a 15-minute task.
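As a rough sketch of what that looks like (the app name, host names, and ports below are hypothetical, not a prescribed layout), a deployment-server-managed output app and the server class that pushes it might be:

    # $SPLUNK_HOME/etc/deployment-apps/org_all_forwarder_outputs/local/outputs.conf
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx01.example.com:9997, idx02.example.com:9997

    # serverclass.conf on the deployment server maps the app to its clients
    [serverClass:all_forwarders]
    whitelist.0 = *

    [serverClass:all_forwarders:app:org_all_forwarder_outputs]
    restartSplunkd = true

When a new indexer comes online, you edit this one app and reload the deployment server, and every forwarder in the server class picks up the change, rather than you touching etc/system/local on each host.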
Lie #2: The More Indexes, The Better
More often than not, you will see Splunk administrators who don't really plan out their indexes. In many cases, we will see indexes created on a per-sourcetype basis. Not only is this an overkill approach that makes management much more cumbersome, but it can also cause performance degradation and, in some extreme cases, even data loss. This is because in a clustered environment there are limits on how many buckets a cluster manager can manage.

You also have to consider that your data retention and role-based access are both index-based, which is where managing so many indexes becomes difficult and cumbersome. The ideal approach to planning your indexes should revolve around those two aspects. You should also consider these items when planning your indexes:
Data that is commonly searched together can more than likely be grouped together.
This approach can also ease your role-based access requirements, since your indexes will largely be grouped by the team that owns the data.
Organize your indexes by ownership group. For example, set your index to a broad, superseding term like "index=lan" (index names must be lowercase).
From there, group logs from your firewalls, switches, routers, etc. by their corresponding sourcetype under your "lan" index, as in the sketch after this list.
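As a rough sketch (the retention value and paths are illustrative, not a recommendation), that "lan" index might be defined in indexes.conf like this:

    [lan]
    homePath   = $SPLUNK_DB/lan/db
    coldPath   = $SPLUNK_DB/lan/colddb
    thawedPath = $SPLUNK_DB/lan/thaweddb
    # roughly 90 days of retention, owned by the network team
    frozenTimePeriodInSecs = 7776000

Role-based access then comes down to granting the network team that one index, and a search such as index=lan sourcetype=cisco:asa narrows straight down to the firewall data.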
The bottom line? Keep it simple. Take the time to logically plan out your indexes prior to your deployment. Once your data is indexed, there is no do-over so it is definitely worth the extra effort and attention.
Lie #3: Sending TCP/UDP Syslog Data Directly to Indexers
Later in this post we'll talk about why you should generally avoid an aggregate layer before indexing your data. Syslog is where we flip the script a bit: when it comes to syslog data, you do want an aggregate layer. Often, we will see customers who send syslog data either directly to the indexers or via a third-party load balancer like NetScaler.
This is a risky approach because it can hurt your load balancing. The obvious implication of sending directly to indexers is that there is no load balancing at all. With a third-party load balancer, the problem is that it may not switch often enough, or large streams of data can get stuck on one indexer. Essentially, Splunk knows how to break the data; a third-party load balancer does not, so it can switch early, before an entire event makes it to the indexer. In the Splunk world, you want to distribute your indexing as evenly as possible. Storage is expensive, so the more distribution, the better.
Also consider this: if you send your data directly to indexers and you have to restart one of them, the data sent to that box during the restart is lost. Between the potential data loss during restarts and the inability to filter noisy data, the direct approach just doesn't hold up. Instead, stand up a dedicated syslog server (such as syslog-ng); I can't stress enough how critical this is to Splunk. Then deploy a Universal Forwarder to that syslog server. The Universal Forwarder -> indexer flow of traffic is the ideal scenario whether you're working with syslog or reading files from a Windows server, a Linux server, or whatever else.
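A minimal sketch of that pattern, assuming syslog-ng writes one directory per sending host and a Universal Forwarder on the same box monitors it (the ports, paths, and index name are illustrative):

    # syslog-ng.conf: listen on 514 and write each sender's logs into its own directory
    source s_network {
        udp(ip(0.0.0.0) port(514));
        tcp(ip(0.0.0.0) port(514));
    };
    destination d_per_host {
        file("/var/log/remote/${HOST}/messages.log" create_dirs(yes));
    };
    log { source(s_network); destination(d_per_host); };

    # inputs.conf on the Universal Forwarder installed on the syslog server
    [monitor:///var/log/remote/*/messages.log]
    sourcetype = syslog
    index = lan
    host_segment = 4

The Universal Forwarder then handles the auto load balancing across your indexers, and if an indexer restarts, the forwarder simply retries while syslog-ng keeps writing to disk.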
Lie #4: If Splunk Parses My Data Correctly, There's No Need for a Props.conf
There isn't too much to say on this topic. When you are onboarding data, there are six settings you should ALWAYS set in your props.conf, regardless of whether Splunk is parsing the data correctly on its own: TIME_PREFIX, MAX_TIMESTAMP_LOOKAHEAD, TIME_FORMAT, SHOULD_LINEMERGE, LINE_BREAKER, and TRUNCATE.
The approach many people take is that if Splunk is parsing and breaking events correctly on its own, there is no need to set a props.conf. But Splunk relies heavily on accurate timestamps and accurate event breaking, so you always want to test your data before onboarding it into a production environment to make sure both are right.
Even if Splunk is getting these things right, you still want to set your props with the six settings mentioned above. The more you tell Splunk about where to look for these required items, the more accurately and efficiently your instance will run. It is worth mentioning that you should use TAs (technology add-ons) where available, because they will typically handle this for you. Most TAs contain a props and transforms that eliminate the bulk of the legwork, although this isn't guaranteed, so make sure you check the contents of your TAs.
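As an illustrative sketch (the sourcetype name and timestamp format are made up for the example, so adjust them to your data), a props.conf stanza covering all six settings might look like:

    [acme:firewall]
    # timestamp sits at the very start of each event, e.g. 2019-07-04 12:30:45
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 19
    # one event per line: skip line merging and break on newlines
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    # cap runaway events at 10,000 characters
    TRUNCATE = 10000

These settings take effect at the parsing tier (your indexers or a Heavy Forwarder), which is where the props.conf needs to live.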
Lie #5: A Heavy Forwarder Is More Effective Than a Universal Forwarder
We could honestly do an entire post on this concept alone, but the fact of the matter is that a Heavy Forwarder (HF) is very rarely more useful than a Universal Forwarder (UF). If you don't know the difference, a Heavy Forwarder is a full Splunk install with indexing turned off completely; its only function is to parse and forward data. A Universal Forwarder is along the same lines, but it is a much smaller package without the web UI that the Heavy Forwarder has. More often than not, we see people use a Heavy Forwarder as an intermediate forwarder, and this usually runs contrary to best practice. Unless it is syslog data, it is better to avoid an aggregate layer if possible, because a poorly designed aggregate layer creates a data funnel. If you do need an aggregate layer, make sure you opt for the Universal Forwarder.
In many cases, a Heavy Forwarder is actually much harder on network IO. This is because a Heavy Forwarder takes over part of the indexer's job and parses the data itself, and, contrary to popular belief, this does not reduce CPU utilization on the indexers. Because parsed, cooked data carries far more metadata than the raw stream a UF sends, the amount of data forwarded over the network was approximately SIX TIMES higher in our test when forwarding from a Heavy Forwarder instead of a Universal Forwarder. Do yourselves a favor, keep your network admin happy, and avoid using a Heavy Forwarder where you can.

*Table represents a test file of 367,463,625 being forwarded from a HF compared to a UF*
There are generally only two use cases where we would ever recommend a Heavy Forwarder over a Universal Forwarder: the most common is when you want to use a heavy add-on like DB Connect or OPSEC LEA, and the other is when you need to forward logs to a third party. The Universal Forwarder can do neither of those things; more specifically, these heavy add-ons typically make use of the Splunk Python binaries that don't exist in a UF package.
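For the third-party case, a hedged sketch of the relevant outputs.conf on a Heavy Forwarder (the destination host and port are hypothetical):

    # outputs.conf on a Heavy Forwarder: send events to an external system over syslog
    [syslog]
    defaultGroup = external_siem

    [syslog:external_siem]
    server = siem.example.com:514
    type = udp

Syslog output depends on the parsing pipeline, which a Universal Forwarder doesn't have, and that is exactly why this is one of the few legitimate Heavy Forwarder use cases.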
Last but not least
Splunk is a robust product and there are tons of different ways to oil the gears, but it's worth the extra effort to go through the Splunk Docs and to understand the difference between A WAY to do things and the RIGHT WAY to do things. There are tons of myths and misconceptions about the best way to configure your Splunk environment.
For what it's worth, every environment is different. What works in your environment may not work in the next person's, but there are some things Splunk admins should try to avoid if possible. And if all else fails, don't hesitate to reach out to Aditum for Splunk Professional Services assistance.
#SplunkEnterprise #SIEM #IBMSecurity #Lies #MikeGhahremani #Pheniix #CiscoSDWAN #Logging #Indexer