Splunk is powerful software that gives enterprises access to a range of feature-rich applications to make the most of their data and turn it into observable elements in the form of charts, tables, and easy-to-understand dashboards. Splunk lets organizations leverage public clouds, on-premises data centers, apps and services, and third-party tools to derive useful insights from data.

Splunk Interview Questions for Experienced

1. What are Splunk commands? List out some of the basic Splunk commands.

Splunk commands carry out actions or operations, such as searching, indexing, and identifying certain fields, to bring about desired results. Following are some common commands that you might need to use frequently on Splunk Enterprise-

  • Accum: The accum command calculates a running total of the numbers for events with numeric fields.
  • Chart: It generates results in a tabular format, which allows charting.
  • Timechart: You can create a time series chart along with the corresponding statistics table using this command.
  • Tags: The tags command lets you annotate the fields specified in your search results.
  • Rare: When you use rare, it displays the least common values in a field.
  • Cluster: When you need to group similar events together, you can use the cluster command.
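
For instance, here is a minimal sketch of a few of these commands in use (the web index and the status and host fields are assumptions, not part of any standard setup):

    index=web sourcetype=access_combined | timechart span=1h count BY status

    index=web | rare limit=5 status

    index=web | stats count BY host | accum count AS running_total

The first search charts hourly event counts per HTTP status, the second lists the five least common status values, and the third turns per-host counts into a running total.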

2. Name a few important Splunk search commands.

Here are some important search commands that are available on Splunk-

  • Abstract- It displays a brief summary of the text of the search results by creating a summary version of the text rather than displaying the original text.
  • Addtotals- Addtotals sums up numerical fields, and it also allows you to specify only certain fields to sum up instead of calculating the sum for every field.
  • Accum- Accum calculates the accumulated (running) sum of a numerical field.
  • Filldown- When you need to replace a set of NULL values with the last non-NULL value for a specific field, you can use the filldown command. When no list of fields is given, filldown is applied to all fields.
  • Typer- It calculates the eventtype field for search results that match a known event type.
  • Rename- This command renames a specific field, and you can also choose multiple fields to run it on with the help of wildcards.
  • Anomalies- The anomalies command computes an "unexpectedness" score for each event.
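
As an illustration, here is a hedged sketch combining a few of these commands (the sales index and its region, q1_sales, and q2_sales fields are assumptions):

    index=sales
    | table region q1_sales q2_sales
    | filldown region
    | addtotals fieldname=total_sales q1_sales q2_sales
    | rename total_sales AS "Total Sales"

Here, filldown fills empty region values with the last non-NULL one, addtotals sums only the two listed fields into total_sales, and rename gives the new field a friendlier name.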

3. State the difference between the stats and eventstats commands.

Here’s how stats and eventstats are different from each other-

  • Stats- The stats command calculates aggregate statistics, such as counts, sums, and averages, over your events or search results. The results are stored in newly created fields, and only the aggregated rows are returned.
  • Eventstats- While the eventstats command is quite similar to stats in certain ways, what differentiates it is that it adds the aggregate results inline to every event, so the original events are preserved alongside the new fields.
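
A quick illustration of the difference (the index and field names are assumptions):

    index=web | stats avg(bytes) AS avg_bytes BY host

    index=web | eventstats avg(bytes) AS avg_bytes BY host | where bytes > avg_bytes

The stats version collapses the results to one row per host, while the eventstats version keeps every original event and appends avg_bytes to each, which here lets you filter for events that exceed their host's average.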

4. Name the commands included in the “filtering results” category.

Following are the commands that are included in the “filtering results” category-

  • Search- Search retrieves events from indexes and filters the results of a previous search command. You can use wildcards, quoted phrases, keywords, and key/value expressions to retrieve the events.
  • Sort- With the sort command, you can sort the search results by the specified fields, in ascending, descending, or reverse order. You can also limit the number of results when using this command.
  • Where- The 'where' command filters results using 'eval' expressions, keeping only the results for which the expression evaluates to true. Compared with the 'search' command, 'where' allows more expressive conditions, such as comparing two fields against each other, which makes it suited to more in-depth investigation, for example finding matching conditions across different active nodes that run a particular application.
  • Rex- The 'rex' command extracts specific data or fields from events using regular expressions. For example, you can use 'rex' to define specific fields in an email ID, which allows you to separate the user ID, company, and domain elements (see the sketch after this list).
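
A hedged sketch of that rex example (the mail index and the email field are assumptions):

    index=mail
    | rex field=email "(?<user_id>[^@]+)@(?<company>[^.]+)\.(?<domain>.+)"
    | where isnotnull(company)
    | sort - _time

Here, rex splits each email ID into user_id, company, and domain fields, where drops events in which no company could be extracted, and sort orders the results newest first.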

5. What do you mean by the lookup command? State the difference between the inputlookup and outputlookup commands.

Splunk lookup commands retrieve specific fields from an external source, such as a CSV file, to enrich events with values that are not in the indexed data. Here are the differences between the inputlookup and outputlookup commands-

  • Inputlookup- It reads the contents of a specific lookup table into the search, typically as the first command, so the lookup itself serves as the input. The returned rows can then be matched against internal fields.
  • Outputlookup- The counterpart of inputlookup, the outputlookup command writes your search results out to a specified lookup table, where the stored fields can be reused by later searches.
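
A minimal sketch (the lookup file names and the src_ip field are assumptions). The first search reads a lookup table as its input; the second writes aggregated results out to one:

    | inputlookup known_bad_ips.csv

    index=web | stats count BY src_ip | outputlookup top_sources.csv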

6. What is Splunk Btool?

The Splunk btool command-line tool helps determine which settings are in effect and where those settings are configured on a Splunk Enterprise instance, which also makes it useful for troubleshooting configuration file issues. Configuration files in Splunk software are merged and loaded together, creating a functional set of configurations for executing tasks. Btool simulates this merging process and displays a report of the merged settings.
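
For example, to list every effective setting in inputs.conf along with the configuration file each value comes from:

    $SPLUNK_HOME/bin/splunk btool inputs list --debug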

7. What do you mean by File precedence in Splunk?

When troubleshooting Splunk, file precedence is something that needs to be considered by developers, architects, and administrators alike. Configuration files determine most aspects of Splunk's behavior, and copies of the same file can exist in several layered directories (system, app, and user). File precedence defines the order in which the Splunk software evaluates these copies, and that order depends on the context of each configuration, such as whether a setting applies globally or to a specific app or user.
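
For settings taken in the global context, the directories are evaluated in roughly the following order of precedence, from highest to lowest:

    $SPLUNK_HOME/etc/system/local
    $SPLUNK_HOME/etc/apps/<app>/local
    $SPLUNK_HOME/etc/apps/<app>/default
    $SPLUNK_HOME/etc/system/default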

8. State the difference between ELK and Splunk.

Here are the major differences between ELK and the Splunk software-

  • ELK is an open-source stack that brings together ElasticSearch, LogStash, and Kibana to carry out functions such as searching, monitoring, analyzing, and visualizing machine data. Splunk, in contrast, is closed-source enterprise software that handles the same tasks: searching, visualization, analysis, and monitoring of machine data.
  • In ELK, LogStash and Kibana are combined with the ElasticSearch tool, which lets ELK function much like Splunk. Splunk can likewise be integrated with a range of tools, such as Amazon GuardDuty, OverOps, and Wazuh.
  • The Elastic Stack is used by well-established companies, such as Shopify, Slack, Uber, and Udemy, for searching, visualizing, and analyzing data. Reputed companies that leverage the Splunk software include Intuit, Starbucks, Yelp, and Blend, among others.
  • The Elastic Stack does not come with pre-loaded features and wizards, so users have to install plugins and extensions (or make use of Kibana) for certain features. Splunk ships with a variety of pre-loaded features and wizards, and plugins and extensions can be used in addition to the applications that come with the software.

9. Explain what the Dispatch Directory is.

The Dispatch Directory contains a subdirectory for every search that is either completed or still in progress. The default location of the Dispatch Directory is as follows-

$SPLUNK_HOME/var/run/splunk/dispatch

10. State the difference between search head pooling and search head clustering.

Search heads are Splunk Enterprise instances that distribute search requests to other instances, known as search peers, which perform the actual indexing and searching of the data. The search head then merges the results and returns them to the user. Search head clustering and search head pooling are two ways of implementing distributed search.

  • Search Head Pooling- Search head pooling uses shared storage to configure multiple search heads so that they share configuration and user data. Pooling multiple search heads helps with horizontal scaling when the data is being searched by many users. In this context, pooling means sharing of resources.
  • Search Head Clustering- A search head cluster is a set of search heads that acts as a centralized resource for searching. Within the cluster, the searches, results, and dashboards are accessible to every member.

11. What do you mean by SF (Search Factor) and RF (Replication Factor)?

Search Factor and Replication Factor are functions pertaining to search head clustering and index clustering. 

  • Search Factor- Associated with index clustering, the search factor determines the number of searchable copies of data maintained by the indexer cluster. The default value of the search factor is 2.
  • Replication Factor- Associated with both index clustering and search head clustering, the replication factor determines the number of copies of data maintained by an indexer cluster, as well as the minimum number of copies of search artifacts maintained by a search head cluster. The default value of the replication factor is 3.
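
On the manager node of an indexer cluster, for instance, these values are set in server.conf. A minimal sketch (older Splunk versions use mode = master, newer ones mode = manager):

    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2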

12. Explain what a fish bucket and a fish bucket index are.

Splunk fishbucket is a subdirectory within Splunk that monitors and tracks how much of a file's content has been indexed. It stores two kinds of information for this purpose: seek pointers and cyclic redundancy checks (CRCs).

13. What do you mean by buckets? Explain the Splunk bucket lifecycle.

The Splunk software stores data in directories known as buckets. Each bucket contains data events for a particular time frame, and buckets pass through various stages as the data ages. Following are the stages through which a bucket goes-

  • Hot bucket- A hot bucket contains newly indexed data and is open for writing.
  • Warm bucket- When the data ages a little, it is rolled out of the hot bucket into a warm bucket. An index typically contains several warm buckets.
  • Cold bucket- When the data rolls out of a warm bucket, it is stored in a cold bucket.
  • Frozen bucket- When data ages out of a cold bucket, it rolls to frozen. By default, the indexer deletes frozen data, but it can be archived instead, and archived data can later be thawed.
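
Bucket locations and aging are controlled per index in indexes.conf; here is a hedged sketch for a hypothetical index named web:

    [web]
    # hot and warm buckets
    homePath = $SPLUNK_DB/web/db
    # cold buckets
    coldPath = $SPLUNK_DB/web/colddb
    # restored (thawed) archive buckets
    thawedPath = $SPLUNK_DB/web/thaweddb
    # roll data to frozen after 180 days
    frozenTimePeriodInSecs = 15552000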

14. Explain how you would set the default search time in Splunk 6.

The default search time in Splunk 6 can be specified in the "ui-prefs.conf" configuration file. If you want the setting to apply to all users as the default, edit the copy of the file in this directory- $SPLUNK_HOME/etc/system/local
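
A sketch of the relevant stanza; the time range shown here (from the start of today to now) is only an example:

    # $SPLUNK_HOME/etc/system/local/ui-prefs.conf
    [search]
    dispatch.earliest_time = @d
    dispatch.latest_time = now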

15. What is the best way to clear Splunk’s search history?

The search history for Splunk can be cleared by deleting the following file on the Splunk server- $SPLUNK_HOME/var/log/splunk/searches.log
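
A minimal sketch, run from $SPLUNK_HOME/bin (note that this permanently deletes the history):

    ./splunk stop                                  # stop Splunk before removing the file
    rm "$SPLUNK_HOME/var/log/splunk/searches.log"  # delete the search history log
    ./splunk start                                 # start Splunk again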

16. How do you reset the Splunk admin (administrator) password?

The right way to reset the Splunk admin password depends on the Splunk version you're using. If you use Splunk 7.1 or a higher version, here are the steps you must follow-

  • To begin with, stop Splunk Enterprise.
  • Find the 'passwd' file and rename it to 'passwd.bk'.
  • Create a file named 'user-seed.conf' in the following directory- $SPLUNK_HOME/etc/system/local
  • In that file, add a [user_info] stanza with PASSWORD = NEW_PASSWORD, replacing NEW_PASSWORD with the password of your choice (see the sketch after this list).
  • Restart Splunk Enterprise and log in with the newly created password.

For versions prior to 7.1, stop Splunk Enterprise, rename the 'passwd' file, start Splunk Enterprise again, log in with the default credentials (admin/changeme), and follow the instructions when asked to change the password.
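
A sketch of the 'user-seed.conf' contents (USERNAME and NEW_PASSWORD are placeholders to replace with your own values):

    # seeds admin credentials on the next start of Splunk Enterprise
    [user_info]
    USERNAME = admin
    PASSWORD = NEW_PASSWORD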

17. Explain how Splunk avoids duplicate indexing of logs.

Duplicate indexing of logs is avoided in Splunk by the Splunk fishbucket, a subdirectory within Splunk that keeps track of how much of a file and its contents have been indexed. The fishbucket subdirectory's default location is /opt/splunk/var/lib/splunk

18. Name the commands used to restart the Splunk Web Server and the Splunk Daemon.

The command used to restart the Splunk Web Server is "splunk restart splunkweb", and the command for restarting the Splunk Daemon is "splunk restart splunkd".

19. Name the commands used to enable and disable Splunk boot-start.

The command used for enabling Splunk boot-start is $SPLUNK_HOME/bin/splunk enable boot-start

For disabling Splunk boot-start, here's the command that should be used- $SPLUNK_HOME/bin/splunk disable boot-start

Bridge the gap between software developers and operations and develop your career in DevOps by choosing our unique Post Graduate Program in DevOps. Enroll in the PGP, offered in collaboration with Caltech CTME, today!

Conclusion

Software like Splunk Enterprise is extremely useful in increasing an organization's ability to deliver services and products at high velocity. Compared with various other tools and traditional software, Splunk enables efficient searching, monitoring, analysis, and visualization of machine data. And courses such as Simplilearn's Post Graduate Program in DevOps, offered in collaboration with Caltech, can help you develop the skills to adopt best practices and leverage powerful tools like Splunk to enhance the operational efficiency of organizations.

If you have any doubts or queries regarding Splunk interview questions, then feel free to post them in the comments below. Our team will review them and get back to you with the solutions at the earliest.
