File Indexing In Golang

I have been working on a pet project to write a File Indexer, which is a utility that helps me to search a directory for a given word or phrase.

The motivation behind building this utility was to be able to search the chat log files for dgplug. We have a lot of online classes and guest sessions, and at times we only remember a name or a phrase used in a class; there is currently no way to track down the right log file from that. I thought I would take a stab at this problem, and since I am trying to learn golang I implemented my solution in it. I spread the work over roughly two weeks, spending time both to upskill on certain aspects and to come up with a clean solution.

Exploration

This started with exploring existing solutions, because why not? It is usually better to improve an existing solution than to write your own. I didn't find anything that suited our needs, so I ended up writing my own, but the exploration led me to a few libraries that could be useful to us: fulltext and Bleve.

I found bleve to have better documentation and a really beautiful idea behind it; the library is designed with a very minimal yet effective approach. By the end of the evaluation I was sure I was going to use it, and there was no going back.

Working On the Solution

After all the exploration I tried to break my problem into smaller problems and solve each one of them. The first was to understand how bleve works. It turns out that bleve first builds an index, for which we need to give it the list of files. Under the hood the index is basically a map structure: you give it an id and the content to be indexed. So what could be a unique identifier for a file in a filesystem? The path of the file. I used the path as the id and the content of the file as the value.
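
To make that concrete, here is a minimal sketch of the idea as I understood it (the index name matches the one in the response shown later; the path and content here are just placeholders):

package main

import "github.com/blevesearch/bleve"

func buildIndex() error {
	// Create an on-disk index; the mapping tells bleve how to analyse documents.
	index, err := bleve.New("irclogs.bleve", bleve.NewIndexMapping())
	if err != nil {
		return err
	}
	defer index.Close()

	// The file path acts as the unique id and the file content is the value.
	return index.Index("logs/some/hey.txt", "content of the file goes here")
}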

After figuring this out I wrote a function which takes a directory as its argument and gives back the path and content of every file in it. After a few iterations of improvement it diverged into two functions: one responsible for collecting the paths of all the files, and another that just reads a file and returns its content.

// fileNameContentMap walks the root directory and builds a FileIndexer
// entry (path + content) for every file it finds.
func fileNameContentMap() []FileIndexer {
	var ROOTPATH = config.RootDirectory
	var files []string
	var filesIndex FileIndexer
	var fileIndexer []FileIndexer

	// Collect the path of every regular file under ROOTPATH.
	err := filepath.Walk(ROOTPATH, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() {
			files = append(files, path)
		}
		return nil
	})
	checkerr(err)

	// Read each file and pair its content with its path.
	for _, filename := range files {
		content := getContent(filename)
		filesIndex = FileIndexer{Filename: filename, FileContent: content}
		fileIndexer = append(fileIndexer, filesIndex)
	}
	return fileIndexer
}
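
getContent is not shown above; a minimal sketch of what it could look like (the actual implementation in the repository may differ) is:

// getContent reads the whole file and returns its content as a string.
func getContent(filename string) string {
	file, err := os.Open(filename)
	checkerr(err)
	// Whenever you Open, always defer Close (the memory leak mentioned later).
	defer file.Close()

	data, err := ioutil.ReadAll(file)
	checkerr(err)
	return string(data)
}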

Each entry is a struct which stores the name (path) of a file and its content, and since there can be many files I need a slice of that struct. This is how a simple data structure gradually evolves into a more complex one.
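
For reference, the struct looks roughly like this (field names taken from the snippet above):

// FileIndexer pairs a file's path with its content; a slice of these
// is what gets fed into the bleve index.
type FileIndexer struct {
	Filename    string
	FileContent string
}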

Now I have the utilities for getting all the files, getting the content of each file and making an index.

This forms a crucial step of what we are going to achieve next.

How Do I Search?

Now that the part which prepares my data was done, the next logical step was to retrieve search results. The way we search for something is by passing a query, so I sketched out a function which accepts a string and then went on a documentation spree to find out how to search in bleve. I found a simple implementation which returns the id of the matching file, which is its path, along with a match score.


// searchResults opens the index and runs a query-string search against it,
// returning the matching document ids (file paths) and their scores.
func searchResults(indexFilename string, searchWord string) *bleve.SearchResult {
	index, err := bleve.Open(indexFilename)
	checkerr(err)
	defer index.Close()

	query := bleve.NewQueryStringQuery(searchWord)
	searchRequest := bleve.NewSearchRequest(query)
	searchResult, err := index.Search(searchRequest)
	checkerr(err)
	return searchResult
}

This function opens the index, searches for the term, and returns the results.

Let’s Serve It

After all that was done I needed a service which does this on demand, so I wrote a simple API server with two endpoints: index and search. The way mux works is that you map an endpoint to the handler function that should serve it. I had to restructure the code to make this work. I also faced a crazy bug which, when I narrowed it down, turned out to be a memory leak: I had left the file read stream open. So remember, whenever you Open, always defer Close.
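
Here is a rough sketch of that wiring; the handler names and the port are my own illustrative choices, not necessarily what the project uses:

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// indexHandler and searchHandler are placeholder names: one builds the
// bleve index, the other calls searchResults and writes the result as JSON.
func indexHandler(w http.ResponseWriter, r *http.Request)  { /* build the index */ }
func searchHandler(w http.ResponseWriter, r *http.Request) { /* query the index */ }

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/index", indexHandler)
	r.HandleFunc("/search", searchHandler)
	http.ListenAndServe(":8080", r)
}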

I used Postman to test it heavily and it was returning good responses. A dummy response looks like this:

 [{"index":"irclogs.bleve","id":"logs/some/hey.txt","score":0.6912244671221862,"sort":["_score"]}]

Missing Parts?

The first missing part was that I didn't use any dependency manager, which Kushal pointed out to me, so I ended up using dep for that. The next one was the best problem: how do I auto-index a file? Suppose my service is running and I add one more file to the directory; this file's content wouldn't come up in a search because the indexer has not run on it. This was a beautiful problem and I approached it from many different angles. First I thought I would re-run the service every time I add a file, but that's not a graceful solution. Then I thought I would write a cron job which pings /index at regular intervals, and yet again that was a bad option. Finally I thought about detecting changes to the files themselves, which led me to explore gin, modd and fresh.

Gin was not very compatible with mux, so I didn't use it. modd was very nice, but I need to kill the server to restart it, since two services cannot run on a single port, and every time I killed the service I killed the modd daemon too, so that possibility was also ruled out.

Finally, the best solution was fresh, although I had to write a custom config file to suit the requirement. It still has issues with nested repository indexing, which I am still figuring out.

What’s Next?

This project is yet to be containerised and it is missing test cases, so I will be working on those as and when I get time.

I have learnt a lot of new things about filesystems and how they work because of this project. It helped me appreciate a lot of golang concepts and made me realise the power of static typing.

If you are interested you are welcome to contribute to file-indexer. Feel free to ping me.

Till then, Happy Hacking!

 


Template Method Design Pattern

This is a continuation of the design pattern series.

I had blogged about Singleton once, when I was using it very frequently. This blog post is about the use of the Template Design Pattern. So let’s discuss the pattern and then we can dive into the code and its implementation and see a couple of use cases.

The Template Method Design Pattern is a pattern to follow when there is a series of steps that need to be executed in a particular order. The next question that arises is, "Isn't every program a series of steps that has to be followed in a particular order?"

The answer is Yes!

Where this pattern differs is that the steps are a series of methods that have to be executed in the given order. As the name suggests it is the Template Method Design Pattern, with stress on the word method, because that is what makes it a different ball game altogether.

Let's understand this with the example of eating at a buffet. Most of us follow a similar set of steps when eating at a buffet: we all go for the starters first, followed by the main course and then, finally, dessert. (Unless it is Barbeque Nation, in which case it's starters, starters and starters :))

So this is kind of a template for everyone: Starters --> Main course --> Desserts.

Keep in mind that the content in each category can be different depending on the person, but the order doesn't change, which gives us a way to have a template in the code. The primary use of any design pattern is to reduce duplicate code or solve a specific problem. Here this pattern solves the problem of code duplication.

The Template Method Design Pattern depends on, or rather is very tightly coupled with, abstract classes. Abstract classes are themselves a template for derived classes to follow, but the Template Method Design Pattern takes it one notch higher, where you have a template within a template. Here's an example of a BuffetHogger class.

from abc import ABC, abstractmethod

class BuffetHogger(ABC):

    @abstractmethod
    def starter_hogging(self):
        pass

    @abstractmethod
    def main_course_hogging(self):
        pass

    @abstractmethod
    def dessert_hogging(self):
        pass

    def template_hogging(self):
        self.starter_hogging()
        self.main_course_hogging()
        self.dessert_hogging()

So if you look here, starter_hogging, main_course_hogging and dessert_hogging are abstract methods, which means every derived class has to implement them, while template_hogging uses these methods and stays the same for every derived class.

Let's have a Farhaan class which is a BuffetHogger and see how it goes.

class Farhaan(BuffetHogger):
    def starter_hogging(self):
        print("Eat Chicken Tikka")
        print("Eat Kalmi Kebab")

    def __call__(self):
        self.template_hogging()

    def main_course_hogging(self):
        print("Eat Biryani")

    def dessert_hogging(self):
        print("Eat Phirni")

Now you can spawn as many BuffetHogger classes as you want, and they'll all have the same way of hogging; instantiating Farhaan and calling it (Farhaan()()) runs template_hogging, which executes the three steps in order. That's how we solve the problem of code duplication.

Hope this post inspires you to use this pattern in your code too.

Happy Hacking!

Benchmarking MongoDB in a container

The database layer of an application is one of its most crucial parts, because, believe it or not, it affects the performance of your application. Now, with micro-services getting so much attention, I was wondering whether having a database in a container makes a difference.

Most of the containers we commonly see are stateless containers, meaning they don't retain the data they generate, but there is a way to have stateful containers: mounting a host volume in the container. Having said that, there could be added latency in database requests, so I wanted to measure how large this latency is and what difference it makes between a native installation and an installation inside a container.

I am going to run a simple benchmarking scheme: I will make 200 insert (i.e. write) requests, keeping all other factors constant, plot the time taken for these requests, and see what comes out of it.

I borrowed a quick script to do this from this blog. The script is simple: it uses pymongo, the Python MongoDB driver, to connect to the database and make 200 entries in a random database.


import time
import pymongo

m = pymongo.MongoClient()

doc = {'a': 1, 'b': 'hat'}

i = 0
while i < 200:
    start = time.time()
    # insert() is the legacy pymongo API used by the original script;
    # newer versions of pymongo use insert_one() instead.
    m.tests.insertTest.insert(doc, manipulate=False, w=1)
    end = time.time()

    executionTime = (end - start) * 1000  # convert to ms
    print(executionTime)

    i = i + 1

So first I installed MongoDB natively, ran the above script twice, and took the second run's results into consideration. Then I plotted the values against the number of requests. The first request takes longer because it has to establish the connection and carries all that overhead; the plot I got looked like this.

 

[Graph: MongoDB native, time taken in ms v/s number of requests]

The graph shows that the first request took about 6 ms but the subsequent requests took far less time.

Now it was time to try the same thing in a container, so I did a docker pull mongo, mounted a local volume into the container, and started the container with:

docker run --name some-mongo -v /Users/farhaanbukhsh/mongo-bench/db:/data/db -d mongo

This mounts the volume I specified to /data/db in the container. Then I did a docker cp of the script, installed the dependencies, and ran the script twice again so that file creation doesn't skew the timing.

To my surprise the first request took about 4 ms, but the subsequent requests took a lot more time.

[Graph: MongoDB running in a container, time in ms v/s number of requests]

 

And when I compared them, the time difference for each write, i.e. the latency of each write operation, was considerable.

[Graph: comparison between native and containerised MongoDB]

I had expected there to be some difference in time and performance, but I never thought it would be this huge. Now I am wondering what the solution to this performance issue is, and whether we can reach a point where containerised performance is as good as native.

Let me know what you think about it.

Happy Hacking!

Debugging Python with Visual Studio Code

I have started using Visual Studio Code, and to be honest, I feel it's one of the best IDEs on the market. I'm still a Vimmer; given a chance I still use Vim for small edits or for carrying out nifty text transformations. After Vim, the next tool that has really impressed me is VS Code; the innovations the team is making and the utility it provides are almost a super power.

This post is about one of the utilities I have been using very recently, and a skill I have been trying to hone for a long time. For everyone who writes code there comes a time when they need to figure out what is going wrong; there is a need to debug the code.

The most prominent and widely used debugging tool is the print statement. To be really honest, it doesn't feel (to me) quite right to use print statements to debug my code, but it's the handiest way to figure out the flow and inspect each variable. I've tried a lot of debuggers and it always feels like extra effort to actually step up and use them. That could be one of the reasons I haven't used them very intensively. (Although I have used pudb extensively.)

But with VS Code, the debugger is integrated really well; it feels very natural to use. Recently, when I was working on a few scripts and trying to debug them, I explored the Python debugger in VS Code a little more.

So I have this script and I want to run the debugger on it. You hit Ctrl + Shift + P (Cmd + Shift + P on macOS), which opens the command palette; just type debug and you will see the option Debug: Start Debugging.

 


 

This actually creates a launch.json file in your project. You can put all your configuration in here. We’ll edit the config file as we go; since it is not a Django or Flask project we will use the current file configuration. That looks like this:

{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "${file}"
}

You can set pythonPath here if you are using a virtual environment; name sets the name of the configuration, type is the type of the file being debugged, and request controls how the debugger is launched. Let's make our config a bit more customised:
{
    "name": "Facebook Achieve Debug",
    "type": "python",
    "request": "launch",
    "program": "${file}"
}
[Screenshot: launch.json with a breakpoint set at line 50]

If you observe, there is a red dot at line 50. That is called a breakpoint; it is where the program will pause so that you can observe variables and follow the flow of the program.

Let's see what the screen looks like when you do that.

[Screenshot: the editor in debug mode]

This is the editor in full flow: you can see the call stack being followed, and you can also go and inspect each variable.

With the debug console (lower right pane) you can even run code to inspect things further. Now, let us look at the final config and see what is going on.

{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "pythonPath": "/Users/farhaanbukhsh/.virtualenvs/facebook_archieve/bin/python",
    "args": [
        "--msg",
        "messages"
    ]
}

If you observe, I have pythonPath set to my virtualenv, and I have one more key, args, which holds the command-line arguments to be passed to the script.

I still use print statements sometimes, but I have made it a point to reach for the debugger as early as possible, because, believe it or not, it definitely helps a lot and saves time.

Home Theatre!

Due to a lot of turmoil in my life in the recent past, I had to move in with a friend. Abhinav is an old friend and college mate; we have hacked on a lot of software and hardware projects together, but this one is one of the coolest hacks of all time, and since we are flatmates now it solved a lot of issues. We also had his brother Abhishek around, so the hack became even more fun.

The whole idea began with the thought of putting the old laptops we have to use as servers; we just wanted to make the best of the machines we had. Abhinav had already done a few setups, but then we landed on building an HTPC, which stands for Home Theatre PC or media centre: basically a one-stop shop for all our needs, movies, TV shows and music. We came up with a nice arrangement which requires a few things. The hardware we have:

  1. Dell Studio 1558
  2. Raspberry Pi 3
  3. And a TV to watch these on 😉

When we started configuring this setup we had the desktop version of Ubuntu 18.04 installed, but we figured out that it was slowing the machine down, so we switched to the Ubuntu Server edition. This was a learning experience, because I had never installed a server version of an operating system before; I always used to wonder what kind of interface these versions give you. Without any doubt, it is just command-line utilities for everything, from partitioning to network configuration.

Once the server was installed we just had to turn it into a machine which could support our needs, which basically meant installing a few packages.

We landed on something called the Atomic Toolkit. A big shoutout to the team for developing this amazing installer, which has an ncurses-like interface and can run anywhere. Using this toolkit we installed and configured CouchPotato, Emby and Headphones.


This was more than enough; we could automate a lot of things in our lives with this kind of setup, from Silicon Valley to Mr. Robot. CouchPotato helps us get the best quality videos and Emby gives us a nice dashboard to show all the content we have.

I don't use Headphones much because I love another music application, but Headphones being a one-stop shop is not a bad thing either. All of this was set up on the Dell Studio machine, and we also gave it a static IP so we know which address to hit.

Our server was up, running and configured. Now we needed a client to talk to this server. We have a TV, but that TV is not smart enough, so we used a Raspberry Pi 3 and attached it to the TV over the HDMI port.

We installed OSMC on the Raspberry Pi and configured it to use Emby and listen to the Emby server; once we booted it up it was very straightforward. This made our TV look good and a little smarter, and it opened the way to thousands of movies, music and podcasts. Although I don't know whether setting up this system was more fun than watching those movies will be.

 

Writing Chuck – Joke As A Service

Recently I got really interested in learning Go, and to be honest I found it to be a beautiful language. I personally feel it has the performance boost of a static language along with the quick-prototyping, get-things-done philosophy of a dynamic language.

The real inspiration to learn Go was the amazing number of tools written in it and the ease with which those tools perform, even though they seem quite heavy; one of the good examples is Docker. So I thought I would write a utility for fun. I have been using fortune, a Linux utility which prints random quotes from a database, and I thought I would write something similar but with jokes. While searching for what I could do, I landed on jokes about Chuck Norris, or as we say, facts about him. I found chucknorris.io, which has an API that returns different jokes about Chuck, and there was my opportunity to build something; I chose Go for it.

JSON Parsing

The initial version of the utility I put together was very simple: it made a GET request, read the response into the given format and displayed the joke. But even with this implementation I learnt a lot of things, the most prominent being how a variable is exported in Go, i.e. how it can be made available across scopes, and how to parse the JSON of a received response to store the useful information in a variable.

The mistake I was making in my initial code was declaring the fields of the struct with lower-case names. This caused a problem because, although the values got stored in the struct, I couldn't use them outside the function in which I had declared it. It took me a while to figure this out, and it was really nice to learn about it. Along the way I also learnt how to make a GET request, parse the JSON and use the received values.

Let's walk through the code. The initial part is a struct with a few fields inside it; the Category field is a slice of strings, which can hold as many elements as it receives. The interesting part is the way you can specify which key from the received JSON gets stored in which field of the struct: you can see the json:"categories" tag, and that is the way to do it.

In the rest of the code I make a GET request to the given URL; if it returns a response it goes into res, and if it returns an error it is handled via err. The key part here is how marshalling and unmarshalling of the JSON takes place.

This is basically folding and un-folding the JSON; once that is done and the values are stored, retrieving a value is just dot notation and done. There is one more interesting part: we pass &joke, which, if you have a C background, you will recognise as passing the memory address, i.e. pass by reference.
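
The original snippets are embedded as gists on the blog and are not reproduced here, but based on the description above the code looks roughly like this (field names other than Category, and the exact endpoint, are my reading of the chucknorris.io API, so treat them as illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

// Joke mirrors the JSON returned by chucknorris.io. The fields must be
// exported (start with an upper-case letter), otherwise encoding/json
// silently ignores them (the mistake mentioned above). The struct tags
// map each field to its key in the received JSON.
type Joke struct {
	Category []string `json:"categories"`
	ID       string   `json:"id"`
	Value    string   `json:"value"`
}

func fetchJoke() (Joke, error) {
	var joke Joke
	res, err := http.Get("https://api.chucknorris.io/jokes/random")
	if err != nil {
		return joke, err
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		return joke, err
	}

	// Unmarshal (un-fold) the JSON into the struct; &joke passes a reference.
	err = json.Unmarshal(body, &joke)
	return joke, err
}

func main() {
	joke, err := fetchJoke()
	if err != nil {
		panic(err)
	}
	fmt.Println(joke.Value)
}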

This was working well and I was quite happy with it, but there were two problems I faced:

  1. The response used to take a while to return the jokes
  2. It doesn’t work without internet

So I showed it to Sayan and he suggested building a joke-caching mechanism. This would solve both problems: since the jokes are stored locally on the file system they take less time to fetch, and there is no dependency on the internet except at the time you are caching jokes.

So I designed the utility so that you can cache as many jokes as you want; you just run chuck --index=10 and it caches 10 jokes for you and stores them in a database. A random joke is then picked from those and shown to you.

I learnt to use flag in Go and also how to integrate a sqlite3 database into the utility, but the best learning was handling files. My logic was that any time you cache, you should get a fresh set of jokes, so when you cache I completely delete the database and create a new one. To do this I need to check whether the database already exists and, if it does, remove it. I went looking for how to do that in Go; there are a bunch of built-in APIs that help, but they were misleading for me. There are os.Stat, os.IsExist and os.IsNotExist. What I understood was that os.Stat would give me the status of the file, while the other two could tell me whether the file exists or not; to my surprise, things don't work like that. IsExist and IsNotExist are two different error wrappers, and guess what, the negation of IsExist is not IsNotExist. Good luck wrapping your head around that. I eventually ended up answering this on Stack Overflow.
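
Here is a small sketch of the two pieces; the function name, database filename and overall structure are my own illustration, not necessarily how the utility is organised:

package main

import (
	"flag"
	"log"
	"os"
)

// freshDatabase removes an existing database file before re-caching.
// Checking the error returned by os.Stat is the reliable way to test for
// existence; os.IsExist and os.IsNotExist only classify an error, and
// the negation of os.IsExist(err) is not os.IsNotExist(err).
func freshDatabase(path string) error {
	if _, err := os.Stat(path); err == nil {
		// The file exists, so delete it and start fresh.
		return os.Remove(path)
	} else if !os.IsNotExist(err) {
		// Some other error (permissions etc.); report it.
		return err
	}
	// The file does not exist; nothing to remove.
	return nil
}

func main() {
	// Bound to the --index flag, e.g. `chuck --index=10`.
	count := flag.Int("index", 0, "number of jokes to cache")
	flag.Parse()

	if *count > 0 {
		if err := freshDatabase("chuck.db"); err != nil {
			log.Fatal(err)
		}
		// ...fetch and cache *count jokes into the new database...
	}
}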

After a few iterations of using it on my own and fixing a few bugs, the utility is ready, except that it is missing test cases, which I will integrate soon. This has helped me learn a lot of Go, and now I have something fun to show people. I am open to contributions and hope you will enjoy this utility as much as I do.

Here is a link to chuck!

Give it a try, and till then, Happy Hacking and write in Go!

Featured Image: https://gopherize.me/

The Open Organization

I was recently going through a few of the Farnam Street articles and landed on the article about how to read a book. It describes exactly that: the fact that there are types of books, and the fact that books can, in the words of Francis Bacon, "be gulped, some books chewed and others digested."

This basically signifies the intensity and the level of awareness with which you read a book. I have gulped lots of books, but The Open Organization is one of those that I wanted to chew on.

I wanted to learn how you can build an ecosystem where people are free to voice their opinions, and where failure is worn as a badge of honour for trying. This book filled me with thoughts of what it would be like if an organization really were an Open Organization.

There are lots of beautiful anecdotes in the book, and a lot of values it offers to think on.

The book talks about Purpose and Passion. People, especially us Millennials, have been spoiled to the extent that we don't really run after money but after a purpose, after a problem. We don't mind working crazy hours and being paid peanuts, but we do care about people, about how we are treated, and about the problem we are chasing. One of the quotes in the book says the basis of loyalty is a common purpose and not economic dependency. A lot of people I know believe in this: when you unite with an organization that is after the same problem as you, it's a match made in heaven.

The book talks about Passion, the passion for doing good, for making a dent in the universe, though sometimes you realise the universe doesn't give a damn.

One of the most amazing analogies is when the book compares the structure of an organization with web architecture, which is end-to-end and not centre-to-end: there is no central point of control, but there should be a central point of coordination. The organization is led by the leaders it selects, and meritocracy is the idea behind every decision.

The other idea that was completely new to me was the difference between crowd-sourcing and open-sourcing. To be honest, I had not thought of open source as a business model until the recent past. The thing with the wisdom of the crowd is that it works amazingly well when the work can be easily disaggregated and individuals can work in relative isolation. I love the point in the book that members of the organization should be inspired by the leader, not motivated; motivation is something they already have, and it is the reason they are joining your organization in the first place. I love this idea a lot because I have seen people complaining about their employees not being motivated enough, and I think this lack of inspired leadership is a reason.

"Great companies don't hire skilled people and motivate them, they hire already motivated people and inspire them." – Simon Sinek

I really enjoyed the way the power of purpose is laid out in the book. The other idea was meritocracy. I think of merit as having an amazing idea, with the idea being the sole reason for taking a certain action. Better ideas win; they are questioned and deliberated upon, and that is how innovation happens in the organization. People debate over an idea, question it, trash it; they don't just settle for something to avoid conflict. That very complacency, however, is what has crept into organizations where people don't debate ideas, just to avoid conflict and keep everyone happy. It was amazing to read stories where someone thought out of the box, wanted to bring in a new way of doing things, and convinced everyone that it was the right way and worth a try.

This book pushes back on the belief in hierarchy and brings lateral structures into the limelight, letting people know that the conventional ways of running an organization might have to change, to upgrade as it were to a newer version.

I got a lot of amazing ideas from it and, to be honest, I got to know how a person in an organization should be treated. I was awestruck by the insights in the book. I wish that someday I could mould an organization in this way. Theories are always romantic; I hope the execution and implementation turn out to be beautiful as well.