Docs in Pagure

I spent this week hacking on a feature called Docs, which gives you the ability to host your project's documentation in Pagure. I had never explored this feature before, so I started hacking on it.

This feature is pretty straightforward to use. Once you have your project up and running, go to the project's Settings and, under Project Options, click Activate Documentation; this activates a Docs tab in the main project, which you can use to host your docs. Now, this is a little tricky, because you need to clone and push to a different URL: the docs are maintained in a separate location due to security concerns. When you activate the project option, you are given a docs-specific URL; push your documents or static pages to that URL, and any page named index is automatically taken as the first page.

[Screenshot: Selection_026]

You have to click the more button beside the Git URLs to get your docs URL, and then you are good to go to host your static pages.

For people who want to hack on Docs in Pagure itself, you need to pull a few tricks.

First and foremost, you need to get the code from pagure.io, and then, after setting up Pagure for development, you need to run two servers:

  1. Pagure server
  2. Doc server

The scripts corresponding to them are runserver and rundocserver.

If you have ever hacked on Pagure, you will know that you have to log in, create a repo, and follow the same steps mentioned above to see the Docs tab.

Under pagure/default_config.py a new config key has to be added:

DOC_APP_URL = 'https://localhost:5001'

This tells Pagure that this instance supports Docs.

Now comes the tricky part: for the Docs tab to work, there should be a <project_name>.git in the docs repo directory, which isn't created automatically; you just need to copy the repository from the repos directory to docs. Once this is done, clone the project repo from docs, delete all the files there, and put in the files you want on the static page; a lot of formats are supported, like md, rst, etc. Add, commit, and push, and voila, you will see them in your local instance.

I am actually working on issue 469, in which Ryan has suggested making Docs more specific to static-page hosting. With the architecture Docs is based on, this is a straightforward task, but a really beautiful one that needs a bit of deliberation on what we want to achieve. Hope this gave you some insight into what I am trying to do.

More documentation on this can be found in the usage section of Pagure Docs.

Happy Hacking! 🙂

Search for Code in Pagure

I was trying to get into code search in Pagure, and the thing I landed on got really interesting and amazing. If you want a code-searching mechanism on your website, you need to look into something called indexing.

Think of the way search happens on e-commerce sites like Amazon, or on Google; in Google's case it is web scraping followed by indexing of the results. The point is the response time: when you search for something, you get results back in a fraction of a second.

Now imagine going through such a huge database in a fraction of a second: however much raw power you have, what you really need is a clever way to manage the data. I was watching a CS50 video in which Mark Zuckerberg talked about how he managed his DB; the first architectural decision he took was to have a different MySQL instance for each school, to reduce the time taken to search and to form relations.

That was a really clever move.

While I was searching for ways to add a code-search feature to Pagure, I landed on a Python-based library called Whoosh. It blew me away with the way it does its searches and maintains its database. I looked around a lot for tutorials on how one can understand indexing.

I landed on Building Search Engines using Python, and I liked the way he explained things like N-grams, edge N-grams, and how different files store different index words with their frequency and the path to the documents. I am yet to compare git grep vs. Whoosh.

While I was going through Whoosh I saw that it has performance issues, and then I started contemplating the fact that if search is not fast enough, there is no point in having it. I looked into HyperKitty and figured out that they were using Whoosh before; I assume they also suffered from performance issues, or maybe it was because Haystack was introduced for Django. As the name suggests, you can use it to find the needle in the haystack.

Yeah, you are right: I started looking for a Haystack for Flask and found Flask-whoosh. Again, the drawback was that it searches through databases and not files, whereas my application needs to search through files on the system.

Then came Xapian. There are a lot of core concepts involved in using or writing utilities with Xapian, so I went through its documentation. They have covered a lot of concepts and given examples, but the bottleneck still persists when it comes to file searching and performance. I found a nice application, Building Document Search, which might give me some hope, but a lot of work is still required there.

The whole concept is that, at a really high level, you need to do two things:

  1. Indexing
  2. Search

Indexing

Indexing is required to go through each file or record and build something called an index: the search words are extracted, stop words are filtered out, and a new database is built holding the frequency and location of each word. This is the most time-consuming process.

Search

This comprises forming a query, searching through the database built above, and returning the documents in which the word or phrase is found.
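As a toy illustration of these two steps (a from-scratch sketch of the idea, not how Whoosh or Xapian actually implement it), here is a minimal inverted index over a few in-memory "documents":

```python
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "is", "in", "of"}  # tiny illustrative stop-word list

def build_index(docs):
    """Indexing: map each word to the documents (and positions) it occurs in."""
    index = defaultdict(list)
    for path, text in docs.items():
        for pos, word in enumerate(re.findall(r"\w+", text.lower())):
            if word not in STOP_WORDS:
                index[word].append((path, pos))
    return index

def search(index, word):
    """Search: look the query word up in the prebuilt index."""
    return sorted({path for path, _ in index.get(word.lower(), [])})

docs = {
    "README.md": "Pagure is a git forge",
    "docs/index.rst": "Search the docs of Pagure",
}
index = build_index(docs)
print(search(index, "pagure"))  # ['README.md', 'docs/index.rst']
```

All the cost is paid up front in build_index; the search itself is a cheap dictionary lookup, which is exactly why the big sites feel instant.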

If you need to see a demo.

Till then, Happy Coding and Bingo!

Setting Postgres For Pagure

I normally use SQLite for development because of the ease with which you can see the file, browse through it, and edit it. That said, SQLite is good for development but not for production; one of the foremost reasons is that it doesn't handle concurrent writes well.

The other disadvantage is that SQLite doesn't give a damn if you have dangling foreign-key references, a problem I landed on recently. The way we categorize fork projects in Pagure is on the basis of parent_id: if a project has a parent_id, it's a fork; if it doesn't, it's not.

This worked out quite well until recently, when we figured out a flaw: what if the main project is deleted? The expected behavior is that the fork should remain accessible, but because of the parent_id dependency, the fork was becoming inaccessible. When you delete the main repo, the FK reference on the fork gets modified and becomes null.

This creates an anomaly, because the project is no longer a fork: it's a main repo and gets treated like one, which leads to a lot of repo-path chaos. Postgres comes into the picture here because I was able to have a dangling FK reference in SQLite, but when I tried to achieve the same thing in Postgres, it threw an integrity error.
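The difference is easy to reproduce with a toy schema (made-up table, not Pagure's real model): plain SQLite, which does not enforce foreign keys by default, happily deletes the parent row and leaves the fork's parent_id dangling, whereas Postgres would refuse with an integrity error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite only enforces FKs if you first run: PRAGMA foreign_keys = ON
conn.execute(
    "CREATE TABLE projects ("
    " id INTEGER PRIMARY KEY,"
    " name TEXT,"
    " parent_id INTEGER REFERENCES projects(id))"
)
conn.execute("INSERT INTO projects VALUES (1, 'main', NULL)")  # main repo
conn.execute("INSERT INTO projects VALUES (2, 'fork', 1)")     # its fork
conn.execute("DELETE FROM projects WHERE id = 1")  # Postgres would raise here
dangling = conn.execute(
    "SELECT parent_id FROM projects WHERE id = 2"
).fetchone()[0]
print(dangling)  # 1 -- the fork still points at a project that no longer exists
```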

Pagure uses SQLAlchemy as the ORM, so I just needed to set up Postgres on my system and provide the URL in pagure/default_config.py; the ORM magic makes all the queries just work.
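Concretely, the change is a single configuration value; the credentials below are placeholders for whatever user and database you create:

```python
# pagure/default_config.py -- placeholder credentials, adjust to your setup
DB_URL = 'postgresql://pagure:mypassword@localhost/pagure'
```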

Setting up Postgres is really easy because of the amazing documentation in the Fedora wiki. The only thing you need to be careful about, and which is a little tricky, is that you need to be the superuser before you switch to the postgres user: so first sudo su, and then su - postgres. Then follow the steps in the wiki to create a user and a database named pagure.

Private Repo on Pagure

One of my proposals for Pagure was to have private repositories. Private repositories are basically repositories that are visible only to you and to the people you give permission to.

To be honest, I thought it would require a few tweaks and I would be good to go, but that wasn't the case, and the insights I got working on this feature were amazing. I worked on this project in primarily three stages, each a challenge in its own right.

The three stages were:

  1. UI
  2. Database Query
  3. Tests

UI

The UI was supposed to have a checkbox saying "Private"; when a user ticks it, an existing project becomes private, or a new project is private from the time it is conceived.

Achieving this was a joyride: with Flask, I just needed to make changes to the form and the settings-page UI, and voila!

I introduced a Private column in the project table, and that was pretty much it. Nice and beautiful.

Database

This was the most challenging part for me. Since I had not worked much with databases, it was out of my comfort zone, and I actually went back to my database basics to check that I was doing things right.

In Pagure we use SQLAlchemy as the ORM layer; ORM stands for object-relational mapper. It is used to map database tables onto an object/class model of representing data. SQLAlchemy is a really powerful tool.

While figuring out how to get all the admins who can view private projects, I struggled a lot, since I was working with a function that forms the core of Pagure: if things go wrong with this function, the whole project takes a hit.
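To make the idea concrete, here is a toy sketch of the visibility rule we were after (not Pagure's actual query; the field names are made up, and in Pagure this is an SQLAlchemy query, not a list comprehension): a private project is listed only for its admins, while public projects are listed for everyone.

```python
def viewable_projects(projects, username):
    """Return the projects the given user is allowed to see.

    Toy illustration: each project is a dict with a 'private' flag
    and an 'admins' list of usernames.
    """
    return [
        p for p in projects
        if not p["private"] or username in p["admins"]
    ]

projects = [
    {"name": "public-repo", "private": False, "admins": ["alice"]},
    {"name": "secret-repo", "private": True, "admins": ["alice"]},
]
print([p["name"] for p in viewable_projects(projects, "bob")])
# ['public-repo'] -- bob cannot see alice's private repo
```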

So the challenge was to make minimal, independent changes that didn't compromise the existing functionality and yet introduced the new one. I struggled to achieve it and failed a lot of times, working hard to get it right, constantly moving to the board to figure out a solution on paper, then switching back to my screen to code it out.

I was so desperate to get this working that I even pinged Armin on IRC to ask my doubts about Flask and SQLAlchemy. All the while, the best support I got was from my mentor, Pingou.

Finally, after struggling a lot, I arrived at a very beautiful solution, and done!

Just when I thought I was done, there came the question of writing tests. Since I had altered a very major piece of functionality, I needed to test every aspect of it.

[Screenshot: Selection_021]

Testing

Testing was a herculean task, since I had not done a lot of testing before, so I had a lot to learn. For starters, the DB used for testing is an in-memory DB, not the one used by the app.

The session the app maintains has to be replicated for use in the tests, and I had to learn how to use pygit2 to actually initialize a repo with git init and use it.

Towards the end of this PR, my development evolved from writing code and then testing it, to writing the test first and then introducing code that passes it. It has been really amazing working on this feature, and I hope it will be integrated soon.

I think maybe a little more work is required on this feature. It feels really amazing to do this work.

The link to the branch on Pagure.

The link to the current Pull-Request.

Happy Hacking!

Investigating Python

I have been trying to implement private projects in Pagure. While doing that, I was struggling with the design of a certain function, and I constantly had to switch between the shell, the editor, and at times the browser.

I am not saying that is a bad thing or a good thing, but it led me to look for a debugger. I thought it might ease my task of finding what was going wrong and where, and it actually helped.

I used a Python debugger called pudb. It looks like Turbo C, but it's a lot more useful. It can be used in one of two ways:

  1. You can directly debug a script: pudb <your_script_name>.py
  2. When working with big projects, you may need to break inside a certain function; in that case, just put import pudb; pu.db at that point.
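For example, with a toy script like the one below (a made-up example, not from Pagure), uncommenting the pudb line drops you into the debugger right inside the function:

```python
# toy_factorial.py -- a made-up script to step through with pudb
def factorial(n):
    # import pudb; pu.db  # uncomment to drop into the pudb debugger here
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```

Run it as python toy_factorial.py with the line uncommented (or as pudb toy_factorial.py), and pudb stops at that point with the source, variables, and stack on screen.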


The most beautiful thing is that it just pops up an IDE out of nowhere. It gives deep insight into what the code is doing, how it is doing it, what stack it is maintaining, and what the values of the various variables are.

You can always set breakpoints so that you can investigate the code; you actually get to play detective. This is an important point when developing in an open-source project, since there are a lot of functions doing a lot of hocus-pocus.

This is one of those tools that might even help you understand the codebase better; it really helped me design my code better.

This is the script that I am trying to debug; the screen looks like this, and 'n' lets you step to the next line, which you can investigate using the stacks shown.

[Screenshot: Selection_017]

These are the few windows that you can navigate to and see what is going on.

[Screenshot: Selection_018]

Not only that, but you can also jump between different modules and set breakpoints.

[Screenshot: Selection_019]


This might help you get some cool insight about the project.

Happy Hacking !

Pagure CI

One of the first goals of my GSoC project is getting CI into Pagure. In my previous posts I have blogged about getting fedmsg to work, configuring Jenkins, and my favorite, Poor Man's CI. Well, Poor Man's CI evolved into Slightly Richer Man's CI and is now Pagure CI.

This is how it started: I was trying to test how Poor Man's CI works and how I could actually integrate it, when I hit a dead end. Poor Man's CI would not respond to a local Pagure instance because there is no Git server running, so one cannot clone the repo with a normal URL. I landed on a really amazing utility called Git Daemon; it gave me some hope, but it didn't last long, since it has the disadvantage that the URL has to be a git:// URL.

I somehow thought I needed to set up my Pagure as a proper application so that I could use the usual clone commands. I tried that, and I am still in the process of doing it; I am still setting up gitolite on that front.

While doing that, and because of my lazy nature, I found a neat hack: my Poor Man's CI was running on localhost, and so was my Jenkins, but I made the repo on pagure.io. The only things to understand here are that fedmsg should listen to all the messages, and the fedmsg plugin should be enabled in the repository on Pagure.

So, to recap, you have:

1. fedmsg-relay running and listening to all the production messages.

2. fedmsg-tail --really-pretty running to check that the messages are being read.

3. fedmsg-hub running in Poor Man's CI, consuming the messages and configured to consume new pull requests.

Voila! It worked: I got a build there, and the flag is attached to it.

[Screenshot: Selection_016]


After talking to Pierre, we arrived at the conclusion that we need a way to either merge the two applications or make Pagure talk to Poor Man's CI. I had a lot of doubts about how to achieve that; later, Pierre gave me some food for thought:

“If we are able to gather all information from Pagure and Poor Man’s CI could get that database access would it not do the deal?”

The way PMCI handles it is that the consumer written for fedmsg-hub parses the message received and, according to the message, triggers a Jenkins build and does all the work; it only ever queries a single table.
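Roughly, the consumer's job can be sketched like this (a toy illustration, not the real PMCI consumer; the topic string, payload field names, job name, and token are all made up): pick out the pull-request messages, read the repo and branch from the payload, and fire the parameterized Jenkins build.

```python
JENKINS_URL = "http://localhost:8080"  # assumed local Jenkins instance

def handle_message(topic, msg):
    """Toy sketch of a fedmsg consumer callback (names made up)."""
    if not topic.endswith("pagure.pull-request.new"):
        return None  # not a message this consumer cares about
    params = {"REPO": msg["repo_url"], "BRANCH": msg["branch"], "token": "BEEFCAFE"}
    query = "&".join(f"{k}={v}" for k, v in params.items())
    # PMCI actually makes this POST (via python-jenkins); here we just build the URL
    return f"{JENKINS_URL}/job/pagureExp/buildWithParameters?{query}"

url = handle_message(
    "org.fedoraproject.prod.pagure.pull-request.new",
    {"repo_url": "https://pagure.io/test.git", "branch": "feature"},
)
print(url)
```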

Pierre wanted no separate UI for PMCI, so we made it a hook; now a person can enable the hook and add the required information. I brought the whole consumer over as-is from PMCI. Consumers need to be registered in setup.py, and the dependencies need to be set up using python setup.py develop, which does a lot of magic behind the scenes.

One of the dependencies is the python-jenkins library, which provides various functionality. The whole of PMCI is reduced to three files in Pagure, the most important being consumer.py; I gathered a lot of the functionality into lib/pagure_ci.py and made a model in hooks/jenkins_hook.py. With most of the functionality in place, the only thing I then struggled with was my poor understanding of SQLAlchemy. I read up a lot and was able to debug the code and make it work.


The most peculiar thing I came across was that even the server needs to be run in the virtualenv, since the Jenkins library is installed there. Once that is done, it's like a train of dominoes: one thing leads to another. I will try to show it diagrammatically.

[Diagram: hook]

Step 2: Configuring Jenkins

Jenkins is one of the major parts of setting up Poor Man's CI. Let's look into how Jenkins can be configured and how we can make it automate the task.

Jenkins can be downloaded for your OS from the Jenkins website. After downloading, the mistake I made was using Jenkins with global credentials; as lsedlar pointed out on the channel, because of this I was not able to get the "Trigger by URL" option in the project.

The initial configuration is covered by lsedlar in his blog. I will be covering the extra configuration needed to have it working for local development. First and foremost is authentication; this can be done via Manage Jenkins –> Configure Global Security.

[Screenshot: Selection_013]


Give Read, View, and Discover permissions to anonymous, then add another user and give all the permissions to that user. You need to restart the Jenkins service:

sudo systemctl restart jenkins.service

On the web UI, Jenkins will ask you to sign in; create a user with the username you gave all the permissions to, and log in as that user. Now add a New Item and create a Freestyle Project. Configure the project: tick "This build is parameterized" and configure it according to Poor Man's CI. Once that is done, select the option shown below:

[Screenshot: Selection_014]

Once that is done, you can use this token to trigger a build using a POST request; the trick is that you need to pass the parameters with the URL too. Next, you need to tell Jenkins what to do and where to do it. Since we are dealing with Jenkins and Git, we need a local repo or some URL to the Git repo. For every operation carried out in the repository, the directory should have its group and user set to jenkins; alternatively, you can just put the repo in /var/lib/jenkins.

Download and install the Git Plugin for Jenkins. Once that is done, point the git plugin to the repository you are going to test.

[Screenshot: Selection_015]

Once Jenkins knows where to perform the actions, you need to tell it what to perform. This is done in the Build section of the configuration: select Execute Shell.


if [ -n "$REPO" -a -n "$BRANCH" ]; then
    # Point a throwaway "proposed" remote at the PR repo and fetch it
    git remote rm proposed || true
    git remote add proposed "$REPO"
    git fetch proposed
    # Start from master and do a trial merge of the proposed branch;
    # a failed merge makes the build fail, flagging the pull request
    git checkout origin/master
    git config --global user.email "you@example.com"
    git config --global user.name "Your Name"
    git merge --no-ff "proposed/$BRANCH" -m "Merge PR"
fi

We are almost done; the last thing we need is an API token for the user. Go to Manage Jenkins –> Manage Users and get the API token for the user. Make sure the branch you are passing as a parameter exists in the repository. Let's trigger the build using cURL.

USER: fhackdroid

API Token: 728507950f65eec1d77bdc9c2b09e14b

Token: BEEFCAFE

BRANCH: checking

curl -X POST http://fhackdroid:728507950f65eec1d77bdc9c2b09e14b@localhost:8080/job/pagureExp/buildWithParameters\?token\=BEEFCAFE\&REPO\=file:///$\{JENKINS_HOME\}/new_one_three\&BRANCH\=checking\&cause\=200