Making Evince “Behave” & GUADEC!


This is probably going to be the shortest post I will be writing, but then it’s GUADEC time!

My latest component is Evince, the document viewer. I am building on the same infrastructure we used for the Weather testing, and have put together some basic tests for the application. These can be found HERE

On the lighter side of things, I could do with a baggage zipper (much like a file zipper) that takes care of my packing at a click, and a way to mail me and my zipped luggage, to save us from the grueling travel :P

Apart from this, I am helping the team organize volunteers for GUADEC, and I realize I love organizational tasks. (I should take up more of these in the future too :D)

Anyway, returning to the old way of packing now. (I promise the next post will be better!)

Making applications “behave”


After the previous post, wherein we set up gnome-continuous, the next phase of my project involves writing tests for the applications installed in it. I started with gnome-weather.
To start with, one can upgrade the previously installed image to the latest version from build.gnome.org with “ostree admin upgrade” (restart required).
There are two main ways to approach writing tests:

  • By playing around directly with the installed tests located at “/usr/libexec/installed-tests/<application-name>”
  • By writing “features” and “steps” for testing the application via “behave”

In my case the former option doesn’t work quite as well as the latter. A major reason is that I find it faster to work on my local machine and then run the tests on the gnome-continuous platform, and also because the VM tends to run into major connectivity issues every now and then. So, this post will mostly talk about “behave”.

We put together a list of components which need work, along with the priority in which we will be working on them.
The compiled list can be found HERE.

Since resources which talk about behave are scarce, my mentors linked me to some good snippets for beginners. They helped me crawl through the first bits of my code for gnome-weather ( HERE ). And so I decided to document a short summary of how to make our applications “behave”. (TL;DR)

What is Behave?


Behave is a Behavior Driven Development (BDD) framework for Python. This testing methodology doesn’t use ‘tests’ in the usual sense, but replaces them with scenarios, which are basically the equivalent of ‘use cases’. These scenarios are organized into features, which are BDD’s equivalent of test suites and describe how some particular feature is supposed to work. Note that features can use BDD’s equivalent of ‘setup’ – Background – which describes the state the application under test should be put into before executing the following scenarios.

 

BDD’s core concepts are Gherkin and DRY

Gherkin

Gherkin is a Business Readable, Domain Specific Language. It is a subset of plain-text language which is designed to be both readable by non-technical folks and parsable by test automation tools. This language can also be used by designers to describe the desired behavior before the implementation starts. Another feature is that an engineer can use a failing Gherkin scenario as ‘steps to reproduce’. The core Gherkin concept is a step – an instruction which is both human-readable and can be parsed by a test automation tool. Step examples are:

   Given main Evolution window is displayed
    When I click ‘New Message’ button on the toolbar
    Then new composer window is opened
Note the Gherkin keywords – Given, When, Then. These core keywords are used to highlight the purpose of the step.

Given is used to describe prerequisites to be executed to put the system into a known state.
When is generally used to execute actions.
Then is used to verify the result or check some important condition.
Check out a tutorial on the basic concepts of Given/When/Then.

Though Behave uses and implements all Gherkin features, I suggest using a simplified version of this syntax, with these simple rules:

  • Use an asterisk (*) instead of When (and Given). ‘When’ is the most used keyword in scenarios, so we should keep it short.
  • The ‘Then’ keyword is used in the last step of the scenario only. The last step should verify the result of the whole scenario and should contain an assertion (assertEquals etc.), as in the sketch below.
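
For instance, a final verification step could look roughly like the following. This is only a minimal sketch: the step text, the context.app attribute and the widget lookup are invented for illustration and are not taken from the actual gnome-weather tests.

[sourcecode language="python"]
from behave import then

# Hypothetical final step of a scenario: it is the only step that verifies
# the overall result, so it carries the assertion.
@then(u'the forecast for "{city}" is displayed')
def forecast_is_displayed(context, city):
    # context.app is assumed to have been stored by an earlier step.
    assert context.app.child(city).showing, \
        "Expected the forecast panel for %s to be visible" % city
[/sourcecode]
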
Gherkin steps are organized into scenarios:

Scenario: Create a contact with categories
   * Evolution is opened
   * Create a new contact
   * Set “Full Name…” in contact editor to “Adam Jones”
   * Set “Categories…” in contact editor to “Business,Key Customer”
   * Save the contact
Asterisk here means “any keyword”, as step definitions don’t care which keyword is used.

DRY – Don’t Repeat Yourself

The DRY principle is important in BDD automation: in order to execute Gherkin scenarios, the automation tool has to map steps to pieces of code called step definitions. Since Gherkin steps can be used in various scenarios and are not organized into packages, classes, etc. (in other words, they have no hierarchy), each step must match one and only one step definition. As a result, steps should be written so that they can be re-used in various Gherkin scenarios, as step definition code knows about neither the previous nor the following steps.

Using the scenario from the previous section, we can create the following step definitions:

[sourcecode language="python"]
from behave import step

@step(u'Create a new contact')
def create_new_contact(context):
    ...
[/sourcecode]
Note that a step definition is basically a Python function with a @step decorator, which contains the step matching string. Also note the context variable, which carries the current execution state. Context should be used to store variables between steps, scenarios and features during test execution.
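
For example, here is a minimal sketch of two steps sharing state through context (the step texts and the attribute name are invented for illustration):

[sourcecode language="python"]
from behave import step

@step(u'Remember the contact name "{name}"')
def remember_contact_name(context, name):
    # Anything stored on context is visible to the following steps
    # of the same scenario.
    context.contact_name = name

@step(u'A contact name has been remembered')
def contact_name_has_been_remembered(context):
    assert getattr(context, 'contact_name', None), "No contact name was stored"
[/sourcecode]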

The step matching string can contain named parameters, using the parse format (think of an inverted format()). This allows parameters to be passed to steps:

[sourcecode language="python"]
from behave import step

@step(u'Set “{field}” in contact editor to “{value}”')
def set_field_to_value(context, field, value):
    ...
[/sourcecode]
Using the previous scenario, set_field_to_value here will be called twice:

[sourcecode language="python"]
set_field_to_value(context, field='Full Name…', value='Adam Jones')
set_field_to_value(context, field='Categories…', value='Business,Key Customer')
[/sourcecode]
Behave also allows calling steps within step definitions:

[sourcecode language="python"]
from behave import step

@step(u'Set contact categories to “{value}”')
def set_contact_categories_to_value(context, value):
    context.execute_steps(u"""
        * Set “Categories…” in contact editor to “%s”
        * Another step if required
    """ % (value))
    ...
[/sourcecode]
These simple features of Behave allow us to write generic, easy-to-maintain code, which doesn’t depend on actual scenarios.

Note that some code can be shared across projects, for instance handling file open/save dialogs, starting/stopping the application under test, etc.
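
As a rough sketch, a shared step like the following could live in a common steps module and be reused by several application test suites. The use of dogtail here is an assumption made for illustration; any other accessibility/automation layer would work the same way.

[sourcecode language="python"]
from behave import step
from dogtail.utils import run
from dogtail import tree

# Generic, application-agnostic step: the binary and the application name
# come from the Gherkin step text, so any feature file can reuse it.
@step(u'Start "{app_name}" via command "{command}"')
def start_app_via_command(context, app_name, command):
    run(command)
    # Keep a handle to the application's accessibility tree for later steps.
    context.app = tree.root.application(app_name)
[/sourcecode]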

Scenario = Test case, Feature = Test suite

Gherkin scenarios are organized into features. If some actions should be performed before each scenario, use the Background keyword:

Feature: Contact categories

Background:
  * Open Evolution via command and setup fake account
  * Open “Contacts” section
  * Select “Personal” addressbook

Scenario: Create a contact with categories
  …

Scenario: Set categories using Categories dialog
  …

Scenario: Contact is not listed in unchecked categories
  …
Note that Behave doesn’t have ‘setup’ or ‘teardown’ concepts, but you can still control a group of scenarios/features using tags:

@needs_google_goa_account
Scenario: Setup Google account in evolution via GOA
  …
In the environment file (environment.py) you can specify actions to be performed before and after the execution of scenarios/features carrying a tag:

[sourcecode language="python"]
def before_tag(context, tag):
    # behave passes the tag name without the leading '@'
    if tag == 'needs_google_goa_account':
        # setup google goa account here
        pass

def after_tag(context, tag):
    if tag == 'needs_google_goa_account':
        # remove google goa account here
        pass
[/sourcecode]
Scenarios and features have similar hooks: before_feature, before_scenario, after_scenario and after_feature.
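
For completeness, a minimal environment.py sketch using two of these hooks might look like this (the application being killed and the way it is stopped are only examples, not the actual gnome-weather setup):

[sourcecode language="python"]
import os

def before_scenario(context, scenario):
    # Start every scenario from a known state, with no stale instance running.
    os.system('killall gnome-weather 2> /dev/null')

def after_scenario(context, scenario):
    # Stop the application so the next scenario starts fresh.
    os.system('killall gnome-weather 2> /dev/null')
[/sourcecode]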

The above has helped me a lot in developing clarity about how to go about writing tests with behave. My code for gnome-weather is HERE. I will be updating it as and when I add new features.

Desktop Testing, GNOME Continuous and VMs..


Before writing tests, one needs to figure out what’s wrong with the existing ones and fix them. Or, better still, get them to run first.
This is where I started: with the tests we already have for gnome-weather. (Insert: this is where I realized my system needed a lot more packages, thrashing and fixing, to run anything close to tests.)

Usually, in my experience prior to this project, I would simply run the Python tests (via “make check”). However, as gnome-weather has quite an involved way of getting started, one needs gnome-desktop-testing-runner, installed via jhbuild:

[sourcecode language="bash"]
jhbuild build gnome-desktop-testing-runner
jhbuild run gnome-desktop-testing-runner org.gnome.Weather
[/sourcecode]

Following this, to be able to run the above tests, the app needs to be built with --enable-installed-tests; this is added to autogenargs in the file “~/.config/jhbuildrc”.
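
For example, since jhbuildrc is plain Python, the relevant bit of configuration looks roughly like this (the module name here is only an example; adjust it for the app you are testing):

[sourcecode language="python"]
# ~/.config/jhbuildrc
# Build the module with its installed tests enabled.
module_autogenargs['gnome-weather'] = '--enable-installed-tests'
[/sourcecode]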

I faced some at-spi issues, which I resolved by building  at-spi2-atk, at-spi2-core and pyatspi2-python2 with jhbuild.

In my case, however, some packages clashed, because of which deprecation errors kept intruding. To overcome this, I moved to a much saner approach: install a pristine environment for the tests. (Read: create a gnome-continuous VM and work on the installed tests.)

The rest of this post talks about setting up a VM for gnome-continuous. The image we will need is Here.
There are various VM managers available; the easiest to use is virt-manager, which is the one I used.

The foremost thing we will do is set up a virtual network (with forwarding to the physical network) which the VM can use, and over which we can ssh into it.

  1. Go to “Edit → Connection Details”
  2. Navigate to “Virtual Networks”
  3. Click “+” → input any name for the network → click Forward → Forward → Forward → check “Forwarding to physical network”
  4. Choose your connection (em1 usually, if wired) and finally, method = NAT
  5. Click Finish.

Your final output should be something like the below:

[Screenshot: the new virtual network in virt-manager]

After the network is set up, we will create the VM, which will use the above network to connect to the internet and for ssh connections. The steps to install the gnome-continuous image can be seen in the following screens.

[Screenshots: VM creation wizard] Click Forward → Finish.

Make sure the VM boots. A few troubleshooting tips:

  • Make sure the storage format is selected correctly: the default format is “raw”, while we are using “qcow2”, which might cause problems in booting the VM. You can correct this from the IDE Disk settings tab as follows: [Screenshot: IDE Disk settings, storage format set to qcow2]
  • Another issue can be the boot loader settings. Make sure the boot device is set to the hard disk. [Screenshot: boot options]

After this, hopefully, your VM should be up and running!
If not, you aren’t alone! :P Although the VM should ideally boot up by now, there is a slight bug because of which you might be prompted to log in. I was greeted by a blank GNOME screen and was stuck there. The workaround awaits you in the virtual terminal. The steps you need are:

  1. Switch to a virtual terminal by sending the combination Ctrl+Alt+F1/F2
  2. Log in as root
  3. Run “useradd test”
  4. and finally “passwd test”

This should get the VM up and running :D

[Screenshot: the gnome-continuous VM up and running]

More on testing to follow :)

 

 

SoC 2014 , Automated Tests for GNOME Continuous.


The past couple of weeks have been a cocktail of emotions, from being thrilled (with the SoC results) to amazed by all that I have learned so far and am going to learn. I am now at the “phew, lots to learn and implement” stage of my Summer of Code journey.
Since this is my first blog post about my project, I’d like to write about the utility of testing and why I chose this project. The next posts, which will follow soon, are about:

  • Setting up gnome-continuous with a VM
  • Dogtail, Sniff (and the issues I faced with them so far)
  • Introducing Behave

 

What is the project about? 

The ultimate aim of the project is to improve test coverage for GNOME components. Martin Simon and I, under the mentorship of Vadim Rutkovsky and Vitezslav Humpa, will be working on important components built on gnome-continuous, namely: gedit, geary, gnome-documents, gnome-logs, evince, seahorse, gnome-maps, gnome-calculator, gnome-photos, gnome-software, evolution, gnome-control-center, gnome-weather, nautilus, gnome-terminal, gnome-clocks and totem. The tests we envision for the various GNOME components will mainly be integration tests, which exercise the full application running in a pristine but complete GNOME environment with most dependencies, and without mocking services or back-ends.

 

What have I learnt so far?

Although I have had brief exposure to writing code to test applications, the discipline of writing a robust test is something I have learnt only now. I have realized that setting things up, and learning when to move on from systems you have spent time trying to set up, can take more time than the actual coding. (More about this in the following posts!)

 

Signing off with this funny quote about TDD:

Always code as if the person who writes tests for your program is a violent psychopath and knows where you live!

Statistical Programming Language R .. The dribbles of a new found awesomeness


My research in social computing, with the intention of mining user personalities, found me entangled in a quagmire of questions about how best to deal with statistical data. I had heard of R on numerous occasions, but unfortunately my curiosity never got the better of me: I let R be awesome in theory and concept and never explored it practically. Until recently, when I decided to test-run the alleged happiness the language showers on its programmers. And yes, so far (and I am not far into it), it is bliss.

I decided to jot down some quick notes for my beginner self and for others who might share my initially skeptical outlook towards a programming language as handy as this one.

Ways in which the R statistical programming language is different from any coding language I have worked with so far!
(Assumption: the reader has a broad idea of, and has worked with, languages like Python and C.)

It assigns values to variables with an arrow, as if pointing the value at the name (like labelling an edge in a graph): x <- y
There are commands like help(func_name) and example(func_name) to walk beginners through.
Running files from the interpreter does not need an import, but a source(file_name).

Vectors in R → Simply a list of values.
Creating a vector is done by c(x, y, z) wherein c → combine and x, y, z → list of values of the same type.
If, however, you get adventurous and use different types of values for x, y and z (1, TRUE, “three”, respectively), c converts all the values to a single mode (character) so that the vector can hold them all.
Python’s range function is as simple as 5:9 here, or seq(5, 9).
The good part about seq is not this; it is that it lets you choose the step between values, for example:

[sourcecode language="R"] 
> seq(5,9,0.5)
[1] 5.0 5.5 6.0 6.5 7.0 7.5 8.0 8.5 9.0
[/sourcecode]

One thing which might come across as confusing to the everyday programmer: many languages start array indices at 0, but R’s vector indices start at 1… (err, not good).
The good thing is that a vector grows automatically when you assign to an index beyond its current length.

Array/vector values can have labels/names without having to go through the trouble of declaring a struct or a multidimensional array :P Example:

[sourcecode language="R"]
> ranks <- 1:3
> names(ranks) <- c("first", "second", "third")
> ranks
 first second  third
     1      2      3
[/sourcecode]

(yieey!)

You can now access the vector elements not only by their indices but also by their names!

Magic (No more dying online for chart plotting! :D)

The barplot function draws a bar chart from a vector’s values. We’ll make a new vector and store it in the store variable.

> store <- c(4, 5, 1)
> names(store) <- c("England", "France", "Norway")
> barplot(store)

[Plot: bar chart of the store vector]

(Suggestion : Try barplot(1:1000))

Not just for a single vector, R can formulate x vs y plots too.
Stay tuned for some pretty ones after a little familiarity with the vector math..

Vector Math
Operations on vectors in R are like map operations on lists in Python!
Say:

> a <- c(1, 2, 3)
> a + 1
[1] 2 3 4
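
For comparison, roughly the same thing in Python (a quick sketch, not from the R tutorial) needs an explicit map or comprehension over the list:

[sourcecode language="python"]
# Python has no implicit vectorized arithmetic on plain lists,
# so we map over the values explicitly.
a = [1, 2, 3]
print([x + 1 for x in a])   # prints [2, 3, 4]
[/sourcecode]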

Plots with R ..

> x <- seq(1, 20, 0.1)
> y <- sin(x)
> plot(x,y)

[Plot: sin(x) plotted against x]

One might like to explore the NA options for vectors and vector math operations in R.

Matrices

The general form is matrix(value, rows, cols). For example:

> a <- 1:12
> matrix(a, 3, 4)
     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

To convert a vector to a matrix in place, use dim(vector_name) <- c(rows, cols).
The good part about accessing data in R is that it resembles index access in Python (except for the difference in the starting index).

Plotting maps (one of the best parts so far!)

I am going to write down some quick examples and their screens, which the Code School tutorial walks us through; these are fun enough to grasp the bigger picture and get one started.

Elevation map of a sandy beach:

> elevation <- matrix(1, 10, 10)
> elevation[4, 6] <- 0
> contour(elevation)
> persp(elevation, expand=0.2)

 

 

[Plots: contour() and persp() output for the elevation matrix]

(Tip: Play around with different values of expand here)

R also includes some sample data sets to play around with. One of these is volcano, a 3D map of a dormant New Zealand volcano.

> persp(volcano, expand=0.2)
> image(volcano)

[Plots: persp() and image() views of the volcano dataset]

More to come .. :)

“A language that doesn’t affect the way you think about programming is not worth knowing.” ― Alan Perlis

OPW.close()


Summary of my Accomplishments as a part of OPW’2013

The arrival of mid-September marks the wrap-up of my term as an OPW intern for GNOME Music, and also the beginning of my time with GNOME after the internship (yes! I plan to stick around for a long, long time!)

This post is an effort to condense my contributions and visions from the last 3 months into a report for the OPW wrap-up.

View for No Music Found

[Screenshot: the ‘No Music Found’ view]

Behavior of playbar toolbar after reaching end of queue
Implementing Repeat ALL functionality
Made my first port from gjs to python (widgets.py!)
Added RTL variants to repeat-shuffle icons
Implemented searchbar
Optimized code!
Learnt to deal with travis errors!
Optimized more code!

  • While I would have expected myself to be limited to just the implementation-related bugs, I soon discovered that you can indeed work on UI bugs even while running Ubuntu 13.04 (where UI fixes are a pain, which is partially why the screenshot credit for this post goes to sai :) ). And so I sent in color and width fixes... and, well, adhered to the mockup designs for Music!
  • I also learnt that not only do we want clean, indented code, but also an error-trace-free terminal, and so I cleaned up my error traces and made pyflakes and pep8 happy!
  • Porting to Python in one crazy week not only saw me rewriting a large amount of JavaScript code, but also introduced me to Travis. I could write a separate blog post describing how to make Travis happy!
  • It’s the one-line fixes, which prevent random crashes and resolve the “I don’t know why the hell this wouldn’t work” feeling, that are the toughest to get to:

I too had my share of those and fixed some :
Fixed grabbing media keys on window focus
Fixed disappearing error icon
Fixed scrollbar positioning
Fixed Attribute error
Fixed crashes

  • Taking care of i18n is important! – strings should be translatable.
  • Vision for the future! I intend to keep working on gnome-music and help implement future features like remote sourcing, completing search, etc.

Through this journey I have learnt that open source is not just about coding, but about collaborative learning and innovating. It’s about the thrill of solving your first bug, the magical week of meeting the people behind the “nicks”, making friends, improving your skills, experiencing lows... but cutting through them and rediscovering your highs.

It’s about joining a community (and a family) and never wanting to leave!

Signing off with some screens:

[Screenshots: gnome-music views]

My Diary for Git && GitHub


Working with Git for the first time in open source can indeed be overwhelming! The purpose of this blog post is to make that experience a little less troublesome by covering some of the initial steps which every novice should find useful. It’s pretty concise for a topic as elaborate as Git, yet I try to cover as many bits as I can :) The following has its origin in my notes on Git and GitHub from when I was taking my first steps into the world of collaborative development.

Git && GitHub ~ What’s the difference? Don’t they sound the same??

While Git is a revision control system, a tool to manage your source code history, GitHub is a hosting service for Git repositories. So, they are not the same thing: Git is the tool, GitHub the service for projects that use Git.

OK, good, so which one do I use?

There are usually two kinds of projects one needs to contribute patches to in open source (my experience is largely with GNOME):

First are the ones wherein your project has a repo hosted not on GitHub but on, say, the GNOME repos for existing projects.

In this case you don’t really have to worry about the GitHub part of this post (unless of course you want to learn it anyway ;) ). Here the git commands you would be looking for are (in order):

  • git clone <git-url-for-project-repo> : This command will create a copy of the repository you intend to work on in your local system (the first time). It is this repo you will be working on to test the changes you will later want to contribute to the codebase.
  • If you already have the repo cloned, then:

git fetch: This command updates your local copy of a remote branch. This operation never changes any of your own branches and is safe to do without changing your working copy.
git pull: This command does a “git fetch” followed by a “git merge”. It is what you would do to bring your repository up to date with a remote repository.

  • git add <name-of-file-edited-by-you> : This command “updates the index using the current content found in the working tree, to prepare the content staged for the next commit”; it also ensures that ignored files are not added by default.

Yeah, sounds fancy, BUT how do I check whether a file has been added or not, at any point in time, in my git repo?

    git status is what you want to run here.

That shows me the files I added; I need to see the changes too. Is git that smart?

Turns out, it is... git diff is what you are looking for here.

Continuing with the bullet flow..

  • git commit -m “<commit-message>” : This command records the changes in the files earlier indexed (staged) to be “committed” into the history of the git repo.

Thus, you will use git add to start tracking new files and to stage changes to already tracked files, then git status and git diff to see what has been modified and staged, and finally git commit to record your snapshot into your history.

So, I have my changes here, they work pretty well and I am proud of myself. But how do they get to the parent repo?!

Here is where we introduce patches. These are consolidated pieces of code which you attach to bug reports (here, in Bugzilla) or send to mailing lists, to share the modifications you have in your local repo and which you would like to see integrated into the main source code of the respective project.

  • git format-patch HEAD^ : This command generates patches suitable for being applied via “git am”. In the scope of this blog, I only describe the command for the case where a single commit on a clean repo is being turned into a patch.

Thus, with the above workflow, you will find that you have successfully taken your first baby steps with git. I’d highly recommend going through branching and many more concepts, which can be found in a great reference -> HERE

Thanks for the first workflow, but too bad, I have a project which requires me to issue PRs (whatever those are!)

The second kind of projects are those which are hosted on the web and have repositories on GitHub (e.g. gnome-music :) )

While the first steps, up until committing changes, are the same, the way a developer is required to communicate these changes to the co-developers varies a bit. After “git commit”, the workflow branches into the following steps:

If you have rights to change the main project repo (which is a little unlikely if you find this blog useful ;) ), then directly use “git push”

If not, you need to do the following:

  • Go to the “Fork” option on the extreme right of the webpage of the git repo and fork it.
  • A copy of the repo gets created under your GitHub account. This is where you will push the changes from your local copy in order to open a pull request against the main repo of the respective project. The URL for this will be referred to as <your-project-fork-url> in the rest of this post.
  • Since a direct push into the main repo won’t work, we first push into our fork on GitHub and then request the maintainers to merge the changes into the project repo.
  • git remote add <remote-name> <your-project-fork-url> : This command adds your fork as another remote for your local repository to synchronize with.
  • git push <remote-name> master : This command pushes the required changes to your forked repository.

So far so good, but where are the PRs!! (Pull Requests)

The final step involves going to the main project page. In the right-side column of the page is an option called “Pull Requests”; click it → click the green button at the top-right which says “New pull request” → select the base repo on the left side (i.e. the project repo) and your fork on the right → the required git diff is generated → click “Send Pull Request”.

(While choosing the repos you want to compare, one can choose repositories and branches other than just the defaults. Thankfully the GitHub UI is friendly enough to walk newbies through this!)

Voila! There you go; bug the people on the channel to make sure your enhancements are merged ;)

Other advanced tools you might want to look at – git rebase, git gui, git mergetool, git branch, git stash, git fetch, git pull, git merge (and more as you go :) ). Hope this helps in “un-horrifying” the initial troubles with git and GitHub.

Signing off: “It’s not that I know too much, it’s only that I know what hitting the ‘?’ key on any page in GitHub does ;)”