Python's Summer of Code 2015 Updates

August 04, 2015

Vipul Sharma

GSoC 2015: Coding Period

I've been working on implementing an advanced search feature to filter tickets by metadata such as effort, severity, difficulty, priority, author, tags and assignee. This way a user can filter tickets to specific requirements by filling in the advanced search form.
For this I created a new view, /+tickets/query, which contains a form for searching tickets.



It still requires some UI improvements, which I'll finish soon.

I've also been improving the implementation of the comment mechanism in the ticket modify view.
The new implementation supports Markdown syntax. A user can reply to comments, and a new comment is automatically posted whenever any metadata is updated.
For example, if the effort field is changed from None to 3, a comment reading "Update: Effort changed from None to 3" will be posted, behaving like any other comment.



by Vipul Sharma at August 04, 2015 01:26 PM

August 03, 2015

Goran Cetusic

Ubridge is great but...

In my last post I talked about ubridge and how it's supposed to work with GNS3. The problem is that users generally need root permissions because GNS3 is basically creating a new (veth) interface on the host. You can't do this without some kind of special permission. Ubridge solves this by using Linux capabilities and the setcap command. That's why "make install" runs the following when installing ubridge:

sudo setcap cap_net_admin,cap_net_raw=ep /usr/local/bin/ubridge

Setting permissions on a file *once* is not a problem; ubridge is already used for VMware and this doesn't really conflict with how GNS3 works. So in GNS3, ubridge should create the veth interfaces for Docker, not GNS3 itself. That's why the newest version of ubridge has some cool new features, like a hypervisor mode and the ability to create veth interfaces and move them to other namespaces. Here's a quick example:

1. Start ubridge in hypervisor mode on port 9000:

./ubridge -H 9000

2. Connect via telnet on port 9000 and ask ubridge to create a veth pair and move one interface to the container's namespace:

telnet localhost 9000
Connected to localhost.
Escape character is '^]'
docker create_veth guestif hostif
100-veth pair created: guestif and hostif
docker move_to_ns guestif 29326
100-guestif moved to namespace 29326

3. Bridge the hostif interface to a UDP tunnel:

bridge create br0
100-bridge 'br0' created
bridge add_nio_linux_raw br0 hostif
100-NIO Linux raw added to bridge 'br0'
bridge add_nio_udp br0 20000 30000
100-NIO UDP added to bridge 'br0'
bridge start br0
100-bridge 'br0' started
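The same hypervisor session can be scripted instead of typed into telnet. A minimal Python sketch (the helper name is ours, and it assumes ubridge is listening on the control port and answers each command with a status line):

```python
import socket

# Hypothetical helper: drives the ubridge hypervisor over its TCP
# control port. The command strings are the ones from the session above.
def send_commands(host, port, commands):
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rw")
        replies = []
        for cmd in commands:
            f.write(cmd + "\n")
            f.flush()
            # ubridge answers with lines like
            # "100-veth pair created: guestif and hostif"
            replies.append(f.readline().strip())
        return replies

commands = [
    "docker create_veth guestif hostif",
    "docker move_to_ns guestif 29326",
    "bridge create br0",
    "bridge add_nio_linux_raw br0 hostif",
    "bridge add_nio_udp br0 20000 30000",
    "bridge start br0",
]
# send_commands("localhost", 9000, commands)  # requires a running ubridge
```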

That's the general idea of how it should work, but I'm having some problems getting it to work on my Fedora installation in the docker branch. My mentors are being really helpful and are trying to debug this with me. I sent them the outputs from various components, so here they are to give you the wider picture.

Manual hypervisor check:
(gdb) run -H 11111
Starting program: /usr/local/bin/ubridge -H 11111
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
Hypervisor TCP control server started (port 11111).
Destination NIO listener thread for bridge0 has started
Source NIO listener thread for bridge0 has started
[New Thread 0x7ffff65b8700 (LWP 4530)]
[New Thread 0x7ffff6db9700 (LWP 4529)]
[New Thread 0x7ffff75ba700 (LWP 3576)]

GNS3 output:
bridge add_nio_linux_raw bridge0 gns3-veth0ext
bridge add_nio_udp bridge0 10000 10001
2015-08-01 13:11:22 INFO gcetusic-vroot-latest-1 has started
bridge add_nio_linux_raw bridge0 gns3-veth1ext
bridge add_nio_udp bridge0 10001 10000
2015-08-01 13:11:22 INFO gcetusic-vroot-latest-2 has started

Tcpdump output:
[cetko@nerevar gns3]$ sudo tcpdump -i gns3-veth0ext
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gns3-veth0ext, link-type EN10MB (Ethernet), capture size 262144 bytes
12:06:51.995942 ARP, Request who-has tell, length 28
12:06:52.998217 ARP, Request who-has tell, length 28

[cetko@nerevar gns3]$ sudo tcpdump -i gns3-veth1ext
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gns3-veth1ext, link-type EN10MB (Ethernet), capture size 262144 bytes

Netstat output:
udp        0      0         ESTABLISHED
udp        0      0         ESTABLISHED

by Goran Cetusic at August 03, 2015 08:55 AM

Aman Singh

Porting of function find_objects

I have made the first PR of my GSoC project: the port of the function find_objects in the measurement submodule of ndimage. This is one of the most basic functions of the ndimage module; it finds labelled objects in an image and returns slice objects which can be used on the image to extract those objects. In porting this function we had several problems to deal with. First, I had to make it run across the whole range of platforms, whether Solaris or Ubuntu 14.10, and on both big-endian and little-endian machines. So the first challenge was to manage byteswapped pointers. For this we used two NumPy APIs: PyArray_ISBYTESWAPPED() checks whether the data behind a given pointer is byteswapped, and copyswap() converts byteswapped data into native byte order so it can be dereferenced normally. Initially we used these functions directly, but that made the whole function look like plain C. So we decided to use another, higher-level NumPy API which is costlier than the original implementation (as it makes a copy of the whole array) but makes the implementation more Cythonic and easier to maintain. We have yet to benchmark this version; if the results come out good, we will stick with it.
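For readers unfamiliar with the function being ported, here is a small usage sketch of find_objects as it exists in SciPy today:

```python
import numpy as np
from scipy import ndimage

# A 6x6 image containing a single labelled 2x2 object
img = np.zeros((6, 6), dtype=int)
img[2:4, 2:4] = 1

# find_objects returns one tuple of slices per label
loc = ndimage.find_objects(img)[0]
print(loc)       # (slice(2, 4, None), slice(2, 4, None))
print(img[loc])  # the 2x2 block of ones
```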

Then there was another problem with functions taking a fused data type for their input arrays. If the variables using a fused type are declared in the same file, the fused type is easy to use. But writing a function whose fused type is determined by user input is a very tedious task. We finally found a way of implementing it, but it is quite complex and uses function pointers, which makes it horrible to maintain. We are still trying to find an alternative. In my next blog post I will explain how I used function pointers for fused data types where the type depends on the input data given by the user.

Link of the PR is

</Keep Coding>

by Aman at August 03, 2015 07:05 AM

Prakhar Joshi

Testing the transform

Hello everyone! The transform for filtering HTML is now ready, and the main task is to test it. For that purpose I have set up the whole test environment for my add-on, covering both unit tests and robot tests.

After setting up the environment, it was time to write unit tests for the transform we just created, to check that they all pass and that the transform works properly.

For the unit tests I first created a test class; in it I call the convert function from the transform, pass it a data stream as input, and check the output. After writing around 30-35 simple test cases, I ran them and they all passed.

Test cases ran successfully locally :-

Travis is also happy ;)

Yayayay!!! Finally the test cases are passing, so a milestone for the project is complete. The PR got merged and things are working as expected.

Now it's time to write more test cases, and to write a robot test that passes a whole HTML page through the script and checks the required output. I have already tried that manually on the script and it ran perfectly; now it's time to automate it so we can verify that the transform works with a single command.

The last two weeks were hectic as I was busy with the placement season, but it paid off: I got placed.

In the next blog post I will write about how I implemented the robot test for the transform. Stay tuned.

Hope you enjoy!!


by prakhar joshi at August 03, 2015 07:05 AM

Chienli Ma


In these two weeks connection_pattern and infer_shape were merged. I was supposed to implement the GPU optimization feature, but as I don't have a machine with an Nvidia GPU, I turned to the c_code method after several days.

Reusing the code of CLinker was our original idea. But things would not be that simple: CLinker generates code at function scale, which means it treats an OpFromGraph node as a function, yet we want to generate code at Op scale. Therefore we need to strip away some of the larger-scale code so that OpFromGraph.c_code() returns a 'node-like' c_code.

There’re to solution. One is to add a new feature to CLinker so that it can detect OpFromGraph node and do some special behavior. The other one is to avoid high level method like code_gen() in CLinker – only use auxilary method to assemble a ‘node like’ c_code. The later solution seems easier and I am working on it. CLinker is a very complicated class, I try to understand it and work out a workable code in the next two weeks.

Wish myself good luck~~!

August 03, 2015 03:04 AM

August 02, 2015

Jaakko Leppäkanga

Interactive TFR

For the last two weeks I've been working on interactive TFR and topography views. The interactivity comes from a rectangle selector, which can be used for selecting time and frequency windows of the TFR to draw a scalp view of the selected area. From the scalp view it is now possible to select channels in a similar manner to draw an averaged TFR of the selected channels. I also fixed a couple of bugs.
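The rectangle-selector interaction can be sketched with matplotlib's RectangleSelector widget (a generic sketch, not the actual MNE-Python code; the callback name and printed fields are ours):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

def on_select(eclick, erelease):
    # eclick/erelease are the mouse press/release events; their
    # xdata/ydata span the time/frequency window the user dragged out
    tmin, tmax = sorted([eclick.xdata, erelease.xdata])
    fmin, fmax = sorted([eclick.ydata, erelease.ydata])
    print("selected window:", tmin, tmax, fmin, fmax)

fig, ax = plt.subplots()
selector = RectangleSelector(ax, on_select)
```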

The following weeks I'll be mostly finalizing the code as all the planned features are pretty much done.

by Jaakko at August 02, 2015 07:54 PM

Shivam Vats

GSoC Week 10

I spent a good amount of time this week trying to make the series function more intelligent about which ring it operates on. The earlier strategy of using the EX ring proved slow in many cases. I had discussions with Kalevi Suominen, a SymPy developer, and we figured out the following strategy:

  • The user inputs a Basic expr. We use sring over QQ to get the starting ring.

  • We call individual functions by recursing on expr. If expr has a constant term, we create a new ring with the additional generators required (e.g. sin and cos in the case of rs_sin) and expand expr over that ring.

  • This means that each ring_series function can now add generators to the ring it receives, so that it can expand the expression.

This results in considerable speed-up as we do operations on the simplest possible ring as opposed to using EX which is the most complex (and hence slowest) ring. Because of this, the time taken by the series_fast function in faster_series is marginally more than direct function calls. The function doesn't yet have code for handling arbitrary expressions, which will add some overhead of its own.
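A minimal sketch of what a ring_series call looks like, using SymPy's public rs_sin on a ring over QQ (illustrative only; the actual series_fast code builds its ring via sring as described above):

```python
from sympy.polys.domains import QQ
from sympy.polys.rings import ring
from sympy.polys.ring_series import rs_sin

# Univariate polynomial ring over QQ; expand sin(x) as a series
# up to (but not including) x**8
R, x = ring('x', QQ)
p = rs_sin(x, x, 8)
```

Operating directly on a polynomial ring element like this is what avoids the overhead of the fully symbolic EX ring.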

Most of the extra time is taken by sring. The overhead is constant, however (for a given expression). So for series_fast(sin(a**2 + a*b), a, 10) the extra routines take about 50% of the total time (the function is 2-4x slower). For series_fast(sin(a**2 + a*b), a, 100), they take 2% of the total time and the function is almost as fast as on QQ.

There is, of course, scope for speedup in sring (as mentioned in its file). Another option is to minimise the number of calls to sring, if possible to just one (in the series_fast function).

In my last post I talked about the new solveset module that Harsh and Amit are working on. I am working with them and sent a patch to add a domain argument to the solveset function. It is pretty cool stuff, in that the solution is always guaranteed to be complete.

Next Week

I haven't yet been able to start porting the code to SymEngine as the Polynomial wrappers are not yet ready. Hopefully they will be done by next week. Till then, I will focus on improving series_fast and any interesting issues that come my way.

  • Write a fully functional series_fast. Profile it properly and optimise it.

  • Polynomial wrappers.

  • Document the functions and algorithms used in


August 02, 2015 12:00 AM

August 01, 2015

Udara Piumal De Silva

Synthesizable Conversion Output

After getting the automatic conversion from MyHDL to VHDL working, I tried to synthesize the output. There were several issues, so I changed my MyHDL script so that the converted output would not have errors. Beyond errors, I also tried to remove all the warnings so that the synthesis report would show 0 errors and 0 warnings. However, I found 3 lines that still need to be changed by hand. Those lines are shown in the figure below.

After manually making these 3 changes I get 0 errors and 0 warnings. The RTL view of the controller is as follows.

Now I am trying to use my controller in the test suite for the SDRAM on the Xula2 board.

by YUP at August 01, 2015 06:58 AM

Ambar Mehrotra
(ERAS Project)

GSoC 2015: 6th Biweekly Report

Hello Everyone! The last two weeks were quite exhausting and I couldn't get much work done due to some issues. I did, however, manage to make some very important bug fixes and some other feature additions.

Summary Deletion: A user can now delete summaries as well.
  • Navigate to the branch.
  • Select the summary you want to delete from the drop down menu in the summary tab.
  • Click on "Edit" menu --> "Delete Summary".
With this feature the GUI supports multiple summaries per branch, which the user can manage as needed. There is no separate summary modification feature for branches, since you can now both delete and add summaries.

Graphs for Branches: Another feature that I implemented over the past week was the graphs for branches. Earlier the graphs were supported only by the leaves, i.e., the data sources.
  • Click on a branch in the tree.
  • Go to the Graph tab.
  • Select the child whose data you want to view.
I avoided putting graphs for all the children in the same window since, for a branch with a large number of children, that would lead to clutter and chaos.

Bug Fixes: I mentioned in my earlier blog post how a user can create multiple summaries for a branch and view the required one. There were some serious problems with the implementation design of that feature, which took a lot of time to fix.

Video Tutorials: I also made some small tutorials to guide the user on how to use the GUI. The tutorials describe how to get started, how to add devices and branches, and what modifications you can make to them. Here is the link to the YouTube playlist for the tutorials: Habitat Tutorials Playlist.

I have planned to work on alarms from the following week. Happy Coding.


by Ambar Mehrotra at August 01, 2015 06:35 AM

Chad Fulton

Bayesian state space estimation in Python via Metropolis-Hastings

This post demonstrates how to use the Statsmodels `tsa.statespace` package along with PyMC to very simply estimate the parameters of a state space model via the Metropolis-Hastings algorithm (a Bayesian posterior simulation technique). Although the technique generalizes to any state space model available in Statsmodels, and also to any custom state space model, the provided example uses the local level model and the equivalent ARIMA(0,1,1) model.

by Chad Fulton at August 01, 2015 12:04 AM

July 31, 2015

Siddhant Shrivastava
(ERAS Project)

Telerobotics and Bodytracking - The Rendezvous

Hi! The past week was a refreshingly positive one. I was able to solve some of the insidious issues that were plaguing the efforts that I was putting in last week.

Virtual Machine Networking issues Solved!

I was able to use the Tango server across the Windows 7 Virtual Machine and the Tango Host on my Ubuntu 14.04 Host Machine. The proper Networking mode for this turns out to be Bridged Networking mode which basically tunnels a connection between the Virtual Machine and the host.

In bridged mode, the virtual machine exposes a virtual network interface with its own IP address and networking stack. In my case it was vmnet8, with an IP address different from the patterns used by the real Ethernet and WiFi network interface cards. Using bridged mode, I was able to keep the Tango device database server on Ubuntu and use Vito's Bodytracking device on Windows. The virtual machine didn't noticeably slow down communication across the Tango devices.

This image explains what I'm talking about -

Jive on Windows and Ubuntu machines

In bridged mode, I chose the IP Address on the host which corresponds to the Virtual Machine interface - vmnet8 in my case. I used the vmnet8 interface on Ubuntu and a similar interface on the Windows Virtual Machine. I read quite a bit about how Networking works in Virtual Machines and was fascinated by the Virtualization in place.

Bodytracking meets Telerobotics

With Tango up and running, I had to ensure that Vito's Bodytracking application works on the Virtual Machine. To that end, I installed the Kinect for Windows SDK, the Kinect Developer Tools, Visual Python, Tango-Controls, and PyTango. Setting up a new virtual machine mildly slowed me down, but it was a necessary step in the development.

Once I had that running, I was able to visualize the simulated Martian Motivity walk done at a training station in Innsbruck. The Bodytracking server created by Vito publishes events corresponding to the moves attribute, which is a list of the following two metrics -

  • Position
  • Orientation

I was able to read the attributes that the Bodytracking device was publishing by subscribing to change events on that attribute. This is done in the following way -

    while TRIGGER:
        # Subscribe to change events on the 'moves' attribute of the
        # Bodytracking device; 'cb' is the callback invoked on each change
        # (attribute name and event type reconstructed from the text above)
        moves_event = device_proxy.subscribe_event(
            'moves', PyTango.EventType.CHANGE_EVENT, cb, [])
        # Wait for at least REFRESH_RATE seconds for the next callback.

This ensures that the subscriber doesn't exhaust the polled attributes at a rate faster than they are published. In that unfortunate case, an EventManagerException occurs, which must be handled properly.

Note the cb argument: it refers to the callback function that is triggered when an event change occurs. The callback function is responsible for reading and processing the attributes.

The processing part in our case is the core of the Telerobotics-Bodytracking interface. It acts as the intermediary between Telerobotics and Bodytracking, converting the position and orientation values into linear and angular velocities that Husky can understand. I use a high-performance container from the collections module known as deque. It can act both as a stack and a queue via deque.append, deque.appendleft, deque.pop and deque.popleft.

To calculate velocity, I compute the differences between consecutive events and their corresponding timestamps. The events are stored in a deque, popped when necessary, and subtracted from the current event values.

For instance this is how linear velocity processing takes place -

  # Position and Linear Velocity Processing
  position_previous = position_events.pop()
  position_current = position
  linear_displacement = position_current - position_previous
  linear_speed = linear_displacement / time_delta
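The deque-based buffering described above can be sketched as a self-contained example (names and buffer size are illustrative, not the actual project code):

```python
from collections import deque

# Bounded buffers of recent events (illustrative size)
position_events = deque(maxlen=10)
time_events = deque(maxlen=10)

def linear_speed(position, timestamp):
    """Finite-difference speed against the previous buffered event."""
    if position_events:
        dp = position - position_events[-1]
        dt = timestamp - time_events[-1]
        speed = dp / dt if dt else 0.0
    else:
        speed = 0.0  # no previous event yet
    position_events.append(position)
    time_events.append(timestamp)
    return speed
```

Using a deque with maxlen keeps memory bounded while still giving O(1) access to the most recent event at either end.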

ROS-Telerobotics Interface

We are halfway through the Telerobotics-Bodytracking architecture. Once the velocities are obtained, we have everything we need to send to ROS. The challenge here is to use velocities which ROS and the Husky UGV can understand. Messages are published to ROS only when there is some change in the velocity. This has the added advantage of minimizing communication between ROS and Tango: when working with multiple distributed systems, it is always wise to keep the communication between them minimal. That's what I've aimed to do. I'll be enhancing the interface even further by adding trigger overrides in case of an emergency. The speeds currently are not ROS-friendly, so I am writing high-pass and low-pass filters to limit the velocities to what Husky can sustain. Vito and I will be refining the user step estimation and the corresponding robot movements respectively.

GSoC is only becoming more exciting. I'm certain that I will be contributing to this project after GSoC as well. The Telerobotics scenario is full of possibilities, most of which I've tried to cover in my GSoC proposal.

I'm back to my university now and it has become hectic but enjoyably challenging to complete this project. My next post will hopefully be a culmination of the Telerobotics/Bodytracking interface and the integration of 3D streaming with Oculus Rift Virtual Reality.


by Siddhant Shrivastava at July 31, 2015 07:53 PM

Vito Gentile
(ERAS Project)

Enhancement of Kinect integration in V-ERAS: Fifth report

This is my fifth report on what I have done for my GSoC project. If you don’t know what it is about and want to find more information, please refer to this page and this blog post.

After finalizing the user step estimation (which is still under test by Siddhant, and will probably require some refinements), during the last week I also helped Yuval with some scripts for analyzing users' data. What I did is mainly aggregate Kinect and Oculus Rift data and output them into a single file. This was made possible by using the timestamps attached to every line in the files, in order to synchronize data from different files.

I have not committed these files yet, because Franco has also worked on this stuff; he will probably commit everything soon.

The second (and more important, interesting and compelling) task that I have just finished implementing (although it will need some minor improvements) is hand gesture recognition. This feature is not included in PyKinect, but it ships with the Microsoft Kinect Developer Toolkit, as part of what they call Kinect Interactions. Because PyKinect is based on the C++ Microsoft Kinect API, I decided to implement this feature in C++ (so that I can use the API rather than reimplementing everything from scratch), and then port it to Python by means of ctypes.

I had never used ctypes before, and this entailed a lot of hard work, but at the end of the story I figured out how to use this powerful technology. Here are some links useful to anyone who wants to start using it:

The whole C++ module is stored in this directory of the ERAS repository, and its output is a .dll file named KinectGestureRecognizer.dll. This file needs to be placed in the same directory as the body tracker before executing it, along with KinectInteraction180_32.dll. The latter ships with the Developer Toolkit, and can be found in C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.8.0\bin.

Then I wrote a Python wrapper using ctypes; you can see it at this link. I had also tried ctypesgen to automatically generate the Python wrapper from the header files, but it didn't seem easy to use (mainly due to some issues with the Visual Studio C++ compiler).
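For readers new to ctypes, the basic load-and-call pattern looks like this (a generic example against the C math library, unrelated to the Kinect DLL):

```python
import ctypes
import ctypes.util

# Load the C math library and declare sqrt's signature so that ctypes
# converts the argument and return value correctly
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

Loading a Windows DLL follows the same pattern, with `ctypes.WinDLL` (stdcall) or `ctypes.CDLL` (cdecl) depending on the library's calling convention.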

I also had to change some settings to enable Kinect Interactions to work properly, which meant editing a couple of other scripts as well. For instance, I had to change the depth resolution, which is now 640×480 pixels, while it was 320×240.

Another script involved in the last commit is very useful for testing purposes, because it allows you to see an avatar of the user moving in 3D space. After adding gesture recognition, I decided to improve the avatar by coloring the hand joints red when the hand is closed.

I have also helped the IMS with their current mission, AMADEE, by setting up one of their machines (the Windows one). This way we can verify that what we have developed during these months works fine, by testing it in a real context with several users.

That’s it for the moment. I will update you soon for what about my GSoC project!


by Vito Gentile at July 31, 2015 04:35 PM

Yue Liu

GSOC2015 Students coding Week 10

week sync 14

Last week:

  • Rewrote all gadget graph parts using the networkx library.
  • Used the algorithms in networkx.algorithms instead of the previous code:
    • networkx.topological_sort() instead of ROP.__build_top_sort()
    • networkx.all_shortest_paths() instead of ROP.__dfs()
  • Filter all binaries as the rop-tools do, regardless of size. Important!!!
  • search_path() returns no more than 10 paths (shortest first), for performance.
  • Use the gadget's address as the graph node, not the Gadget object, for performance.
  • Updated doctests and the filter's regular expressions.
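The two networkx calls mentioned above work roughly like this (a toy graph standing in for the gadget graph, not actual gadget data):

```python
import networkx as nx

# A small DAG standing in for the gadget graph
G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 3), (1, 3)])

order = list(nx.topological_sort(G))         # a valid order, e.g. [1, 2, 3]
paths = list(nx.all_shortest_paths(G, 1, 3))
print(order, paths)                          # paths == [[1, 3]]
```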

It is much faster now. We can find gadgets and solve setRegisters() in less than 10 seconds for most binaries, including amoco's loading time of about 2 seconds.

The classifier is now the main bottleneck.

Next week:

  • Fix potential bugs.
  • Add AArch64 support.

July 31, 2015 12:00 AM

July 29, 2015

Sartaj Singh

GSoC: Update Week 8 and 9

It's been a long time since my last post. The holidays are now over and my classes have started, so the last few days have been hectic for me. Here are the highlights of my last two weeks with SymPy.


My implementation of the algorithm to compute formal power series is finally done, and as a result #9639 finally got merged. Thanks Jim and Sean for all the help. As #9639 brought in all the necessary changes, #9572 was closed.

In the SymPy master,

>>> fps(sin(x), x)
x - x**3/6 + x**5/120 + O(x**6)
>>> fps(1/(1-x), x)
1 + x + x**2 + x**3 + x**4 + x**5 + O(x**6)

On a side note, I was invited for Push access by Aaron. Thanks Aaron. :)


  • Improve test coverage of series.formal.
  • Start working on operations on Formal Power Series.

July 29, 2015 06:20 PM

Nikolay Mayorov

Robust nonlinear regression in scipy

The last feature I worked on is support for robust loss functions. The results are again available as an IPython Notebook; look here. (I'm struggling to get "&" to work correctly in LaTeX blocks, so the formatting is a bit off at the moment.) The plan is to provide this example as a tutorial for scipy.

by nickmayorov at July 29, 2015 07:48 AM

July 28, 2015

Michael Mueller

Week 9

This week I spent quite a bit of time on mixin column support for indices, where appropriate. After first moving the indices themselves from a Column attribute to a DataInfo attribute (accessed as, I moved most of the indexing code for dealing with column access/modifications to BaseColumnInfo for mixin use in methods like __getitem__ and __setitem__. Since each mixin class has to include proper calls to indexing code, mixins should set a boolean value _supports_indices to True in their info classes (e.g. QuantityInfo). As of now, Quantity and Time support indices, while SkyCoord does not since there is no natural order on coordinate values. I've updated the indexing testing suite to deal with the new mixins.

Aside from mixins (and general PR improvements like bug fixes), I implemented my mentors' suggestion to turn the previous static_indices context manager into a context manager called index_mode, which takes an argument indicating one of three modes to set for the index engine. These modes are currently:

  • 'freeze', indicating that table indices should not be updated upon modification, such as updating values or adding rows. After 'freeze' mode is lifted, each index updates itself based on column values. This mode should come in useful if users intend to perform a large number of column updates at a time.
  • 'discard_on_copy', indicating that indices should not be copied upon creation of a new column (for example, due to calls like "table[2:5]" or "Table(table)").
  • 'copy_on_getitem', indicating that indices should be copied when columns are sliced directly. This mode is motivated by the fact that BaseColumn does not override the __getitem__ method of its parent class (numpy.ndarray) for performance reasons, and so the method BaseColumn.get_item(item) must be used to copy indices upon slicing. When in 'copy_on_getitem' mode, BaseColumn.__getitem__ will copy indices at the expense of a reasonably large performance hit. One issue I ran into while implementing this mode is that, for special methods like __getitem__, new-style Python classes call the type's method rather than the instance's method; that is, "col[[1, 3]]" corresponds to something like "type(col).__getitem__(col, [1, 3])" rather than "col.__getitem__([1, 3])". I got around this by adjusting the actual __getitem__ method of BaseColumn in this context (and only for the duration of the context), but this has the side effect that all columns have changed behavior, not just the columns of the table supplied to index_mode. I'll have to ask my mentors whether they see this as much of an issue, because as far as I can tell there's no other solution.
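The special-method lookup issue mentioned in the last bullet is easy to demonstrate in plain Python: for new-style classes, `col[...]` always resolves `__getitem__` on the type, so patching the instance has no effect (a minimal standalone sketch, not astropy code):

```python
class Column:
    def __getitem__(self, item):
        return "type-level __getitem__"

col = Column()
# Attaching __getitem__ to the instance is silently ignored by col[...]
col.__getitem__ = lambda item: "instance-level __getitem__"
print(col[0])  # type-level __getitem__
```

This is why the context manager has to swap the method on the class itself, which affects every column, not just those of one table.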
At this point I see the PR as pretty much done, although I'll spend more time writing documentation (and making docstrings conform to the numpy docstring standard).

by Michael Mueller at July 28, 2015 06:21 PM

Sahil Shekhawat

GSoC Week 10

Hi everyone! My last post was made at a very bad time: I had lost 3 days of work, was lagging behind my timeline, and was in a much worse mood than I am now. Now I feel confident about this project because I am finally getting the hang of the dynamics (the only real issue).

July 28, 2015 05:55 PM

AMiT Kumar

GSoC : This week in SymPy #9

Hi there! It's been nine weeks into GSoC. Here is the progress for this week.

  Progress of Week 9

This week I worked on replacing solve with solveset or linsolve in the codebase. Here are the modules I have covered as of now:

@moorepants pointed out that I should not change the old solve tests, since people may otherwise break untested code. This argument is valid, so I have added equivalent tests for solveset wherever it is competent with solve.

There is some untested code in the codebase as well where solve is used. For those cases the replacement has not been done, as the tests would pass anyway since those lines are not tested. Instead I have added a TODO for those instances, to replace solve with solveset once those lines are tested.

Other Work

I also changed the output of linsolve when no solutions are found: earlier it threw a ValueError, and now it returns an EmptySet(), which is consistent with the rest of solveset. See PR #9726.
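The new behaviour can be illustrated with a small inconsistent system (the toy system here is my own example, not from the PR):

```python
from sympy import Matrix, S, linsolve, symbols

x, y = symbols('x y')
# Augmented matrix for the inconsistent system x + y = 1, x + y = 2
system = Matrix([[1, 1, 1], [1, 1, 2]])
print(linsolve(system, [x, y]))  # no solutions -> EmptySet
```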

from __future__ import plan Week #10:

This week I plan to merge my pending PRs on replacing old solve in the codebase with solveset, and to work on the documentation & the lambertw solver.

$ git log

  PR #9726 : Return EmptySet() if there are no solution to linear system

  PR #9724 : Replace solve with solveset in core

  PR #9717 : Replace solve with solveset in sympy.calculus

  PR #9716 : Use solveset instead of solve in sympy.sets

  PR #9717 : Replace solve with solveset in sympy.series

  PR #9710 : Replace solve with solveset in sympy.stats

  PR #9708 : Use solveset instead of solve in sympy.geometry

  PR #9587 : Add Linsolve Docs

  PR #9500 : Documenting solveset

That's all for now, looking forward to week #10. :grinning:

July 28, 2015 12:00 AM

July 27, 2015

Yask Srivastava

Admin and editor enhancements in wiki

Last week I was down with chicken pox. I had to take tons of medicines :\ .

Thankfully, I recovered.

Recently I worked on the restricted admin page. Only a super user has access to administrative functions. To become the super user, add the following line to the wiki configuration:

MoinMoin -
    # create a super user who will have access to administrative functions
    # acl_functions = u'+YourName:superuser'
    acl_functions = u'yask:superuser'

The screenshots after the changes:

Other apparent changes in the screenshots are the wider navbar and footer with a bluish background. This was done to give it a distinctive look compared to the basic theme.

Editor Changes

Currently MoinMoin has a dull editor. It looks more like a simple text box than an editor. A basic toolbar for Markdown, Creole, HTML, etc. is thus an essential feature we are missing.

I used the MarkItUp JavaScript plugin to quickly set up editor-like features for our Markdown wiki.

The beautiful thing about this plugin is that it enables us to easily modify toolbar setting by modifying set.js file. This enables us to make editor that works for multiple syntax languages.

MoinMoin - set.js
var mySettings = {
    onShiftEnter:   {keepDefault:false, replaceWith:'<br />\n'},
    onCtrlEnter:    {keepDefault:false, openWith:'\n<p>', closeWith:'</p>'},
    onTab:          {keepDefault:false, replaceWith:'    '},
    markupSet:  [
        {name:'Bold', key:'B', openWith:'(!(<strong>|!|<b>)!)', closeWith:'(!(</strong>|!|</b>)!)' },
        {name:'Italic', key:'I', openWith:'(!(<em>|!|<i>)!)', closeWith:'(!(</em>|!|</i>)!)' },
        {name:'Stroke through', key:'S', openWith:'<del>', closeWith:'</del>' },
        {separator:'---------------' },
        {name:'Bulleted List', openWith:'    <li>', closeWith:'</li>', multiline:true, openBlockWith:'<ul>\n', closeBlockWith:'\n</ul>'},
        {name:'Numeric List', openWith:'    <li>', closeWith:'</li>', multiline:true, openBlockWith:'<ol>\n', closeBlockWith:'\n</ol>'},
        {separator:'---------------' },
        {name:'Picture', key:'P', replaceWith:'<img src="[![Source:!:http://]!]" alt="[![Alternative text]!]" />' },
        {name:'Link', key:'L', openWith:'<a href="[![Link:!:http://]!]"(!( title="[![Title]!]")!)>', closeWith:'</a>', placeHolder:'Your text to link...' },
        {separator:'---------------' },
        {name:'Clean', className:'clean', replaceWith:function(markitup) { return markitup.selection.replace(/<(.*?)>/g, "") } },
        {name:'Preview', className:'preview', call:'preview'}
    ]
};


Thus we can easily load a different set.js file for each content type of the editor.

This is how it looks in the Markdown editor:

RogerHaase tested this today.

Things in editor aren’t fully functional yet as I am still in the process of integrating it.


  • QuickLinks
  • Error Notification Styling

Commits made last week:

  • 953a8cd Local history page themed

  • c6f8ed4 Fixed indicator color bug in usersetting

  • ecb9cfa Enhanced breadcrumbs in basic theme

  • 726692b stretched topnav and header

July 27, 2015 08:41 PM

Wei Xue

GSoC Week 8, 9 and Progress Report 2

Week 8 and 9

In weeks 8 and 9, I implemented DirichletProcessGaussianMixture. Its behavior looks similar to BayesianGaussianMixture: both of them can infer the best number of components. DirichletProcessGaussianMixture took slightly more iterations than BayesianGaussianMixture to converge on the Old Faithful data set, around 60 iterations.

If we solve the Dirichlet process mixture by Gibbs sampling, we don't need to specify the truncation level T; only the concentration parameter $\alpha$ is enough. On the other hand, with variational inference we still need to specify the maximal possible number of components, i.e., the truncation level.
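For context, the truncation level enters through the stick-breaking construction of the Dirichlet process (standard notation from the variational-inference literature, not from this project's code):

```latex
\pi_k(\mathbf{v}) = v_k \prod_{j=1}^{k-1} (1 - v_j),
\qquad v_k \sim \mathrm{Beta}(1, \alpha), \qquad k = 1, 2, \ldots
```

Variational inference truncates by fixing $q(v_T = 1) = 1$, so that $\pi_k = 0$ for $k > T$; a Gibbs sampler never instantiates the full infinite vector and hence needs no such truncation.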

At first, the lower bound of DirichletProcessGaussianMixture seemed a little strange. It was not always going up: when some clusters disappear, it goes down a little bit, then goes straight up. I thought this was because the estimation of the parameters is ill-posed when these clusters have fewer data samples than the number of features. I did the math derivation of Dirichlet process mixture models again, and found it was a bug in the coding of a very long equation.

I also finished the code of BayesianGaussianMixture for 'tied', 'diag' and 'spherical' precision.

My mentor pointed out the style problems in my code and docstrings. I knew the PEP 8 convention, but had no idea there was also a convention for docstrings, PEP 257. It took me a lot of time to fix the style problems.
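For illustration, a docstring in the PEP 257 style (one-line summary, blank line, then detail); this is a generic sketch, not code from the project:

```python
# A small function with a PEP 257-style docstring.
def normalize(values):
    """Scale values so they sum to 1.

    Returns a new list; the input sequence is not modified.
    """
    total = sum(values)
    return [v / total for v in values]

print(normalize([1, 1, 2]))  # [0.25, 0.25, 0.5]
```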

Progress report 2

During the last 5 weeks (since the progress report 1), I finished the

  1. GaussianMixture with four kinds of covariance
  2. Most test cases of GaussianMixture
  3. BayesianGaussianMixture with four kinds of covariance
  4. DirichletProcessGaussianMixture

Although I spent some time on some unsuccessful attempts, such as decoupling the observation models and hidden models into mixin classes and double-checking the DP equations, I did finish the most essential part of my project and did some visualization. In the following 4 weeks, I will finish all the test cases for BayesianGaussianMixture and DirichletProcessGaussianMixture, and do some optional tasks, such as different covariance estimators and incremental GMM.

July 27, 2015 02:59 PM

Lucas van Dijk

GSoC 2015: Arrows and networks update

A few weeks of development have passed, time for another progress report!

The past few weeks a lot of things have been added and/or improved:

  • Finished the migration to the new scenegraph and visual system.
  • Added another example on how to use the ArrowVisual API (a quiver plot)
  • Improved formatting and documentation of the ArrowVisual and Bezier curves code
  • Added some tests for the ArrowVisual

New scenegraph system

This pull request is almost ready to merge, and it's a huge update to the scenegraph and visuals system. The ArrowVisual is now completely ported to this new system.

New Quiver Plot Example

I've created a new example of how to use the ArrowVisual API: a quiver plot, shown below.

Quiver plot example

The arrows will always point towards the mouse cursor.

July 27, 2015 02:21 PM

Ziye Fan

[GSoC 2015 Week 7&8]

In Weeks 7 and 8, I mainly worked on the optimization of local_fill_sink. This is more complicated than I thought. For details, check the discussions here.

With the code of this PR, the time taken by "canonicalize" is less than with the original code (12 passes in 166 seconds --> 10 passes in 155 seconds, tested with the user's case on my computer).

But these changes make Theano fail on a test case, "test_local_mul_switch_sink". This test case checks whether the optimizer "local_mul_switch_sink" behaves correctly. Why does it fail? In short, in this test there is an fgraph like "(* (switch ...) (switch ...))"; if this optimizer is applied correctly, the mul op will sink under the switch op, so that expressions like "(* value_x NaN)" are avoided and we end up with the right result.

What stops the optimizer is the assert node inserted into the graph. What I am working on now is to make MergeOptimizer deal with nodes that have assert inputs; of course this is already another optimization. For the failed test case, one way would be to modify the code of local_mul_switch_sink to make it able to be applied through assert nodes, but this is not a good way because it is not general.

Please reply here or send me email if you have any idea or comments. Thanks very much.

by t13m ( at July 27, 2015 02:01 PM

[GSoC 2015 Week 5&6]

In Weeks 5 and 6, I was working on a new feature for debugging: displaying the names of rejected optimizers when doing "replace_and_validate()". The PR is here

To implement this feature in one place in the code, the Python library "inspect" is used. Any time validation fails, the code inspects the current stack frames to learn which optimizer is the caller and whether there is a "verbose" flag. Then the debugging information can be displayed.
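A minimal sketch of that inspect-based caller lookup (function names here are illustrative, not Theano's actual code):

```python
# Walk one frame up the stack to find the caller's name and read a
# 'verbose' local from its frame, mimicking the technique described.
import inspect

def caller_name():
    """Name of the function that called us."""
    return inspect.currentframe().f_back.f_code.co_name

def caller_verbose():
    """Read a 'verbose' local from the caller's frame, if present."""
    return inspect.currentframe().f_back.f_locals.get('verbose', False)

def my_optimizer():
    verbose = True
    return caller_name(), caller_verbose()

print(my_optimizer())  # ('my_optimizer', True)
```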

Besides, optimization of the local_fill_sink optimizer has also begun here; the main idea is to make "fill" sink in a recursive way, to be more efficient.

As for the inplace_elemwise optimizer, it was not merged because of its bad performance.

by t13m ( at July 27, 2015 01:29 PM

Mark Wronkiewicz

Paris Debriefing

C-day + 62

I just returned a few days ago from the MNE-Python coding sprint in Paris. It was an invigorating experience to work alongside over a dozen of the core contributors to our Python package for an entire week. Putting a face and personality to all of the GitHub accounts I have come to know would have made the trip worthwhile on its own, but it was also a great experience to participate in the sprint by making some strides toward improving the code library. Although I was able to have some planning conversations with my GSoC mentors in Paris (discussed later), my main focus for the week was on goals tangential to my SSS project.

Along with a bright student in my GSoC mentor’s lab, I helped write code to simulate raw data files.  These typically contain the measurement data directly as they come off the MEEG sensors, and our code will allow the generation of a raw file for an arbitrary cortical activation. It has the option to include artifacts from the heart (ECG), eye blinks, and head movement. Generating this type of data where the ground truth is known is especially important for creating controlled data to evaluate the accuracy of source localization and artifact rejection methods – a focus for many researchers in the MEEG field. Luckily, the meat of this code was previously written by a post-doc in my lab for an in-house project – we worked on extending and molding it into a form suitable for the MNE-Python library. 

The trip to Paris was also great because I was able to meet my main GSoC mentor and discuss the path forward for the SSS project. We both agreed that my time would be best spent fleshing out all the add-on features associated with SSS (tSSS, fine-calibration, etc.), which are all iterative improvements on the original SSS technique. The grand vision is to eventually create an open-source implementation of SSS that can completely match Elekta’s proprietary version. It will provide more transparency, and, because our project is open source, we have the agility to implement future improvements immediately since we are not selling a product subject to regulation. Fulfilling this aim would also add one more brick to the wall of features in our code library.

by Mark Wronkiewicz ( at July 27, 2015 05:23 AM

Keerthan Jaic

MyHDL GSoC Update

After a long winding road, MyHDL v0.9.0 has been released with many new features! Since the release, I’ve been focusing on major, potentially breaking changes to MyHDL’s core for v1.0. I’ve submitted a PR which lays the groundwork for streamlined AST parsing by centralizing AST accesses and reusing ast.NodeVisitors across the core decorators. While this PR is being reviewed, I’m carefully examining MyHDL’s conversion modules in order to centralize symbol table access. I have also been working on improving MyHDL’s conversion tests using pytest fixtures to enable isolation and parallelization.

July 27, 2015 12:00 AM

July 26, 2015

Rupak Kumar Das


Hello all! Let me summarize my progress.

In the last couple of weeks, I worked on a new feature for Ginga – Intensity Scaling. It basically scales the intensity values relative to the first image so that changes in brightness between the images can be measured. With a few small fixes from me, Eric and I have improved some parts of Ginga, like the Cuts plugin and auto-starting the MultiDim plugin according to whether the FITS file is multidimensional or not. I have improved the save-support branch by fixing all sorts of silly bugs. Although I could not get it to work with OpenCV, the save-as-movie code nevertheless works pretty well and fast. Now my focus is on the Slit and Line Profile plugins, which basically need some clean-up.

This is the last week of my summer vacation before my college opens. Although it doesn’t seem likely to be a problem, I will try to complete some important parts before then.


by Rupak at July 26, 2015 08:03 PM

Shivam Vats

GSoC Week 9

Like I said in my last post, this was my first week in college after summer vacation. I had to reschedule my daily work around my class timings (which are pretty arbitrary). Anyway, since I do not have a test anytime soon, things were manageable.

So Far

Ring Series

This week I worked on rs_series in PR 9614. As Donald Knuth succinctly said, 'Premature optimisation is the root of all evil', so my first goal was to write a function that uses ring_series to expand Basic expressions and works in all cases. That has been achieved. The new function is considerably faster than SymPy's series in most cases, e.g.:

In [9]: %timeit rs_series(sin(a)*cos(a) - exp(a**2*b),a,10)
10 loops, best of 3: 46.7 ms per loop

In [10]: %timeit (sin(a)*cos(a) - exp(a**2*b)).series(a,0,10)
1 loops, best of 3: 1.08 s per loop

However, in many cases the speed advantage is not enough, especially considering that all elementary ring_series functions are faster than SymPy's series functions by factors of 20-100. Consider:

In [20]: q
Out[20]: (exp(a*b) + sin(a))*(exp(a**2 + a) + sin(a))*(sin(a) + cos(a))

In [21]: %timeit q.series(a,0,10)
1 loops, best of 3: 2.81 s per loop

In [22]: %timeit rs_series(q,a,10)
1 loops, best of 3: 3.99 s per loop

In this case, rs_series is in fact slower than the current series method! This means that rs_series needs to be optimised, as expanding the same expression directly with rs_* functions is much faster:

In [23]: %timeit (rs_exp(x*y,x,10) + rs_sin(x,x,10))*(rs_exp(x**2+ x,x,10) + rs_sin(x,x,10))*(rs_sin(x,x,10) + rs_cos(x,x,10))
1 loops, best of 3: 217 ms per loop

I spent Friday playing with rs_series. Since the function is recursive, I even tried a functional approach (with map, reduce, partial, etc.). It was fun exploring Python's functional capabilities (which are quite decent, though Haskell's syntax is of course more natural). This didn't make much difference in speed. Code profiling revealed that rs_series makes too many function calls (which is expected), so I plan to try a non-recursive approach to see if that makes much of a difference. Other than that, I will also try to make it smarter so that it does not go through needless iterations (which it currently does in many cases).
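The recursive-to-iterative rewrite can be sketched generically with an explicit stack (a toy example with nested lists standing in for the expression tree; this is not rs_series itself):

```python
# Trading recursion for an explicit stack to cut function-call
# overhead; the tree here is just nested lists of ints.
def sum_tree_recursive(node):
    if isinstance(node, int):
        return node
    return sum(sum_tree_recursive(child) for child in node)

def sum_tree_iterative(node):
    total, stack = 0, [node]
    while stack:
        current = stack.pop()
        if isinstance(current, int):
            total += current
        else:
            stack.extend(current)
    return total

tree = [1, [2, 3], [4, [5, 6]]]
print(sum_tree_recursive(tree), sum_tree_iterative(tree))  # 21 21
```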


I had a discussion with Sumith about Polynomial wrappers. I am helping him with constructors and multiplication. We both want the basic Polynomial class done as soon as possible, so that I can start with writing series expansion of functions using it.

I also sent PR 562, which adds C wrappers for the Complex class. This will be especially helpful for the Ruby wrappers that Abinash is working on. The FQA is a nice place to read about writing C++/C wrappers, and for some side entertainment too.

Other than that, I also happened to have a discussion with Harsh about the new solveset he and Amit are working on. Their basic idea is that you always work with sets (input and output) and that the user can choose which domain to work in. The latter idea is quite similar to what SymPy's polys does. Needless to say, their approach is much more powerful than solvers'. I will be working with them.

Next Week

Targets for the next week are as modest as they are crucial:

  • Play with rs_series to make it faster.

  • Finish Polynomial wrappers and start working on series expansion.


July 26, 2015 12:00 AM

July 25, 2015

Isuru Fernando

GSoC Week 8 and 9

These two weeks Ondrej and I started adding support for different compilers.

I added support for MinGW and MinGW-w64. There were some documented, but not yet fixed, bugs in MinGW that I encountered: when including cmath, there were errors saying `_hypot` not defined and `off64_t` not defined. I added the flags `-D_hypot=hypot -Doff64_t=_off64_t` to fix this temporarily. With that, symengine was built successfully.

For the Python wrappers on Windows, after building there was a wrapper-not-found error, which was the result of the extension not being named pyd on Windows. Another problem was that the Python distribution's `libpython27.a` for x64 was compiled for a 32-bit architecture, and there were linking errors. I found some patched files at and the Python wrappers were built successfully. I also added continuous integration for MinGW using AppVeyor.

With MinGW, to install gmp all you had to do was run the command `mingw-get install mingw32-gmp`. For MinGW-w64, I had to compile gmp myself. For this, AppVeyor came in handy: I started a build in AppVeyor, stopped it, and then logged into the AppVeyor machine remotely using `remmina` (each VM is shut down after 40 minutes; within those 40 minutes you can log in and debug the build). I compiled gmp using msys and MinGW-w64 and then downloaded the binaries to my machine. For AppVeyor runs, these pre-compiled binaries of gmp were used to test MinGW-w64.

Ondrej and I worked together to make sure SymEngine could be built using MSVC in Debug mode. Since gmp couldn't be used out of the box in MSVC, we used MPIR project's sources which included visual studio project files. MPIR is a fork of GMP and provides MSVC support. We used it to build SymEngine in MSVC. Later I added support for Release mode and also added continuous integration for both build types and platform types.

Python extensions can also be built with MSVC. We are testing the Python extensions in Release mode only right now, because AppVeyor has only Python Release-mode libraries, and therefore building the extension in Debug mode gives an error saying python27_d.lib is not found.

I also improved the wrappers for Matrix by adding `__getitem__` and `__setitem__` so that the matrices can be used easily in Python.
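In pure Python, the mechanism looks like this (an illustrative sketch only; SymEngine's real wrappers are Cython around the C++ matrix classes):

```python
# Defining __getitem__/__setitem__ is what makes m[i, j] work:
# Python passes the subscript m[i, j] as the tuple (i, j).
class DenseMatrix:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self._data = [0] * (rows * cols)

    def __getitem__(self, key):
        i, j = key
        return self._data[i * self.cols + j]

    def __setitem__(self, key, value):
        i, j = key
        self._data[i * self.cols + j] = value

m = DenseMatrix(2, 2)
m[0, 1] = 5
print(m[0, 1])  # 5
```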

Another improvement to SymEngine was the automatic simplification of expressions like `0.0*x` and `x**0.0`. These expressions are not simplified in master, so I'm proposing a patch to simplify them to `0.0` and `1.0` respectively.

by Isuru Fernando ( at July 25, 2015 12:04 PM

Julio Ernesto Villalon Reina

Progress Report

Hi all,

During these last three weeks I have been mainly designing, implementing, debugging and running tests for the brain tissue classification code. It is tough because you have to think of all possible options, input arguments and noise models that the end user may end up trying. I have learned a lot during this period, mainly because I have never tested any code so thoroughly, and the importance of this kind of practice has also come to my attention. You realize how “fragile” your code can be and how easy it is to make it fail. This has been a true experience of how to develop really robust software.

Although my mentors and I decided not to move on to the validation phase until the testing phase is finished, we decided to refactor the code (in part to make it more robust, as I was saying before) and to cythonize some loops that were causing the code to be slow. This has also been an interesting learning experience because I am practically new to Cython, and the idea of writing Python-like code at the speed of C seems fascinating to me.

I am planning to post more detailed information about the testing over the weekend. I will be working on finalizing this phase of the project over the next couple of days and then jump directly to the validation step.

Keep it up and stay tuned!

by Julio Villalon ( at July 25, 2015 08:28 AM

Prakhar Joshi

Updating the Transform

Hello everyone, it's been quite a long time since I updated this blog, so here is the work I have done in the past few weeks. As mentioned in the last post, I was able to create a new transform script using lxml, and I described how I implemented that script.

After the code was reviewed, Jamie (my mentor) pointed out a very important bug: I was comparing regular expressions against strings and not against tags. So what I do now is convert the whole input string into a tree, iterate through every node, and replace or remove the unwanted tags as required.

How to work with tree and replace tags ?

So basically what I did is take the whole document as a string and parse it with HTMLParser, which converts the whole string into a tree-like structure. In this tree we have a parent node and child nodes, and we iterate through the whole tree and manipulate the nodes (or better, call them tags).

In the lxml tree structure the nodes are elements (or tags), and we can iterate over the tree, check each node and make manipulations accordingly. We can also get the content between the tags using the tag.text attribute.

So what I did here is first create a tree like this:

    parser = etree.HTMLParser()
    tree = etree.parse(StringIO(html), parser)

This creates a tree; the tree variable is an object, and printing it shows the address where the tree is stored.

Now that we have a tree object, what we need is to iterate over the tree, which is easily done like this:

    for element in tree.getiterator():
        if element.tag in ('h3', 'h4', 'h5', 'h6', 'div'):
            element.tag = 'p'
        if element.tag in ('html', 'body', 'script'):
            etree.strip_tags(tree, element.tag)

So this way we can iterate over the nodes and we can play with tags.
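A stdlib analogue of this tag-rewriting walk, using xml.etree.ElementTree so it runs without lxml (the real transform uses lxml.etree and its HTMLParser):

```python
# Demote headings and divs to plain paragraphs by rewriting tags
# while iterating over the parsed tree.
import xml.etree.ElementTree as ET

html = "<body><h3>Title</h3><div>some text</div><p>kept</p></body>"
root = ET.fromstring(html)
for element in root.iter():
    if element.tag in ('h3', 'h4', 'h5', 'h6', 'div'):
        element.tag = 'p'
out = ET.tostring(root, encoding='unicode')
print(out)  # <body><p>Title</p><p>some text</p><p>kept</p></body>
```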

Why the cleaner function ?

After that we convert the whole tree back into a string and pass it to the cleaner function, which cleans the HTML by removing nasty tags, keeping only valid tags, and returning a string of filtered HTML. Since the cleaner function takes a string, we first convert the tree into a string like this:

    result = etree.tostring(tree.getroot(), pretty_print=True, method="html")

After that we pass the result to the cleaner function, where the string is cleaned and filtered like this:

    NASTY_TAGS = frozenset(['style', 'script', 'object', 'applet', 'meta', 'embed'])
    cleaner = Cleaner(kill_tags=NASTY_TAGS, page_structure=False, safe_attrs_only=False)

    safe_html = fragment_fromstring(cleaner.clean_html(result))

Here we have also created the fragments of the cleaned string.

Why fragment the clean HTML string?
We fragment the string so that we can remove the additionally added parent tag, which is usually created when we convert the string into a tree; it gets appended and produces false results. So we create fragments of the single string and then convert it back into a string. It seems quite clumsy to create fragments and convert them back to a string, but this is the way I found to remove the extra tags.

The final string we obtain after this is the output of the transform, and it seems all test cases are passing.

Yayaya!! It's always good to see all the test cases passing. Hopefully you liked reading this. Next time I will describe more about the testing part of the transform.


by prakhar joshi ( at July 25, 2015 07:55 AM

Stefan Richthofer

JyNI status update

While the milestone for the midterm evaluation focused on building the mirrored reference graph, detecting native reference leaks and cleaning them up, I have focused on keeping the reference graph up to date since then. I also turned the GC-demo script into gc-unittests, see

32 bit (Linux) JNI issue

For some reason test_JyNI_gc fails on 32-bit Linux due to what seems to be a JNI bug: JNI does not properly pass some debug info to the Java side and causes a JVM crash. I spent over a day desperately trying several workarounds and double- and triple-checked correct JNI usage (the issue would also occur on 64-bit Linux if something were wrong here). The issue persists for Java 7 and 8, building JyNI with gcc or clang. The only way to avoid it seems to be passing less debug info to the Java side in JyRefMonitor.c. Strangely, the issue also persists when the debug info is passed via a separate method call or object. However, it would be hard or impossible to turn this into a reasonably reproducible JNI bug report. For now I have decided not to spend more time on this issue and to remove the debug info right before the alpha3 release. Until that release the gc-unittests are not usable on 32-bit Linux. Maybe I will investigate this issue further after GSoC and try to file an appropriate bug report.

Keeping the gc reference-graph up to date

I went through the C source code of various CPython builtin objects and identified all places where the gc reference graph might be modified. I inserted update code in all these places, though it has only been explicitly tested for PyList so far. All unittests and also the Tkinter demo still run fine with this major change.

Currently I am implementing detection of silent modifications of the reference graph. While the update code covers all JyNI-internal calls that modify the graph, there might be modifications via macros performed by extension code. To detect these, JyGC_clearNativeReferences in gcmodule is being enhanced with code that checks the objects-to-be-deleted for consistent native reference counts. All counts should be explainable within this subgraph; if there are unexplainable reference counts, this indicates unknown external links, probably created by an extension via some macro, e.g. PyList_SET_ITEM. In this case we update the graph accordingly. Depending on the object type we might have to resurrect the corresponding Java object. I hope to get this done over the weekend.

by Stefan Richthofer ( at July 25, 2015 04:15 AM

Manuel Paz Arribas

Progress report

My work over the last 3 weeks has been mainly on the container class for the cube background models (X, Y, energy). The class is called CubeBackgroundModel and the code has recently been merged into the master branch of Gammapy.

The class has been remodeled after a few code reviews of its first draft from the post of Friday, June 19, 2015. For instance, it can read/write two different kinds of FITS formats:
  • FITS binary tables: more convenient for storage and data analysis.
  • FITS images: more convenient for visualization, using for instance DS9.
For the record, FITS is a standard data format widely used in astronomy.

In addition, the plotting methods have also been simplified to allow a more customizable API for the user. Now only one plot is returned by each method, and the user can easily combine the plots as desired with only a few lines of matplotlib code.

A new function called make_test_bg_cube_model has also been added to the repository for creating dummy background cube models. This function creates a background following a 2D symmetric Gaussian model for the spatial coordinates (X, Y) and a power law in energy. The Gaussian width varies with energy from sigma/2 to sigma. An option is also available to mask 1/4 of the Gaussian images. This option will be useful in the future, when testing the still-to-come reprojection methods, which are necessary for applying the background model to the analysis data in order to subtract the background. Since the models are produced in the detector coordinate system (a.k.a. the nominal system), they need to be projected to sky coordinates (i.e. Galactic or RA/Dec) before they can be applied to the data.
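As an illustration of the kind of cube such a tool produces, here is a toy NumPy construction (shapes and parameter values are made up for illustration; this is not Gammapy's make_test_bg_cube_model API):

```python
# Toy background cube: a 2D symmetric Gaussian in (X, Y) whose width
# grows with energy, times a power law in energy.
import numpy as np

nx = ny = 21
n_energy = 4
x = np.linspace(-5.0, 5.0, nx)         # detector X, deg
y = np.linspace(-5.0, 5.0, ny)         # detector Y, deg
energy = np.logspace(-2, 2, n_energy)  # 0.01 TeV .. 100 TeV
X, Y = np.meshgrid(x, y)

sigma = 2.0                                       # deg, width at highest energy
widths = np.linspace(sigma / 2, sigma, n_energy)  # sigma/2 -> sigma
index = 2.7                                       # hypothetical power-law index

cube = np.array([
    e ** -index * np.exp(-(X**2 + Y**2) / (2 * w**2))
    for e, w in zip(energy, widths)
])
print(cube.shape)  # (4, 21, 21): (energy, Y, X)
```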

The work on the CubeBackgroundModel class has also triggered the development of other utility functions, for instance to create WCS coordinate objects for describing detector coordinates in FITS format or a converter of Astropy Table objects to FITS binary table ones.

Moreover, a test file with a dummy background cube produced with the make_test_bg_cube_model tool has been placed in the gammapy-extra repository here for testing the input/output (read/write) methods of the class.

This work has also triggered some discussions about some methods and classes in both the Astropy and Gammapy repositories. As a matter of fact, I am currently solving some of them, especially for the preparation of the release of the Gammapy 0.3 stable version in the coming weeks.

In parallel I am also currently working on a script that should become a command-line program to produce background models using the data of a given gamma-ray astronomy experiment. The script is still on a first draft version, but the idea is to have a program that:
  1. looks for the data (all observations of a given experiment)
  2. filters out the observations taken on known sources
  3. divides the data into groups of similar observation conditions
  4. creates the background models and stores them to file
In order to create the model, the following steps are necessary:
  • stack events and bin them (fill a histogram)
  • apply livetime correction
  • apply bin volume correction
  • smooth histogram (not yet implemented)

A first glimpse of such a background model is shown in the following animated image (please click on the animation for an enlarged view):

The movie shows a sequence of 4 images (X, Y), one for each energy bin slice of the cube. The image spans 10 deg in each direction, and the energy binning is defined between 0.01 TeV and 100 TeV, equidistant in logarithmic scale. The model is produced for a zenith angle range between 0 deg and 20 deg.

There is still much work to do in order to polish the script and move most of the functionality into Gammapy classes and functions, until the script is only a few high-level calls to the necessary methods in the correct order.

by mapaz ( at July 25, 2015 03:30 AM

Udara Piumal De Silva

Refreshing Completed

With the help of my mentor Christopher Felton, I was able to use traceSignals to detect the errors in refreshing. Now the controller goes into the REFSHROW state every 782 cycles and performs an AUTO_REFRESH, which fulfils the refreshing requirement of the SDRAM.

How does refreshing happen...

refTimer_r counts the 782 cycles, and when it reaches zero, rfshCntr_r is incremented. Sometimes the SDRAM can be in a state like read-in-progress, write-in-progress or activate-in-progress. In that case the controller cannot issue an immediate AUTO_REFRESH command. This is the reason for keeping the rfshCntr_r register: it keeps track of the refreshes needed, and when the SDRAM is in the idle state the controller starts issuing AUTO_REFRESH commands until rfshCntr_r is zero.
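The bookkeeping described above can be sketched in plain Python (illustrative only; the real controller is a MyHDL hardware design, and the names merely mirror its refTimer_r / rfshCntr_r registers):

```python
# Count cycles between required refreshes; owe an AUTO_REFRESH when
# the timer expires, and issue it only once the SDRAM is idle.
REF_INTERVAL = 782  # cycles between required refreshes

class RefreshBookkeeper:
    def __init__(self):
        self.ref_timer = REF_INTERVAL
        self.rfsh_cntr = 0  # AUTO_REFRESH commands still owed

    def tick(self, sdram_idle):
        """Advance one cycle; return True if an AUTO_REFRESH is issued."""
        self.ref_timer -= 1
        if self.ref_timer == 0:
            self.ref_timer = REF_INTERVAL
            self.rfsh_cntr += 1          # a refresh is now owed
        if sdram_idle and self.rfsh_cntr > 0:
            self.rfsh_cntr -= 1          # issue it while idle
            return True
        return False

b = RefreshBookkeeper()
busy = [b.tick(sdram_idle=False) for _ in range(782)]
print(any(busy), b.rfsh_cntr)   # False 1 -> refresh owed but deferred
print(b.tick(sdram_idle=True))  # True    -> issued once idle
```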

The figure below shows a refresh being performed once refTimer reaches zero. Here the refresh happens immediately because the SDRAM is in an idle state.

What next...

I have completed the basic functionality. The remaining work is to do the conversion to Verilog or VHDL and then verify the functionality in hardware.

by YUP ( at July 25, 2015 02:54 AM

July 24, 2015

Siddhant Shrivastava
(ERAS Project)

Virtual Machines + Virtual Reality = Real Challenges!

Hi! For the past couple of weeks, I've been trying to get a lot of things to work. Linux and computer networks seem to like me so much that they ensure my attention throughout the course of this program. This time it was dynamic libraries, virtual machine networking, Docker containers, head-mounted display errors and so on.

A brief discussion of these:

Dynamic Libraries, Oculus Rift, and Python Bindings

Using the open-source Python bindings for the Oculus SDK available here, Franco and I ran into a problem -

ImportError: <root>/oculusvr/linux-x86-64/ undefined symbol: glXMakeCurrent

    To get to the root of the problem, I tried to list all dependencies of the shared object file -

       =>  (0x00007ffddb388000)
       => /lib/x86_64-linux-gnu/ (0x00007f6205e1d000)
       => /lib/x86_64-linux-gnu/ (0x00007f6205bff000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f62058ca000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f62056c0000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f62053bc000)
       => /lib/x86_64-linux-gnu/ (0x00007f62050b6000)
       => /lib/x86_64-linux-gnu/ (0x00007f6204ea0000)
       => /lib/x86_64-linux-gnu/ (0x00007f6204adb000)
      /lib64/ (0x00007f6206337000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f62048bc000)
       => /lib/x86_64-linux-gnu/ (0x00007f62046b8000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f62044a6000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f620429c000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f6204098000)
       => /usr/lib/x86_64-linux-gnu/ (0x00007f6203e92000)
      undefined symbol: glXMakeCurrent  (./
      undefined symbol: glEnable  (./
      undefined symbol: glFrontFace (./
      undefined symbol: glDisable (./
      undefined symbol: glClear (./
      undefined symbol: glGetError  (./
      undefined symbol: glXDestroyContext (./
      undefined symbol: glXCreateContext  (./
      undefined symbol: glClearColor  (./
      undefined symbol: glXGetCurrentContext  (./
      undefined symbol: glXSwapBuffers  (./
      undefined symbol: glColorMask (./
      undefined symbol: glBlendFunc (./
      undefined symbol: glBindTexture (./
      undefined symbol: glDepthMask (./
      undefined symbol: glDeleteTextures  (./
      undefined symbol: glGetIntegerv (./
      undefined symbol: glXGetCurrentDrawable (./
      undefined symbol: glDrawElements  (./
      undefined symbol: glTexImage2D  (./
      undefined symbol: glXGetClientString  (./
      undefined symbol: glDrawArrays  (./
      undefined symbol: glGetString (./
      undefined symbol: glXGetProcAddress (./
      undefined symbol: glViewport  (./
      undefined symbol: glTexParameteri (./
      undefined symbol: glGenTextures (./
      undefined symbol: glFinish  (./

    This clearly implied one thing - libGL was not being linked. My task then was to somehow link libGL to the SO file that came with the Python Bindings. I tried out the following two options -

    • Creating my own bindings: I tried to regenerate the SO file from the Oculus C SDK using the amazing Python Ctypesgen. This method didn't work out, as I couldn't resolve the header files required by Ctypesgen. Nevertheless, I learned how to create Python bindings, which is a huge take-away from the exercise. I had always wondered how Python interfaces are created out of programs written in other languages.
    • Making the existing shared object file believe that it is linked to libGL: So here's what I did - after a lot of searching, I found the nifty little environment variable that worked wonders for our Oculus development - LD_PRELOAD

    As these articles delineate, the power of LD_PRELOAD is that it makes it possible to force-load a dynamically linked shared object into memory. If you set LD_PRELOAD to the path of a shared object, that file will be loaded before any other library (including the C runtime). For example, to run ls with your special malloc() implementation, do this:

    $ LD_PRELOAD=/path/to/my/ /bin/ls

    Thus, the solution to my problem was to place this in the .bashrc file -


    This allowed Franco to create the Oculus Test Tango server and ensured that our Oculus Rift development efforts continue with gusto.

    ROS and Autonomous Navigation

    On the programming side, I've been playing around with actionlib to interface Bodytracking with Telerobotics. I have created a simple walker script which gives the robot a certain degree of autonomy: it avoids collisions with objects by overriding human teleoperation commands. An obstacle could be a Martian rock in a simulated environment, or an uneven terrain with a possible ditch ahead. To achieve this, I use the LaserScan message and check the range readings at frequent intervals. The LIDAR readings ensure that the robot is in one of the following states -

    • Approaching an obstacle
    • Going away from an obstacle
    • Hitting an obstacle

    The state can be inferred from the LaserScan Messages. A ROS Action Server then waits for one of these events to happen and triggers the callback which tells the robot to stop, turn and continue.
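    The three-state inference can be sketched as a small pure-Python helper (a sketch of the idea only; the threshold value and function name are my own, and the actual node reads minimum range values from sensor_msgs/LaserScan messages):

    ```python
    # Classify the robot's state from two consecutive minimum-range readings,
    # mirroring the three states listed above. The threshold is illustrative.
    HIT_THRESHOLD = 0.3  # metres: closer than this counts as hitting an obstacle

    def infer_state(prev_min_range, curr_min_range):
        """Return 'hitting', 'approaching' or 'receding' for an obstacle."""
        if curr_min_range < HIT_THRESHOLD:
            return 'hitting'
        if curr_min_range < prev_min_range:
            return 'approaching'
        return 'receding'

    print(infer_state(2.0, 1.5))   # approaching
    print(infer_state(1.5, 0.2))   # hitting
    print(infer_state(1.5, 1.8))   # receding
    ```

    A ROS Action Server as described above would call such a classifier on each scan and trigger the stop/turn/continue callback when the state changes.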

    Windows and PyKinect

    In order to run Vito's bodytracking code, I needed a Windows installation. After running into problems with a 32-bit Windows 7 virtual machine image I had, I needed to reinstall and use a 64-bit virtual machine image. I installed all the dependencies to run the bodytracking code. I am still stuck on the networking modes between the virtual machine and the host machine. The TANGO host needs to be configured correctly to allow the TANGO_MASTER to point to the host and the TANGO_HOST to the virtual machine.

    Docker and Qt Apps

    Qt applications don't seem to work with sharing the display in a Docker container. The way out is to create users in the Docker container which I'm currently doing. I'll enable VNC and X-forwarding to allow the ROS Qt applications to work so that the other members of the Italian Mars Society can use the Docker container directly.

    Gazebo Mars model

    I took a brief look at the 3D models of Martian terrain available for free use on the Internet. I'll be trying to obtain the Gale Crater region and represent it in Gazebo to drive the Husky on a Martian terrain.

    Documentation week!

    In addition to strong-arming my CS concepts against the Networking and Linux issues that loom over the project currently, I updated and added documentation for the modules developed so far.

    Hope the next post explains how I solved the problems described in this post. Ciao!

    by Siddhant Shrivastava at July 24, 2015 07:53 PM

    Abraham de Jesus Escalante Avalos

    A glimpse into the future (my future, of course)

    Hello again,

    Before I get started I just want to let you know that in this post I will talk about the future of my career and moving beyond the GSoC so this will only be indirectly related to the summer of code.

    As you may or may not know, I will start my MSc in Applied Computing at the University of Toronto in September (2015, in case you're reading this in the future). Well, I have decided to steer towards topics like Machine Learning, Computer Vision and Natural Language Processing.

    While I still don't know what will end up being my main area of focus, nor where this new journey will take me, I am pretty sure it will have to do with Data Science and Python. I am also sure that I will keep contributing to SciPy, and most likely start contributing to other related communities like NumPy, pandas and scikit-learn. You could say that the GSoC has had a positive impact by helping me find areas that make my motivation soar, and by introducing me to people who have been working in this realm for a very long time and know a ton of stuff that makes me want to pick up a book and learn.

    In my latest meeting with Ralf (my mentor), we had a discussion regarding the growing scope of the GSoC project and my concern about dealing with all the unforeseen and ambiguous details that arise along the way. He seemed oddly pleased when I proposed to keep in touch with the project even after the "pencils down" date for the GSoC. He then explained that this is the purpose of the summer of code (to bring together students and organisations) and their hope when they choose a student to participate is that he/she will become a long-term active member of the community, which is precisely what I would like to do.

    I have many thanks to give and there is still a lot of work to be done with the project so I will save the thank you speech for later. For now I just want to say that this has been a great experience and I have already gotten more out of it than I had hoped (which was a lot).

    Until my next post,

    by Abraham Escalante ( at July 24, 2015 07:42 PM

    Rafael Neto Henriques

    [RNH post #9] Progress Report (24th of July)

    Progress is going as planned in my mid-term summary :). 

    A short summary of what was done on the last weeks is described in the points below:

    • The functions to fit the diffusion kurtosis tensor are already merged into the main Dipy repository (you can see the merged work here).
    • The functions to extract kurtosis statistics were submitted in a separate pull request. Great advances on the validation of these functions were made according to the next steps pointed out in the mid-term summary. In particular, I completed the comparisons between the analytical solutions and simpler numerical methods (for nice figures of these comparisons, see the subsection "Advances on the implementation of DKI statistics" below).
    • While waiting for the review of the work done on kurtosis tensor fitting and statistic estimation, I started working on functions to estimate the direction of brain white matter fibers from diffusion kurtosis imaging. This work is happening in a newly created pull request. For the mathematical framework of this implementation and some nice figures of the work done so far, see the subsection "DKI based fiber estimates" below.

    Advances on the implementation of DKI statistics

    Improving the speed performance of functions - As mentioned in the last points of my mid-term summary, some features were added to the functions for estimating kurtosis statistics to reduce processing time. At the time of the mid-term evaluation, I was planning to add an optional input to receive a mask indicating the relevant voxels to process. During the last weeks, however, I decided that a cleverer way to avoid processing unnecessary background voxels was to create a subfunction that automatically detects these voxels (by detecting where all diffusion tensor elements are zero) and excludes them. In addition, I also vectorized parts of the code (for details on this, see the discussion on the relevant pull request page). Currently, reprocessing the kurtosis measures shown in Figure 1 of my post #6 takes around:

    • Mean Kurtosis - 14 mins
    • Radial Kurtosis - 7 mins
    • Axial Kurtosis - 1 min

    Using ipython profiling techniques, I also detected the parts of the code that are most computationally demanding. I have been discussing with members of my mentoring organization the possibility of converting this function to Cython.
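    The automatic background detection described above can be sketched with NumPy (a sketch only, not Dipy's actual implementation; the array layout, with the six unique tensor elements in the last axis, is an assumption):

    ```python
    import numpy as np

    def background_mask(dt_params):
        """Mark voxels whose diffusion tensor elements are all zero.

        dt_params: array of shape (..., 6) holding the six unique
        diffusion tensor elements per voxel (assumed layout).
        Returns a boolean array that is True for voxels worth processing.
        """
        return ~np.all(dt_params == 0, axis=-1)

    # Toy volume: 2x2 voxels, only one of which contains fitted tensor values.
    vol = np.zeros((2, 2, 6))
    vol[0, 0] = [1.0e-3, 0, 0, 1.0e-3, 0, 1.0e-3]  # one "brain" voxel
    mask = background_mask(vol)
    print(mask.sum())  # 1 voxel to process out of 4
    ```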

    Comparison between mean kurtosis analytical and approximated solutions. Mean Kurtosis (MK) corresponds to the average of the kurtosis values along all spatial directions. Therefore, an easy way to estimate MK is to sample directional kurtosis values in evenly sampled directions and compute their average. This procedure is very easy to implement, but it has some pitfalls, such as requiring a sufficient number of direction samples and being dependent on the performance of the direction sampling algorithm. Fortunately, these pitfalls can be overcome using an analytical solution proposed by Tabesh and colleagues.

    In previous steps of my GSoC project, I had already implemented the MK estimation functions according to the analytical solution. However, I decided to also implement the directional average, since it could be useful for evaluating the analytical approach. In the figure below, I run this numerical estimate for different numbers of directions, to analyse how many directions are required for the directional kurtosis average to approach the analytical mean kurtosis solution.

    Figure 1 - Comparison between the MK analytical (blue) and numerical solutions (red). The numerical solution is computed relative to a different number of direction samples (x-axis). 

    From the figure above, we can see that the numerical approach never reaches a stable solution. In particular, large deviations are still observed even when a large number of directions is sampled. After a careful analysis, I noticed that this was caused by imperfections in the sphere dispersion strategies used to sample evenly distributed directions.

    Due to this poor performance, I decided to completely remove the MK numerical solution from the DKI implementation modules. It is now only used in the code testing procedures.

    Comparison between radial kurtosis analytical and approximated solutions. Radial kurtosis corresponds to the average of the kurtosis values along the perpendicular directions of the principal axis, i.e. the direction of non-crossing fibers. Tabesh and colleagues also proposed an analytical solution for this kurtosis statistic. I implemented this solution in Dipy on my previous steps of the GSoC project. Nevertheless, based on the algorithm described in my post #8, radial kurtosis can be estimated as the average of exactly evenly perpendicular direction samples. The figure below shows the comparison between the analytical solution and the approximated solution for a different number of perpendicular direction samples.

    Figure 2 - Comparison between the RK analytical (blue) and numerical solutions (green). The numerical solution is computed relative to a different number of direction samples (x-axis).

    Since, in contrast to the MK case, the algorithm for sampling perpendicular directions does not depend on sphere dispersion strategies, the numerical RK estimate matches the exact analytical solution after only a small number of sampled directions.
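    The perpendicular sampling can be sketched as follows (a sketch under my own naming, assuming the algorithm from post #8 amounts to sampling directions uniformly on the circle perpendicular to the principal axis):

    ```python
    import numpy as np

    def perpendicular_directions(axis, n):
        """Return n unit vectors evenly spaced on the circle perpendicular to axis."""
        axis = axis / np.linalg.norm(axis)
        # Build an orthonormal basis (e1, e2) of the plane perpendicular to axis.
        helper = np.array([1.0, 0, 0]) if abs(axis[0]) < 0.9 else np.array([0, 1.0, 0])
        e1 = np.cross(axis, helper)
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(axis, e1)
        angles = 2 * np.pi * np.arange(n) / n
        return np.outer(np.cos(angles), e1) + np.outer(np.sin(angles), e2)

    dirs = perpendicular_directions(np.array([0.0, 0, 1]), 8)
    # Every sampled direction is perpendicular to the principal axis:
    print(np.allclose(dirs @ np.array([0.0, 0, 1]), 0))  # True
    ```

    Averaging the directional kurtosis values over such a set of exactly evenly spaced perpendicular directions gives the numerical RK estimate compared above.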

    Future directions of the DKI statistic implementation. Having finalized the validation of the DKI statistic implementation, the last step of the standard DKI statistic implementation is to replace the data used in the sample usage script with an HCP-like dataset. As mentioned in my post #7, the reconstructions of the dataset currently used in this example seem to be corrupted by artifacts. After discussing with an expert on the NeuroImage mailing list, these artifacts seem to be caused by insufficient SNR for fitting the diffusion kurtosis model.

    DKI based fiber direction estimates

    Mathematical framework. This fiber direction estimation is done based on the orientation distribution function as proposed by Jensen and colleagues (2014). The orientation distribution function (ODF) gives the probability that a fiber is aligned to a given direction and it can be estimated from the diffusion and kurtosis tensors using the following formula:

    where α is the radial weighting power, Uij is the element ij of the dimensionless tensor U which is defined as the mean diffusivity times the inverse of the diffusion tensor (U = MD x iDT), Vij is defined as

    and ODFg the Gaussian ODF contribution which is given by: 

    Implementation in python 1. In python, this expression can be easily implemented using the following command lines:

    (Note: for a description of what from_lower_triangular does, see Dipy's DTI module.) 
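    The U tensor defined above (U = MD x iDT) can be computed along these lines (a sketch with my own function name; it assumes the 3x3 diffusion tensor has already been rebuilt, e.g. with from_lower_triangular):

    ```python
    import numpy as np

    def dimensionless_u(dt):
        """Compute U = MD * inv(DT) from a 3x3 diffusion tensor (sketch).

        MD is the mean diffusivity, i.e. the mean of the tensor's
        eigenvalues (equivalently trace(DT) / 3).
        """
        md = np.trace(dt) / 3.0
        return md * np.linalg.inv(dt)

    # For an isotropic tensor, U reduces to the identity matrix:
    dt = 2.0e-3 * np.eye(3)
    print(np.allclose(dimensionless_u(dt), np.eye(3)))  # True
    ```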

    Results. In the figure below, I show an ODF example obtained from a simulation of two crossing white matter fibers. 

    Figure 3 - DKI -ODF obtained from two simulated crossing fibers. Maxima of the ODF correspond to the direction of the crossing fibers.

    From the figure above, we can see that the ODF has two directions with maxima amplitudes which correspond to the directions where fibers are aligned.

    Implementation in python 2. The lines of code previously presented correspond to a feasible implementation of Jensen and colleagues' formula. However, for the implementation of the DKI-ODF in Dipy, I decided to expand the four for loops and use the kurtosis tensor's symmetry to simplify this expansion. The resulting code is as follows:

    This implementation of the ODF may look less optimized, but it actually involves a smaller number of operations than the four for loops of the algorithm in "Implementation in python 1". In particular, this version of the code is more than 3 times faster!

    Future directions of the DKI-ODF implementation. An algorithm to find the maxima of the DKI-ODF will be implemented. The directions of the ODF maxima will be used as the estimates of fiber direction, which will be useful for obtaining DKI-based tractography maps (for a reminder of what a tractography map is, see my post #3).

    by Rafael Henriques ( at July 24, 2015 07:35 PM

    Abraham de Jesus Escalante Avalos

    Progress Report

    Hello all,

    A lot of stuff has happened in the last couple of weeks. The project is coming along nicely and I am now getting into some of the bulky parts of it.

    There is an issue with the way NaN (not a number) checks are handled that spans beyond SciPy. Basically, there is no consensus on how to deal with NaN values when they show up. In statistics they are often assumed to be missing values (e.g. there was a problem when gathering statistic data and the value was lost), but there is also the IEEE NaN which is defined as 'undefined' and can be used to indicate out-of-domain values that may point to a bug in one's code or a similar problem.

    Long story short, the outcome of this will largely depend on the way projects like pandas and NumPy decide to deal with it in the future, but for now, in SciPy, we decided that we should not get into the business of assuming that NaN values signify 'missing', because that is not always the case and it may end up silently hiding bugs, leading to incorrect results without the user's knowledge. Therefore, I am now implementing a backwards-compatible API addition that will allow the user to define whether to ignore NaN values (assume they are missing), treat them as undefined, or raise an exception. This is a long-term effort that may span the entire stats module and beyond, so the work I am doing now is set to spearhead future development.
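    The three behaviours described above can be illustrated with a minimal sketch (my own simplification, with made-up names; the real SciPy work is considerably more involved):

    ```python
    import math

    def mean_with_nan_handling(values, nan_policy='propagate'):
        """Sketch of a three-way NaN policy for a statistics function.

        'propagate' -- keep NaNs, so the result is NaN (IEEE behaviour)
        'omit'      -- drop NaNs, treating them as missing values
        'raise'     -- refuse to compute on NaN input
        """
        has_nan = any(math.isnan(v) for v in values)
        if nan_policy == 'raise' and has_nan:
            raise ValueError("input contains NaN")
        if nan_policy == 'omit':
            values = [v for v in values if not math.isnan(v)]
        return sum(values) / len(values)

    data = [1.0, 2.0, float('nan'), 3.0]
    print(mean_with_nan_handling(data, 'omit'))            # 2.0
    print(math.isnan(mean_with_nan_handling(data)))        # True: NaN propagates
    ```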

    Another big issue is the consistency of the `scipy.stats` module with its masked arrays counterpart `scipy.stats.mstats`. The implementation will probably not be complicated, but it encompasses somewhere around 60 to 80 functions, so I expect it to be a large and time-consuming effort. I expect to work on this for the next month or so.

    During the course of the last month or two there have been some major developments in my life that are indirectly related to the project, so I feel like they should be addressed, but I intend to do so in a separate post. For now I bid you farewell and thank you for reading.


    by Abraham Escalante ( at July 24, 2015 06:21 PM

    Udara Piumal De Silva

    Progress Report

    I have been working on getting the auto refresh of the controller to work properly. Now the initial refresh commands are issued correctly. Thanks to the traceSignals() feature in MyHDL I was able to detect what I was doing wrong, and that error has been fixed. Now I get the following output from

    My calculations show the REF_CYCLES_C value as 7813, which seems too big to be practical. Because of this large value no auto refresh commands are issued while in the idle state. I'm currently working on this issue.

    by YUP ( at July 24, 2015 05:58 PM

    Siddharth Bhat

    gsoc vispy week 6

    As usual, here’s the update for week 6. Let’s get down to it!

    SceneGraph Overhaul

    The fabled PR is yet to be closed, but we have everything we need for it to be merged. There were 2 remaining (outstanding) bugs related to the Scenegraph - both stemming from the fact that not all uniforms being sent to the shader were used correctly. One of these belonged to the MeshVisual, a Visual that I had ported, so tracking it down was relatively easy. The fix is waiting to be merged.

    The other one was a shader compilation bug and was fixed by Eric.

    Once Luke Campagnola is back, these changes should get merged, and the PR should be merged as well. That would be closure to the first part of my GSoC project.

    Plotting API

    The high level plotting API has been coming together - not at the pace I would have loved to see, but it's happening. I've been porting the ColorBarVisual so it can be used with a very simple, direct API. I'd hit some snags on text rendering, but they were resolved with Eric's help.

    Text rendering was messed up initially, as my code wasn’t respecting coordinate systems. I rewrote the buggy code I’d written to take the right coordinate systems into account.

    Another bug arose because I wasn't using text anchors right: I'd inverted in my head what the "top" and "bottom" anchors do. The top anchor makes sure that all text is placed below it, while the bottom anchor pushes text above itself. Once that was fixed, text rendered properly.

    However, there's still an inconsistency: I don't fully understand how anchors interact with transforms. The solution stated above works under translation and scaling, but breaks under rotation. Clearly, there are gaps in my knowledge. I'll be spending time fixing this, but I'm reasonably confident it shouldn't take too much time.

    There was also a bug related to bounding box computation that was caught in the same PR, which I’ve highlighted.

    IntelliSense / Autocomplete

    There’s a module in VisPy called vispy.scene.visuals whose members are generated by traversing another module (vispy.visuals) and then wrapping (“decorating”, for the Design Patterns enthusiasts) the members of vispy.visuals. Since this is done at run-time (to be more explicit, this happens when the module is initialized), no IDE / REPL was able to access this information. So, autocomplete for vispy.scene.visuals was non-existent. After deliberating, we decided to unroll the loop and hand-wrap the members, so that autocomplete would work.

    This is a very interesting trade-off, where we’re exchanging code compactness / DRY principles for usability.
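    The trade-off can be reproduced in miniature (a toy sketch, not VisPy's actual code): members created with setattr at import time are invisible to static analysis, while unrolled, hand-written definitions are not.

    ```python
    # Dynamic generation: an IDE cannot see 'LineVisual' by reading this source,
    # because the attribute only exists after the loop runs at import time.
    class _Module:
        pass

    dynamic = _Module()
    for name in ['LineVisual', 'MeshVisual']:       # hypothetical member names
        setattr(dynamic, name, type(name, (), {}))  # wrap/"decorate" here

    # Unrolled, hand-written equivalent: static analyzers see both names.
    class LineVisual: ...
    class MeshVisual: ...

    print(hasattr(dynamic, 'MeshVisual'))  # True, but only at run time
    ```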

    Here’s the pull request waiting to be merged.

    Odds and Ends

    I’ve been meaning to improve VisPy’s main GitHub page, so that it provides more context and development information to developers. There’s an open PR that I want to finish by the end of this week.

    That’s all for now! Next week should see the ColorBar integrated into vispy.plot. I’ll hopefully be working on layout using Cassowary by this time next week, but that’s just a peek into the future :). Adios!

    July 24, 2015 04:29 PM

    Aman Jhunjhunwala

    GSOC ’15 Post 4 : AstroPython Preview Phase Begins

    With four weeks left in what has been an amazing Summer of Code, we are now ready for a limited preview launch! But first, a summary of the progress of the past 2-3 weeks:

    This time has been used to mature the site and plug every remaining hole!

    • The post-midterm phase began with setting up RSS/Atom feeds for our website.
    • This was quickly followed by integrating Sprint Forum for Q&A on our application, which took a long time to integrate and was later removed as it didn’t quite fit the concept of the site.
    • The “Package Section” of the site went through multiple design overhauls; in the end an old-school table list view was chosen as the best fit.
    • Advanced filtering mechanisms are in place! When displaying all posts, options can now be combined (e.g. Sort by Ratings + Tags: Python, ML + From Native Content).
    • Completed the landing page (added Twitter feed, Facebook feed, feedback form, etc.) and created a Facebook page.
    • Added a blog aggregation feature (admin-only): add the RSS/Atom feed address of a website and the section in which its posts should appear (NEWS for a news website, etc.).
    • Added the ability to edit any content. A pencil-like icon appears when hovering over an editable section; clicking it lets you edit that section.
    • Added a timeline feature (in the TIMELINE section): posts are now displayed in time order.
    • If the abstract is absent, the first 4 lines of the post are displayed. This was a bit difficult, as all HTML and Markdown syntax had to be stripped.
    • A better way to add tags in the creation form: if tags are present in a section, all existing tags are displayed, and clicking a tag adds it to the form.
    • Deployment to the production server, which comes with its own problems and bugs!
    • Framed a deployment guide: a step-by-step walkthrough of setting up the server and configuring the necessary system files.
    • Set up our backup & restore mechanisms (only partial testing done). For backup, all files are stored in JSON format in a /backup folder and restored using the restore script. It runs as a cron task so that the backup stays up to date.
    • The mail server was set up with SendGrid to send moderation approval/rejection emails, notifications, etc. to moderators, admins and users alike. Every time a post is added the moderators are informed, and every time a post is approved its authors are informed!
    • Added a “Live Coding Area” on our homepage to introduce Python to new learners (using Skulpt and Trinket)! Users can code in Python right on the homepage.
    • Added a periodic newsletter delivery mechanism, which was shelved for later, if/when the site’s popularity rises.
    • Lots of major/minor CSS changes (favicon generation, etc.) and bug fixing! A huge amount of time was spent on this!

    For the limited preview phase, we invited the Astropy community (through their mailing list) to review the website and share their feedback with us. The invitation mail reads:

    This summer we have been fortunate to have a very talented Google Summer of Code student, Aman Jhunjhunwala, working to create a brand new site from scratch.  In addition to writing all the code, Aman has also brought a fresh design look and many cool ideas to the project.

    The primary driver is to make a modern site that would engage the community and succeed in having a broad base of contributors to the site content.  To that end the site allows contributions from anyone who authenticates via Github, Google, or Facebook.  By default all such content will be moderated before appearing on the live site, but there is also a category for trusted users that can post directly.  During the preview phase moderation is disabled, so you can post at will!

    The preview version is available at:

    At this time we are opening the site for a preview in order to get feedback from the AstroPython community regarding the site including:

    • Overall concept, in particular whether the site design will be conducive to a broad base of contributors.  Would you want to post?
    • Are the authentication options obvious and sufficient?
    • General site organization and ease of navigating
    • How easily can you find information?
    • Visual design and aesthetics
    • Other features, e.g. a Q&A forum?
    • Accessibility
    • Ease of posting
    • Bugs
    • Code review (
    • Security
    We highly encourage you to post content in any / all of the categories.  You can either try to play nice and use things as we intended, or do stress testing and try to break it.  All posts will show up immediately, so please do act responsibly in terms of the actual content you post.
    As you find issues or have comments, please first check the site github repo to see if it is already known:
    Ideally put your comments into github.  For most of the broad review categories above I have already created a placeholder issue starting with [DISCUSS].  Also, if you’d rather not use github then just send to me directly at
    Best regards,
      Tom Aldcroft, 
      Aman Jhunjhunwala,
      Jean Connelly, and
      Tom Robitaille
    Next update in about 15 days, when we end our preview phase and hopefully push through our final deployment!
    Aman Jhunjhunwala

    by amanjjw at July 24, 2015 04:05 PM

    Jakob de Maeyer

    The add-on system in action

    In my earlier posts, I have talked mostly about the motivation behind and the internal implementation of Scrapy’s add-on system. Here, I want to talk about how the add-on framework looks in action, i.e. how it actually affects the user’s and developer’s experience. We will see how users are able to configure built-in and third-party components without worrying about Scrapy’s internal structure, and how developers can check and enforce requirements for their extensions. This blog entry will therefore probably feel a little like a documentation page, and indeed I hope that I can reuse some of it for the official Scrapy docs.

    From a user’s perspective

    To enable an add-on, all you need to do is provide its path and, if necessary, its configuration to Scrapy. There are two ways to do this:

    • via the ADDONS setting, and
    • via the scrapy.cfg file.

    As Scrapy settings can be modified from many places, e.g. in a project’s, in a Spider’s custom_settings attribute, or from the command line, using the ADDONS setting is the preferred way to manage add-ons.

    The ADDONS setting is a dictionary in which every key is the path to an add-on. The corresponding value is a (possibly empty) dictionary, containing the add-on configuration. While more precise, it is not necessary to specify the full add-on Python path if it is either built into Scrapy or lives in your project’s addons submodule.

    This is an example where an internal add-on and a third-party add-on (in this case one requiring no configuration) are enabled and configured in a project’s

    ADDONS = {
        'httpcache': {
            'expiration_secs': 60,
            'ignore_http_codes': [404, 405],
        },
        '': {},
    }
    It is also possible to manage add-ons from scrapy.cfg. While the syntax is a little friendlier, be aware that this file, and therefore the configuration in it, is not bound to a particular Scrapy project. While this should not pose a problem when you use the project on your development machine only, a common stumbling block is that scrapy.cfg is not deployed via scrapyd-deploy.

    In scrapy.cfg, section names, prepended with addon:, replace the dictionary keys. I.e., the configuration from above would look like this:

    [addon:httpcache]
    expiration_secs = 60
    ignore_http_codes = 404,405

    From a developer’s perspective

    Add-ons are (any) Python objects that provide Scrapy’s add-on interface. The interface is enforced through zope.interface. This leaves the choice of Python object up to the developer. Examples:

    • for a small pipeline, the add-on interface could be implemented in the same class that also implements the open/close_spider and process_item callbacks
    • for larger add-ons, or for clearer structure, the interface could be provided by a stand-alone module

    The absolute minimum interface consists of two attributes:

    • name: string with add-on name
    • version: version string (PEP 440, e.g. '1.0.1')

    Of course, stating just these two attributes will not get you very far. Add-ons can provide three callback methods that are called at various stages before the crawling process:

    update_settings(config, settings)

    This method is called during the initialization of the crawler. Here, you should perform dependency checks (e.g. for external Python libraries) and update the settings object as wished, e.g. enable components for this add-on or set required configuration of other extensions.

    check_configuration(config, crawler)

    This method is called when the crawler has been fully initialized, immediately before it starts crawling. You can perform additional dependency and configuration checks here.

    update_addons(config, addons)

    This method is called immediately before update_settings(), and should be used to enable and configure other add-ons only.

    When using this callback, be aware that there is no guarantee of the order in which the update_addons() callbacks of enabled add-ons will be called. Add-ons that are added to the add-on manager during this callback will also have their update_addons() method called.

    Additionally, add-ons may (and should, where appropriate) provide one or more attributes that can be used for limited automated detection of possible dependency clashes:

    • requires: list of built-in or custom components needed by this add-on, as strings

    • modifies: list of built-in or custom components whose functionality is affected or replaced by this add-on (a custom HTTP cache should list httpcache here)

    • provides: list of components provided by this add-on (e.g. mongodb for an extension that provides generic read/write access to a MongoDB database)
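    A hypothetical add-on declaring these attributes could look like this (the class name, component names and setting are illustrative, not from the Scrapy codebase):

```python
# Hypothetical sketch of an add-on declaring the dependency attributes
# described above (all names are illustrative):
class MongoCacheAddon(object):
    name = 'mongocache'
    version = '1.0'

    requires = ['mongodb']     # needs a MongoDB access component
    modifies = ['httpcache']   # replaces the built-in HTTP cache behavior
    provides = ['httpcache']   # ...and provides an HTTP cache itself

    def update_settings(self, config, settings):
        settings.set('HTTPCACHE_ENABLED', True, priority='addon')
```

    The add-on manager could then compare these lists across enabled add-ons to warn about two add-ons modifying the same component.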

    Some example add-ons

    The main advantage of add-ons is that developers gain better control over how and in what conditions their Scrapy extensions are deployed. For example, it is now easy to check for external libraries and have the crawler shut down gracefully if they are not available:

    class MyAddon(object):
        name = 'myaddon'
        version = '1.0'
        def update_settings(self, config, settings):
            try:
                import boto
            except ImportError:
                raise RuntimeError("boto library is required")
            # Perform configuration

    Or, to avoid unwanted interplay with other extensions and add-ons, or the user, it is now also easy to check for misconfiguration in the final (final!) settings used to crawl:

    class MyAddon(object):
        name = 'myaddon'
        version = '1.0'
        def update_settings(self, config, settings):
            settings.set('DNSCACHE_ENABLED', False, priority='addon')
        def check_configuration(self, config, crawler):
            if crawler.settings.getbool('DNSCACHE_ENABLED'):
                # The spider, some other add-on, or the user messed with the
                # DNS cache setting
                raise ValueError("myaddon is incompatible with DNS cache")

    Instead of depending on the user to activate components and then gathering configuration from the global settings namespace on initialization, it becomes feasible to instantiate the components ad hoc:

    from mybot.pipelines import MySQLPipeline  # hypothetical module path
    class MyAddon(object):
        name = 'myaddon'
        version = '1.0'
        def update_settings(self, config, settings):
            mysqlpl = MySQLPipeline(password=config['password'])
            settings.set('ITEM_PIPELINES', {mysqlpl: 200}, priority='addon')

    Often, it will not be necessary to write an additional class just to provide an add-on for your extension. Instead, you can simply provide the add-on interface alongside the component interface, e.g.:

    class MyPipeline(object):
        name = 'mypipeline'
        version = '1.0'
        def process_item(self, item, spider):
            # Do some processing here
            return item
        def update_settings(self, config, settings):
            settings.set('ITEM_PIPELINES', {self: 200}, priority='addon')

    July 24, 2015 12:58 PM

    Richard Plangger

    GSoC: Bilbao, ABC and off by one


    I have never attended a programming conference before. Some thoughts and impressions:
    • The architecture of the conference center is impressive.
    • Python is heavily used in numerical computation, data analysis and processing (I expected it to be used less).
    • Pick any bar and a vegetarian dish (if there is any): It will most certainly contain meat/fish
    • PyPy is used, but most people are unaware of the fact that there is a JIT compiler for Python that speeds up computations and reduces memory usage.
    It was a good decision to come to EuroPython, meet people (especially the PyPy dev team) and see how things work in the Python community. See you next time :)

    I did as well work on my proposal all along. Here are some notes what I have been working on (before Bilbao).

    ABC Optimization

    One "roadblock" I did not tackle is vectorization of "user code". The vecopt branch in its current shape is not able to efficiently transform the most basic Python for loop accessing array instances of the array module. (Micro)NumPy kernels work very well (which is the main use case), but for Python loops this is a different story. Obviously, it is not that easy to vectorize these, because it is much more likely that many guards and state-modifying operations (other than store) are present.

    In the worst case the optimizer just bails out and leaves the trace as it is.
    But I think at least the simplest loops should work as well.

    So I evaluated what needs to be done to make this happen: Reduce the number of guards, especially Array Bound Checks (ABC). PyPy does this already, but the ones I would like to remove need a slightly different treatment. Consider:

    i = 0
    while i < X:
        a[i] = b[i] + c[i]
        i += 1

    There are four guards in the resulting trace, one protecting the index to be below X, and three protecting the array access. You cannot omit them, but you can move them outside the loop. The idea is to introduce guards that make the checks (but the index guard) redundant. Here is an example:

    guard(i < X) # (1) protects the index
    guard(i < len(a)) # (2) leads to IndexError
    load(..., i, ...) # actual loading instruction

    Assume X < len(a). Then (1) implies that (2) is redundant, and guard(X < len(a)) can be done before the loop is entered. That works well for a well-behaved program. In order to pick the right guard as a reference (the index guard might not be the first guard), I'll take a look at the runtime values. The minimum value is preferable, because it is the strongest assumption.

    I'm not yet sure if this is the best solution, but it is certainly simple and yields the desired result.
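    A pure-Python sketch of the hoisting idea (illustrative only, not PyPy's actual trace transformation): the per-iteration bound checks on a[i], b[i] and c[i] become redundant once one stronger guard is checked before the loop.

```python
# Sketch of array-bound-check hoisting: instead of bound-checking every
# access inside the loop, one guard before loop entry makes all accesses
# provably in-bounds.
def add_checked(a, b, c, X):
    for i in range(X):
        a[i] = b[i] + c[i]      # three implicit bound checks per iteration

def add_hoisted(a, b, c, X):
    # hoisted guard, corresponding to guard(X <= len(...)) before the loop
    if X > len(a) or X > len(b) or X > len(c):
        raise IndexError("hoisted bound check failed")
    for i in range(X):
        a[i] = b[i] + c[i]      # every access is now known to be in bounds
```

    Both functions compute the same result; the second only pays for the bound checks once.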

    Off by one

    Some commits ago, the last few "off by one" iterations of a NumPy call were always handled by the blackhole interpreter, eventually compiling a bridge out of the guard. This made the last iterations unnecessarily slow. Now, PyPy has the bare essentials to create trace versions and immediately stitch them to a guarding instruction. The original version of the loop is compiled to machine code at the same time as the optimized version and attached to the guards within the optimized version.

    Ultimately a vectorized trace exits an index guard immediately leading into a trace loop to handle the last remaining elements.

    by Richard Plangger ( at July 24, 2015 08:15 AM

    Daniil Pakhomov

    Google Summer of Code: Creating Training set.

    I describe the process of creating a dataset for training the classifier that I use for Face Detection.

    Positive samples (Faces).

    For this task I decided to take the Web Faces database. It consists of 10000 faces. Each face has eye coordinates, which is very useful because we can use this information to align the faces.

    Why do we need to align faces? Take a look at this photo:

    Not aligned face

    If we just crop the faces as they are, it will be really hard for the classifier to learn from them. The reason for this is that we don’t know how all of the faces in the database are positioned. Like in the example above, the face is rotated. In order to get a good dataset we first align the faces and then add small random transformations that we can control ourselves. This is really convenient because if the training goes badly, we can just change the parameters of the random transformations and experiment.

    In order to align faces, we take the coordinates of the eyes and draw a line through them. Then we rotate the image to make this line horizontal. Before running the script, the size of the resulting images is specified, along with the amount of area above and below the eyes and on the right and left side of a face. The cropping also takes care of the aspect ratio: if we blindly resize the image, the resulting face will be distorted and the classifier will work poorly. That way we can be sure that all our faces are positioned consistently and we can start to run random transformations. The idea that I described was taken from the following page.
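    A minimal sketch of the rotation step, assuming eye centers are given as (x, y) pixel coordinates. A real pipeline would hand the computed angle to an image rotation routine; here we only rotate the eye points themselves to show that the eye line becomes horizontal.

```python
import math

# Sketch of face alignment: compute the tilt of the line through the eyes,
# then rotate by the opposite angle so the eyes end up level.
def eye_angle(left_eye, right_eye):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)          # tilt of the line through the eyes

def rotate_point(p, center, angle):
    s, c = math.sin(angle), math.cos(angle)
    x, y = p[0] - center[0], p[1] - center[1]
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

left, right = (30.0, 40.0), (70.0, 50.0)     # made-up eye coordinates
angle = eye_angle(left, right)
mid = ((left[0] + right[0]) / 2, (left[1] + right[1]) / 2)
new_left = rotate_point(left, mid, -angle)   # rotate by -angle to level
new_right = rotate_point(right, mid, -angle)
# after rotation, both eyes share the same y coordinate
```
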

    Have a look at the aligned faces:

    Aligned face one Aligned face two Aligned face three

    As you see, the amount of area is consistent across images. The next stage is to transform them in order to augment our dataset. For this purpose we will use the OpenCV create_samples utility. This utility takes all the images and creates new images by randomly transforming them and changing the intensity in a specified manner. For my purposes I have chosen the following parameters: -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.3 -maxidev 40. The angles specify the maximum rotation angles in 3D and maxidev specifies the maximum deviation in the intensity changes. This utility also puts images on a background specified by the user.

    This process is really complicated if you want to extract images in the end and not OpenCV's .vec file format.

    This is a small description on how to do it:

    1. Run the bash command find ./positive_images -iname "*.jpg" > positives.txt to get a list of positive examples. positive_images is a folder with positive examples.
    2. Same for the negatives: find ./negative_images -iname "*.jpg" > negatives.txt.
    3. Run the file like this perl positives.txt negatives.txt vec_storage_tmp_dir. Internally it uses opencv_createsamples. So you have to have it compiled. It will create a lot of .vec files in the specified directory. You can get this script from here. This command transforms each image in the positives.txt and places the results as .vec files in the vec_storage_tmp_dir folder. We will have to concatenate them on the next step.
    4. Run python -v vec_storage_tmp_dir -o final.vec. You will have one .vec file with all the images. You can get this file from here.
    5. Run vec2images final.vec output/%07d.png -w size -h size. All the images will be in the output folder. vec2images has to be compiled. You can get the source from here.

    You can see the results of the script now:

    Transformed face one Transformed face one Transformed face one Transformed face one Transformed face one

    Negative samples.

    Negative samples were collected from the aflw database by eliminating faces from the images and taking random samples from them. This makes sense because the classifier will learn negative samples from the kind of images where faces are usually located. Some people take random pictures of text or walls as negative examples, but it makes more sense to train the classifier on the things that will most probably be in images with faces.

    July 24, 2015 12:00 AM

    Yue Liu

    GSOC2015 Students coding Week 09

    week sync 13

    Last week:

    • Single process optimization for load_gadgets() and build_graph()
    • Multi Process supporting for GadgetFinder.load_gadgets()
    • Multi Process supporting for ROP.build_graph()

    Example for, which is larger than 200Kb.

     lieanu@ARCH $ time python -c 'from pwn import *; context.clear(arch="amd64"); rop=ROP("/usr/lib/")' 
    [*] '/usr/lib/'
        Arch:     amd64-64-little
        RELRO:    Partial RELRO
        Stack:    Canary found
        NX:       NX enabled
        PIE:      PIE enabled
    python -c   44.18s user 3.04s system 301% cpu 15.655 total       

    Example for xmms2, < 200Kb

     lieanu@ARCH $ ls -alh /bin/xmms2 
    -rwxr-xr-x 1 root root 133K Jun  4 18:27 /bin/xmms2
     lieanu@ARCH $ time python -c 'from pwn import *; context.clear(arch="amd64"); rop=ROP("/bin/xmms2")'
    [*] '/bin/xmms2'
        Arch:     amd64-64-little
        RELRO:    Partial RELRO
        Stack:    Canary found
        NX:       NX enabled
        PIE:      No PIE
    python -c   86.14s user 1.05s system 305% cpu 28.545 total       
    • bottlenecks:
      1. All graph operations, such as topological sort and DFS.
      2. Classification when finding gadgets.
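    For illustration, a minimal DFS-based topological sort, the kind of graph operation listed as a bottleneck above (illustrative only, not pwntools' actual implementation):

```python
# Minimal iterative-over-nodes, recursive-DFS topological sort.
def topo_sort(graph):
    order, seen = [], set()
    def dfs(node):
        seen.add(node)
        for succ in graph.get(node, ()):
            if succ not in seen:
                dfs(succ)
        order.append(node)              # post-order: node after its successors
    for node in graph:
        if node not in seen:
            dfs(node)
    return order[::-1]                  # reversed post-order = topological order

order = topo_sort({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []})
# every node appears before all of its successors
```
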

    Next week:

    • Optimization for graph operation.
    • Fixing potential bugs.

    July 24, 2015 12:00 AM

    July 23, 2015

    Sahil Shekhawat

    GSoC Week 9

    Last week was very sad: I am behind my timeline because I had to travel to Bangalore (around 3000 km) for 3 days. I had to finish pinjoint, slidingjoint and cylindricaljoint by this week but I was only able to develop prototypes for the pin and sliding joints; I was not able to fully implement them.

    July 23, 2015 04:29 PM

    Christof Angermueller

    GSoC: Week six and seven

    Theano allows function profiling by setting the profile=True flag. After at least one function call, the compute time of each node can then be printed with debugprint. However, analyzing complex graphs this way can become cumbersome.

    d3printing now allows you to graphically visualize the same timing information and hence to easily spot bottlenecks in Theano graphs! If the function has been profiled, a ‘Toggle profile colors’ button will appear at the top of the page. By clicking on it, nodes will be colored by their compute time. In addition, timing information can be retrieved via mouse-over events! You can find an example here, and the source code here.


    The second new feature is a context menu to edit the label of nodes and to release them from a fixed position.






    The next release will make it possible to visualize complicated nested graphs with OpFromGraph nodes. Stay tuned!

    The post GSoC: Week six and seven appeared first on Christof Angermueller.

    by cangermueller at July 23, 2015 02:41 PM

    July 22, 2015

    Palash Ahuja

    Map inference in Dynamic Bayesian Network

    I am almost finished with the junction tree algorithm for inference in dynamic bayesian networks. I am about to start with the map queries for inference.

    Currently, map queries find the maximum value using the maximize operation. But for a dynamic bayesian network we need to compute the path that has the maximum probability, also called the Viterbi path.

    The Viterbi algorithm uses the famous dynamic programming paradigm, although it could be quite complicated to implement for a dynamic bayesian network.
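    A generic textbook Viterbi sketch (not pgmpy's implementation) showing the dynamic programming idea: the best path to each state at time t is extended from the best paths at time t-1, and the most probable sequence is recovered by backtracking. The transition/emission numbers below are the classic toy example, not real data.

```python
# Textbook Viterbi: V[t][s] = (probability of the best path ending in state s
# at time t, predecessor state on that path).
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ('Healthy', 'Fever')
obs = ('normal', 'cold', 'dizzy')
start_p = {'Healthy': 0.6, 'Fever': 0.4}
trans_p = {'Healthy': {'Healthy': 0.7, 'Fever': 0.3},
           'Fever':   {'Healthy': 0.4, 'Fever': 0.6}}
emit_p = {'Healthy': {'normal': 0.5, 'cold': 0.4, 'dizzy': 0.1},
          'Fever':   {'normal': 0.1, 'cold': 0.3, 'dizzy': 0.6}}
path = viterbi(obs, states, start_p, trans_p, emit_p)
print(path)  # -> ['Healthy', 'Healthy', 'Fever']
```

    In a DBN the "states" would be joint assignments over a time slice, which is what makes the implementation more involved.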

    Also, inference with the junction tree will further reduce the scale of operations, so variable elimination will not drag the algorithm down.
    I hope that the inference works well now.

    by palash ahuja ( at July 22, 2015 08:18 PM

    Udara Piumal De Silva

    refreshing bug

    After adding the exact values as delays I tried to simulate the design, but the simulation is not working correctly. The controller does not even go into the "LOAD_SETMODE" state. I am still trying to find out what causes this error.

    by YUP ( at July 22, 2015 05:51 PM

    Andres Vargas Gonzalez

    Improvements to the RendererKivy (Part 2)

    The following modifications have been implemented in these days:

    • Fixed a problem when rendering concave polygons. It can be seen from previous images that in the case of an arrow and a wedge there was a defect in the rendering due to the triangulation algorithm used to fill them. The mesh renderer uses “triangle_fan”, and since all the points always form triangles with the hub, this was a problem for concave polygons. It was fixed by splitting concave polygons into convex ones, implemented using Tesselator, a utility class in the kivy graphics package.
    • A path was being closed when it should not be. A path line was being closed by default; the initial point was removed so the line does not try to close the path when it is not necessary.
    • Elements out of the figure axes are not rendered anymore. The problem here was that everything was being drawn in the same layer. A stencil widget is now used as a mask so everything out of bounds is hidden. In the second figure, 4 stencil instructions are created to show just the graphics instructions visible in the clip rectangle.
    • Focus behavior was added to the FigureCanvasKivy. Keyboard events are automatically bound by the focus behavior.
    • Added support for multiple dash lengths and offsets in the same line. Line can receive a list with the values of length and offset for the respective rendering. For instance, [10, 5, 20, 10] would create a line with 10 visible points, then 5 hidden, 20 visible and 10 hidden. This can be seen in the figure of the sin function below.
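    A quick sketch of how such a dash list is interpreted, assuming entries alternate visible/hidden segment lengths along the line:

```python
# Expand a dash pattern like [10, 5, 20, 10] into explicit
# (state, start, end) segments along the line.
def dash_segments(pattern, start=0):
    segments, pos = [], start
    for k, length in enumerate(pattern):
        state = 'visible' if k % 2 == 0 else 'hidden'
        segments.append((state, pos, pos + length))
        pos += length
    return segments

print(dash_segments([10, 5, 20, 10]))
# -> [('visible', 0, 10), ('hidden', 10, 15), ('visible', 15, 35), ('hidden', 35, 45)]
```
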

    dynamic dashes in a line4 stencil widgetsartist demointegral demo

    by andnovar at July 22, 2015 06:47 AM

    July 21, 2015

    Nikolay Mayorov

    Large-scale Bundle Adjustment

    As a demonstration of the large-scale capabilities of the new least-squares algorithms, we decided to provide an example of solving a real industry problem called bundle adjustment. The most convenient form is an IPython Notebook; later it will serve as an example/tutorial for scipy (perhaps hosted on some server). Here I just give a link to the static version.

    by nickmayorov at July 21, 2015 10:01 PM

    Mridul Seth


    Hello folks, this blog post will cover the work done in week 7 and week 8.

    Summer going really fast :)

    This period was dedicated to merging the iter_refactor branch into the master branch. The main issues were regarding documentation and improving it. We also discussed shifting the tutorial to an IPython notebook and moving the current examples to IPython notebooks.

    I also took a dig at MixedGraph class.


    by sethmridul at July 21, 2015 05:01 PM

    Pratyaksh Sharma

    Wait, how do I order a Markov Chain? (Part 2)

    Let's get straight to the meat. We were trying to generate samples from $P(\textbf{X}| \textbf{E} = \textbf{e})$. In our saunter, we noticed that using a Markov chain would be a cool idea. But we don't know yet what transition model the right Markov chain must have.

    Gibbs sampling

    We are in search of transition probabilities (from one state of the Markov chain to another), that converge to the desired posterior distribution. Gibbs sampling gives us just that. 

    As per our last discussion on the factored state space, the state is now an instantiation to all variables of the model. We'll represent the state as $(\textbf{x}_{-i}, x_{i})$. Consider the kernel $\mathcal{T}_{i}$ that gives us the transition in $i^{th}$ variable's state:
    $$\mathcal{T}_{i}((\textbf{x}_{-i}, x_{i}) \rightarrow (\textbf{x}_{-i}, x'_{i})) = P(x'_i | \textbf{x}_{-i})$$

    Yep, the transition probability does not depend on the current value $x_i$ of $X_i$ -- only on the remaining state $\textbf{x}_{-i}$. You can take my word, or check it for yourself, the stationary distribution that this process converges to is $P(\textbf{X}| \textbf{e})$.

    Now all that's left is computing $P(x'_i | \textbf{x}_{-i})$. That can be done in a pretty neat way, I'll show you how next time!
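    For intuition, a toy Gibbs sampler over two binary variables (the joint table is made up), using exactly the kernel described above: each step resamples one variable from its full conditional, and the empirical distribution of the samples converges to the joint.

```python
import random

# Toy Gibbs sampler for two binary variables X0, X1 with a made-up joint.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def conditional_p1(i, state):
    """P(X_i = 1 | rest of the state), by renormalizing the joint table."""
    other = list(state)
    other[i] = 0
    p0 = joint[tuple(other)]
    other[i] = 1
    p1 = joint[tuple(other)]
    return p1 / (p0 + p1)

def gibbs(n_samples, seed=0):
    rng = random.Random(seed)
    state = [0, 0]
    samples = []
    for _ in range(n_samples):
        for i in (0, 1):          # sweep: resample each variable in turn
            state[i] = 1 if rng.random() < conditional_p1(i, state) else 0
        samples.append(tuple(state))
    return samples

samples = gibbs(20000)
freq = samples.count((1, 1)) / len(samples)   # should approach joint[(1, 1)] = 0.4
```

    With evidence, one would simply clamp the evidence variables instead of resampling them.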

    by Pratyaksh Sharma ( at July 21, 2015 01:19 PM

    Jaakko Leppäkanga

    MNE sprint

    Last week I spent in Paris at the MNE sprint where many of the contributors came together to produce code. It was nice to see the faces behind the github accounts. It was quite an intensive five days of coding. I finalized the ICA source plotter for raw and epochs objects. It turned out quite nice. It is now possible to view interactively the topographies of independent components by clicking on the desired component name.

    I also got some smaller pull requests merged like the adding of axes parameters to some of the plotting functions, so that the user can plot the figures where ever he/she desires. I also got one day of spare time to see the sights of Paris before flying back to Finland. This week I'll start implementing similar interactive functionalities for TFR plotters.

    by Jaakko ( at July 21, 2015 12:02 PM

    Michael Mueller

    Week 8

    This past week I've been adding some functionality to the index system while the current PR is being reviewed: taking slices of slices, including non-copy and non-modify modes as context managers, etc. One issue my mentors and I discussed in our last meeting is the fact that `Column.__getitem__` becomes incredibly slow if overridden to check for slices and so forth, so we have to do without it (as part of a larger rebase on astropy/master). Our decision was to drop index propagation upon column slicing, and only propagate indices on Table slices; though this behavior is potentially confusing, it will be documented and shouldn't be a big deal. For convenience, a separate method `get_item` in `Column` has the same functionality as the previous `Column.__getitem__` and can be used instead.

    I have a lot more to write, but I need to be up early tomorrow morning so I'll finish this post later.

    by Michael Mueller ( at July 21, 2015 04:23 AM

    AMiT Kumar

    GSoC : This week in SymPy #8

    Hi there! It's been eight weeks into GSoC. Here is the progress for this week.

      Progress of Week 8

    This week, my PR for making invert_real more robust was Merged, along with these:

    • PR #9628 : Make invert_real more robust

    • PR #9668 : Support solving for Dummy symbols in linsolve

    • PR #9666 : Equate S.Complexes with ComplexPlane(S.Reals*S.Reals)

    Note: We renamed S.Complex to S.Complexes, which is analogous with S.Reals as suggested by @jksuom.

    I also opened PR #9671 for simplifying the ComplexPlane output when a ProductSet of FiniteSets is given as input, such as ComplexPlane(FiniteSet(x)*FiniteSet(y)). It was earlier simplified to:

    ComplexPlane(Lambda((x, y), x + I*y), {x} x {y})

    It isn't very useful to represent a point or discrete set of points in ComplexPlane with an expression like above. So in the above PR it is now simplified as FiniteSet of discrete points in ComplexPlane:

    In [3]: ComplexPlane(FiniteSet(a, b, c)*FiniteSet(x, y, z))
    Out[3]: {a + I*x, a + I*y, a + I*z, b + I*x, b + I*y, b + I*z, c + I*x, c + I*y, c + I*z}

    It's awaiting Merge, as of now.

    Now, I have started replacing solve with solveset and linsolve.

    from future import plan Week #9:

    This week I plan to Merge my pending PR's & work on replacing old solve in the code base with solveset.

    $ git log

      PR #9710 : Replace solve with solveset in sympy.stats

      PR #9708 : Use solveset instead of solve in sympy.geometry

      PR #9671 : Simplify ComplexPlane({x}*{y}) to FiniteSet(x + I*y)

      PR #9668 : Support solving for Dummy symbols in linsolve

      PR #9666 : Equate S.Complexes with ComplexPlane(S.Reals*S.Reals)

      PR #9628 : Make invert_real more robust

      PR #9587 : Add Linsolve Docs

      PR #9500 : Documenting solveset

    That's all for now, looking forward for week #9. :grinning:

    July 21, 2015 12:00 AM

    July 20, 2015

    Zubin Mithra

    Integration tests complete for arm, mips and mipsel + ppc initial commit

    This week I worked on getting the integration tests for ARM, MIPS and MIPSel merged in. Additionally I've set up the qemu image for working with PowerPC (big endian). The image I'm using can be found here. You will also need to install OpenBIOS from here in order to get the qemu image to work. The deb files for the same can be found here.

    I used "debian_squeeze_powerpc_standard.qcow2" and "openbios-ppc_1.0+svn1060-1_all.deb". The startup command line is as follows.

    qemu-system-ppc -hda ./debian_squeeze_powerpc_standard.qcow2 -m 2047 -bios /usr/share/openbios/openbios-ppc -cpu G4 -M mac99 -net user,hostfwd=tcp::10022-:22 -net nic

    Note: Do not use ping to test network connectivity. use "apt-get update" or something.
    Note 2: To ssh into the image do "ssh root@localhost -p10022".

    Looking at gdb in ppc the register layout seems roughly as shown here. I'll be working on finalising the aarch64 integration test and ppc support this week.

    by Zubin Mithra (pwntools) ( at July 20, 2015 09:26 AM

    July 19, 2015

    Udara Piumal De Silva

    Refresh timing

    Earlier I was confused about the refresh interval required by the controller. With the help of my mentor Dave, I clarified those details and am now working on implementing the feature.

    SDRAM expects every row to be refreshed within the tREF delay. For the development board I'm working on (Xula2) this value is 64ms. The SDRAM has an internal refresh controller which automatically calculates the address of the row to be refreshed. Therefore the controller only needs to worry about issuing the AUTO_REFRESH command within the required delay.

    The reference controller uses distributed auto refresh commands, issuing one every 7.8 us. I am now working on implementing this feature.
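    A back-of-the-envelope check of the 7.8 us figure, assuming the common case of 8192 rows that must all be refreshed within tREF = 64 ms (the actual row count should be taken from the SDRAM datasheet):

```python
# Distributed refresh interval: spread the per-row refreshes evenly
# across the tREF window.
T_REF_MS = 64.0
N_ROWS = 8192                    # assumed row count; check the SDRAM datasheet
interval_us = T_REF_MS * 1000.0 / N_ROWS
print(interval_us)               # -> 7.8125
```
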

    With my recent changes the conversion is not happening properly. I plan to solve that issue once I complete the refreshing functionality.

    by YUP ( at July 19, 2015 06:07 PM

    July 18, 2015

    Chienli Ma

    Putting Hand on OpFromGraph

    These two weeks I started working on OpFromGraph, which is the second part of the proposal.

    Currently, if a FunctionGraph has repeated subgraphs, theano will optimize these sub-graphs individually, which is a waste of both computational resources and time. If we can extract a common structure in a FunctionGraph and make it an Op, we only need to optimize the sub-graph of this Op once and can reuse it everywhere. This will speed up the optimization process, and OpFromGraph provides such a feature.

    To make OpFromGraph works well, it should support GPU and can be optimized. Following feature are expected:

    • __eq__() and __hash__()
    • connection_pattern() and infer_shape()
    • Support GPU
    • c_code()
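    A sketch of the __eq__/__hash__ pair from the list above, in the style typically used so that structurally identical Ops can be recognized and merged (illustrative; not the actual OpFromGraph code):

```python
# Equality/hashing sketch for an Op-like object: two Ops wrapping the
# same subgraph should compare equal and hash equal, so graph
# optimizations can merge them.
class MyOp(object):
    def __init__(self, signature):
        # 'signature' stands in for whatever identifies the wrapped subgraph
        self.signature = signature

    def __eq__(self, other):
        return type(self) is type(other) and self.signature == other.signature

    def __hash__(self):
        # equal Ops must hash equal, so hash the same data __eq__ compares
        return hash((type(self), self.signature))
```
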

    I implemented two features in the last two weeks: connection_pattern and infer_shape. I hope I can make OpFromGraph a useful feature by the end of this GSoC :).

    July 18, 2015 10:01 AM

    Andres Vargas Gonzalez

    Improvements to the RendererKivy

    The main improvements to the Renderer involved running the backend_kivy test with as many examples as possible. From the test we figured out there were some implementations missing. The following are the main improvements made to the Renderer this week:

    • The rendering of the text is not blurry anymore. This was fixed by working with int values instead of floats; it seems like opengl was trying to give some percentage of the pixel value to the previous and next pixels, giving this effect.
    • Math text such as integrals, derivatives and so on can be added to the figure canvas. Matplotlib provides a very useful math text parser which receives a string and returns an object that can be embedded on the kivy canvas.
    • Images as textures can be added inside a Figure and outside as well. Matplotlib allows generating some interpolation graphs which can be inserted into the axes to give a fancy visualization of data. Since color in kivy’s rendering is multiplicative, you want the color to be white or the texture will be ‘tinted’, as it will be multiplied by something other than a 1.0 color channel at each pixel. Initially the problem was that the texture was invisible, but it was solved by changing the Color to white as explained before. When a figure is not present the image is added directly to the canvas as a texture.

    Some examples can be seen in the pictures below. Additionally there are two known problems: the first is that some rendering falls outside the axes, and the second is related to path and mesh creation. These problems can be seen specifically in the integral example and the electric dipole example with the arrows and magnetic fields.

    gradient bar imshow without figure

    imshow inside axes math text rendering ggplot style applied wrong rendering of arrows

    by andnovar at July 18, 2015 04:16 AM

    Ambar Mehrotra
    (ERAS Project)

    GSoC 2015: 5th Biweekly Report

    Hi everyone, there were two major features that I worked on during the past two weeks.

    Multiple Attributes per leaf: I devoted most of my time during the past two weeks to the implementation of this feature. As I have mentioned in earlier blog posts, leaves represent the data sources, i.e., sensor devices directly interfaced with the Tango Bus. These include servers like Aouda, Health Monitor, etc.
    Each of these servers can have multiple attributes. For example, an aouda server keeps track of several things like:
    • Air Flow
    • Temperature
    • Heart Rate
    • Oxygen Level
    In a similar way, there can be multiple servers having multiple attributes. This feature involved adding support for all the attributes provided by a server. I achieved this by making a specific attribute from a specific server a node instead of the entire server.

    Multiple Summaries per branch: As mentioned in previous blog posts, a summary represents the minimum/maximum/average value of the raw data coming in from the various children. This feature aims at adding support for multiple summaries for a branch. For example:
    • A user will be asked to name a summary.
    • Select nodes from its children that he wants to keep track of in that summary.
    • Select the summary function - Minimum/Maximum/Average.
     I started implementing this feature in the late part of the previous week and will continue working on this for the coming week. After that I am planning to move on to the implementation of alarms.
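    A minimal sketch (all names are hypothetical) of the branch-summary idea described above: pick a summary function and apply it to the values of the selected children.

```python
# Hypothetical branch summary: name a summary, choose child values to
# track, and pick the summary function.
SUMMARY_FUNCS = {
    'minimum': min,
    'maximum': max,
    'average': lambda vs: sum(vs) / len(vs),
}

def summarize(summary_name, child_values, func='average'):
    """Return (summary name, summarized value) for a branch."""
    return summary_name, SUMMARY_FUNCS[func](child_values)

print(summarize('aouda_temps', [36.2, 36.9, 37.4], 'maximum'))
# -> ('aouda_temps', 37.4)
```
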

    Happy Coding.

    by Ambar Mehrotra ( at July 18, 2015 04:14 AM

    Andres Vargas Gonzalez

    Navigation Toolbar for #kivympl using an Action Bar

    After my last post I started the implementation of a Navigation Toolbar for the backend. For the kivy Navigation Toolbar we decided to experiment with an ActionBar from the set of kivy widgets. It is a very early version and not fully implemented, but this is a snippet of the layout, and the result can be seen in the image below. The only elements being used for now are the icons for the action buttons, so we can get an idea of how it will look. The behavior of each of the items will be defined in the callbacks. NavigationToolbar2Kivy extends NavigationToolbar2, which is the base class.

    def _init_toolbar(self):
            basedir = os.path.join(rcParams['datapath'], 'images')
            actionbar = ActionBar(pos_hint={'bottom': 1.0})
            actionview = ActionView()
            actionbar.add_widget(actionview)
            for text, tooltip_text, image_file, callback in self.toolitems:
                if text is None:
                    # insert a separator
                    continue
                fname = os.path.join(basedir, image_file + '.png')
                action_button = ActionButton(text=text, icon=fname)
                actionview.add_widget(action_button)
            self.canvas.add_widget(actionbar, canvas='after')

    Navigation Toolbar Kivy MPL

    by andnovar at July 18, 2015 03:50 AM

    Shivam Vats

    GSoC Week 8

    This week I started work on writing the series module for symengine. I am working with Sumith on Polynomials in PR 511. It is still a work in progress. A lot of the code, especially related to Piranha, was new to me, so I had to read a lot. I got PR 533 merged, which uses templates to simplify code for equality and comparison of maps.

    With regard to PR 9614, I had a meeting with my mentor Thilina. We decided that we should be able to call the RingSeries function on SymPy Basic expressions so that the user need not bother about creating rings and all the extra function calls. So, I will be building on top of the classes I created earlier to make the series method accept Basic expressions. I created RingFunction, RingMul and RingAdd classes. The taylor_series method now works with sums and products of functions. However, it is still not optimised, especially with series that have a low minimum exponent (say cos). I need to find a better way.

    I returned to college. Once classes begin from Monday I will need to manage my time well as my schedule will get busier.

    Next Week

    • Finish the Polynomials PR.

    • Further optimise the taylor_series method to work in all cases.

    • Start writing a series method that works with Basic expressions.

    I think it is good to have the flexibility to work with Basic objects as that integrates the ring_series module better with the existing series infrastructure of SymPy. At the end of the day, we want the maximum number of people to benefit from using the code.


    July 18, 2015 12:00 AM

    July 17, 2015

    Jazmin's Open Source Adventure

    PR's left and right

    Hey, everybody!

    It's been a while since I made a big post--$h!t got crazy what with finding a new apartment to call home, packing all of the books I've accumulated over the years, etc. It has been an extremely productive last couple of weeks, though, with a couple bits of code merged into the main project branch, and a few others sitting in PR's.

    So let's rap.

    Mass of air? Para-what angle??

    As mentioned in a previous post, I've been working on some plotting functions for my Google Summer of Code project, part of the Astroplan project. 

    Code says what?

    After testing preliminary code in some IPython notebooks, I transferred this code to .py files.  An IPython notebook is essentially a frozen interactive programming session--you can write, modify and run code from a web browser and immediately see any results (i.e., plots, numerical output, etc.), but unlike a normal interactive session in IPython, you can save your code AND output.  A file that ends in .py is just code--no output--and so it's more efficient to store bits of code for a complex project in this type of file.  Other bits of code, either in IPython notebooks or .py files, can call the code you save in a .py file, which may contain descriptions of classes, functions and other objects (see my post here).

    So, when you've got very preliminary code, it's nice to have it in IPython notebooks, but once it's (mostly) in working order, you need to transfer it to a .py file and put it in the appropriate place in your copy of the repository you're working on.  The usual place to put your Python code is in the source code sub-directory, which tends to have the same name as your project.  For instance, our project, Astroplan, has a root directory, astroplan.  Inside this directory is another one, also called astroplan, and this is where we put our source code. 

    Dem files!

    Most projects will have even more subdirectories, each one containing source code for a particular aspect.  In order for all this code to communicate with each other, each source directory (including the main one) has to have an __init__.py file.

    When a module such as Astropy is being used, __init__.py files help communicate to your Python installation where to look for useful functions, objects, etc., and to make sure there's no confusion about which .py file contains the source code.  This is why you can have two modules both containing a function with the exact same name--Python module importing conventions (e.g., "from astropy.coordinates import EarthLocation") plus the __init__.py files make sure everything stays organized.
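A minimal, self-contained demonstration of these mechanics; the package and function names below are invented for the example, not Astroplan's real layout:

```python
import importlib
import os
import sys
import tempfile

# Build a tiny package on disk: a directory becomes importable as a
# package only once it contains an __init__.py file.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "miniplan")              # hypothetical project name
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "plots.py"), "w") as f:
    f.write("def plot_airmass():\n    return 'plotted'\n")

# With the root on sys.path, the usual import conventions now work.
sys.path.insert(0, root)
plots = importlib.import_module("miniplan.plots")
print(plots.plot_airmass())
```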

    The plot increases in viscosity--yet again!*

    What all the above meant for me was that I had to figure out how all this worked before I could use my newly-minted plotting functions.  When you're working on a development copy of a software package, you don't really want to install it in the usual way (if you even have that option).  You'll want to do a temporary, "fake" installation of the code you do have so that you can test it (python build, anyone?).  Sometimes this means you'll have to take the extra step of informing your current Python/IPython session where this installation lies. 

    Plots or it didn't happen

    The plotting functions for airmass and parallactic angle went through several iterations, and had to wait for some PRs from Brett to get merged in order to use our project's built-in airmass and parallactic angle functions.  My PR containing the plot_airmass and plot_parallactic functions finally merged recently--check it out!  It also contains some IPython notebook examples on the usage of these, which will eventually migrate to our documentation page.

    Airmass vs. Time for 3 targets as seen from Subaru Telescope on June 30, 2015.

    Parallactic Angle vs. Time for the same three targets.  Polaris would make a horrible target.

    You may notice that the sky plot is missing here--due to technical issues, I moved it to a separate PR.  It's unfinished, and hopefully my mentors will have some suggestions *cough, cough* as to how to figure out the funky grid stuff.

    by Jazmin Berlanga Medina ( at July 17, 2015 08:57 PM

    Siddharth Bhat

    Math Rambling - Dirac Delta derivative

    I’ve been studying Quantum Mechanics from Shankar’s Principles of Quantum Mechanics recently, and came across the derivative of the Dirac delta function that had me stumped.

    $ \delta'(x - x') = \frac{d}{dx} \delta(x - x') = -\frac{d}{dx'} \delta(x - x') $

    I understood neither what the formula represented, nor how the two sides are equal.

    Thankfully, some Wikipedia and Reddit (specifically /r/math and /u/danielsmw) helped me find the answer. I’m writing this for myself, and so that someone else might find this useful.


    I will call $\frac{d}{dx} \delta(x - x')$ the first form, and $-\frac{d}{dx'} \delta(x - x')$ the second form.

    Breaking this down into two parts:

    1. show what the derivative computes
    2. show that both forms are equal

    1. Computing the derivative of the Dirac Delta

    Since the Dirac Delta function can only be sensibly manipulated in an integral, let’s stick the given form into an integral.

    $$ \delta'(x - x') = \frac{d}{dx} \delta(x - x') \\ \int_{-\infty}^{\infty} \delta' (x - x') f(x') dx' \\ = \int_{-\infty}^{\infty} \frac{d}{dx} \delta(x - x') f(x') dx' \\ $$ Writing out the derivative explicitly by taking the limit, $$ = \int_{-\infty}^{\infty} \lim_{h \to 0} \; \frac{\delta(x - x' + h) - \delta(x - x')}{h} f(x') dx' \\ = \lim_{h \to 0} \; \frac{ \int_{-\infty}^{\infty} \delta((x + h) - x') f(x') dx' - \int_{-\infty}^{\infty} \delta(x - x') f(x') dx'}{h} \\ = \lim_{h \to 0} \; \frac{f(x + h) - f(x)}{h} \\ = f'(x) $$

    Writing only the first and last steps,

    $$ \int_{-\infty}^{\infty} \delta' (x - x') f(x') dx' = f'(x) $$

    This shows us what the derivative of Dirac delta does. On being multiplied with a function, it “picks” the derivative of the function at one point.
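This "picking" property can be sanity-checked numerically by replacing the delta with a narrow Gaussian (a nascent delta); the width eps and the test function below are arbitrary choices for the check, not part of the derivation:

```python
import math

def delta_prime(u, eps):
    # Derivative of a Gaussian nascent delta of width eps:
    # d/du [exp(-u^2 / (2 eps^2)) / (eps * sqrt(2 pi))]
    return -u / (eps**3 * math.sqrt(2 * math.pi)) * math.exp(-u**2 / (2 * eps**2))

def pick_derivative(f, x, eps=1e-2, n=20000):
    # Trapezoidal approximation of the integral of delta'(x - x') f(x') dx'
    # over x' in a window around x (the Gaussian is negligible outside it).
    a, b = x - 10 * eps, x + 10 * eps
    h = (b - a) / n
    total = 0.5 * (delta_prime(x - a, eps) * f(a) + delta_prime(x - b, eps) * f(b))
    for i in range(1, n):
        xp = a + i * h
        total += delta_prime(x - xp, eps) * f(xp)
    return total * h

# The integral should approximate f'(x): here f = sin, so it approaches cos(0.5).
print(pick_derivative(math.sin, 0.5), math.cos(0.5))
```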

    2. Equivalence to the second form

    We derived the “meaning” of the derivative. Now, it’s time to show that the second form is equivalent to the first form.

    Take the second form of the delta function as the derivative, $$ \delta'(x - x') = - \frac{d}{dx'} \delta(x - x') \\ \int_{-\infty}^{\infty} \delta' (x - x') f(x') dx' \\ = \int_{-\infty}^{\infty} - \frac{d}{dx'} \delta(x - x') f(x') dx' \\ $$ Just like the first time, open up the derivative with the limit definition $$ = \int_{-\infty}^{\infty} \lim_{h \to 0} \; - \left( \frac{\delta(x - (x' + h)) - \delta(x - x')}{h} \right) f(x') dx' \\ = \lim_{h \to 0} \; - \frac{ \int_{-\infty}^{\infty} \delta((x - h) - x') f(x') dx' - \int_{-\infty}^{\infty} \delta(x - x') f(x') dx'}{h} \\ = \lim_{h \to 0} \; - \frac{f(x - h) - f(x)}{h} \\ = \lim_{h \to 0} \; \frac{f(x) - f(x - h)}{h} \\ = f'(x) $$


    That shows that the derivative of the Dirac delta function has two equivalent forms, both of which simply “pick out” the derivative of the function it’s operating on.

    $$ \delta'(x - x') = \frac{d}{dx} \delta(x - x') = -\frac{d}{dx'} \delta(x - x') $$

    Writing it with a function to operate on (this is the version I prefer):

    First form:

    $$ \int_{-\infty}^{\infty} \delta' (x - x') f(x') dx' = \\ \int_{-\infty}^{\infty} \frac{d}{dx}\delta(x - x') f(x') dx' = \\ f'(x) $$

    Second form:

    $$ \int_{-\infty}^{\infty} \delta' (x - x') f(x') dx' = \\ \int_{-\infty}^{\infty} -\frac{d}{dx'}\delta(x - x') f(x') dx' = \\ f'(x) $$

    A note on notation

    In a violent disregard for mathematical purity, one can choose to abuse notation and think of the above transformation as -

    $$ \delta'(x - x') = \delta(x - x') \frac{d}{dx} $$

    We can write it that way, since one can choose to think that the delta function transforms

    $$ \int_{-\infty}^{\infty} \delta'(x - x')f(x') dx' \to \\ \int_{-\infty}^{\infty} \delta(x - x')\frac{d}{dx}f(x')dx' = \\ \int_{-\infty}^{\infty} \delta(x - x') f'(x') dx' = \\ f'(x) $$

    The original forms and the rewritten one are equivalent, although the original is “purer” than the other. Which one to use is up to you :)

    So, to wrap it up:

    $$ \delta'(x - x') = \frac{d}{dx} \delta(x - x') = -\frac{d}{dx'} \delta(x - x') = \delta(x - x') \frac{d}{dx} $$

    July 17, 2015 02:49 PM

    Vito Gentile
    (ERAS Project)

    Enhancement of Kinect integration in V-ERAS: Fourth report

    This is my fourth report on what I have done for my GSoC project. If you don’t know what it is about and want to find more information, please refer to this page and this blog post.

    During the past two weeks I have worked mainly on two issues: finalizing a first user’s step estimation algorithm, and supporting data analysis during and after the training session for AMADEE’15.

    For what concerns the user’s step estimation, I have implemented this algorithm, which uses skeletal data obtained from the Kinect to estimate the user’s rotation and walked distance every time a new skeletal frame is tracked. Then a Tango change-event on the moves attribute is fired, and any other Tango module can subscribe to this event in order to use this data and implement user navigation. This whole idea will be tested by using a module that Siddhant is writing, which will use the estimated user movements to animate a rover on the (virtual) Mars surface.
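As a rough illustration of the idea (the joint layout, axes, and formulas below are my own simplification, not the actual ERAS algorithm), per-frame rotation and walked distance can be estimated from two consecutive skeletal frames like this:

```python
import math

# Joints are (x, y, z) tuples; y is up, so movement happens in the x-z plane.
def estimate_move(prev_torso, torso, prev_shoulders, shoulders):
    # Walked distance: horizontal displacement of the torso joint.
    distance = math.hypot(torso[0] - prev_torso[0], torso[2] - prev_torso[2])

    # Rotation: change in the heading of the left-to-right shoulder line.
    def heading(left, right):
        return math.atan2(right[2] - left[2], right[0] - left[0])

    rotation = heading(*shoulders) - heading(*prev_shoulders)
    return distance, rotation

dist, rot = estimate_move(
    (0.0, 1.0, 0.0), (0.1, 1.0, 0.1),             # torso moved diagonally
    ((-0.2, 1.4, 0.0), (0.2, 1.4, 0.0)),          # shoulders before
    ((-0.2, 1.4, 0.02), (0.2, 1.4, -0.02)),       # shoulders turned slightly
)
print(dist, rot)
```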

    I have also worked to support a training session for the AMADEE’15 mission, which took place in Innsbruck and was organized by the Austrian Mars Society. During this training session, the Italian Mars Society was there to test their V-ERAS system. What I did was, firstly, to configure two Windows 7 machines to be able to execute the new Python-based body tracker. For this purpose we used TeamViewer for remote control of the PCs. After that, we noticed a strange issue which did not allow us to use the new body tracker, due to some strange Tango error (we are going to report this to the Tango community). To overcome this annoying and unexpected problem, the old body tracker (written in C# and still available in the ERAS repository) was used.

    I have also written some scripts to support Yuval Brodsky and the other IMS team members in evaluating the effects of virtual reality on the neurovestibular system. To do this, I wrote a first script to get the positions of the head and torso skeletal joints from the body tracker, and a second script to convert this data to .xlsx format (to be used by Yuval in data analysis). This allowed me to learn how to use openpyxl, a very easy-to-use and powerful Python module for writing .xlsx files. To get a feel for it, take a look at this sample code:

    from openpyxl import Workbook
    wb = Workbook()
    # grab the active worksheet
    ws = wb.active
    # Data can be assigned directly to cells
    ws['A1'] = 42
    # Rows can also be appended
    ws.append([1, 2, 3])
    # Python types will automatically be converted
    import datetime
    ws['A2'] = datetime.datetime.now()
    # Save the file
    wb.save("sample.xlsx")

    The scripts I have written for data analysis are not yet in the repository (we are trying to improve them a little bit), and now I have to find a way to include data taken from the Oculus Rift in the same .xlsx file generated from the Kinect data.

    The next step will then be to include some gesture recognition as well, in particular the possibility of identifying whether the user’s hands are open or closed.

    I will keep you updated with the next posts!


    by Vito Gentile at July 17, 2015 11:38 AM


    GSoC Progress - Week 9

    Hello all. The last week has been rough; here's what I could do.


    The printing now works, hence I could test it. Due to that, we could even test both the constructors, the one from hash_set and the other from Basic.

    The Polynomial wrappers PR is the one we need to get in quickly; it is our highest priority.

    We need to make the methods more robust; we plan to get the PR in this weekend.
    Once this is in, Shivam can start writing function expansions.

    I also have a couple of other tasks:

    • Use std::unordered_set so that we can have something even when there is no Piranha as dependency.
    • Replace mpz_class with piranha::integer throughout SymEngine and check out the benchmarks.

    I intend to get Polynomial in this weekend because I get free on weekends :)
    As there are only 3-4 weeks remaining, I need to buck up.

    That's all I have.

    July 17, 2015 12:00 AM

    GSoC Progress - Week 8

    Hello. Short time since my last post. Here's my report since then.


    I have continued my work on the Polynomial wrappers.

    Constructors from hash_set and Basic have been developed and pushed up. Printing has also been pushed. I'm currently writing tests for both, they'll be ready soon.

    When hash_set_eq() and hash_set_compare() were developed, we realised that there were many functions of the *_eq() and *_compare() form with repeated logic; the idea was to templatize them, which Shivam did in his PR #533.

    A solution to the worry of slow compilation was chalked out, which I wish to try in the coming week: using a std::unique_ptr to a hash_set instead of a straight hash_set, so that it is not necessary to know the full definition of hash_set in the header. I've been reading relevant material on this technique, known as the PIMPL idiom.


    * #511 - Polynomial Wrapper

    Targets for Week 9

    I wish to develop the Polynomial wrappers further in the following order.

    • Constructors and basic methods (add, mul, etc.) working with proper tests.
    • Solve the problem of slow compilation times.
    • As mentioned previously, use standard library alternatives to Piranha constructs so that we have something even when Piranha is not available as a dependency.

    Since the institute began, times have been rough. Hoping everything falls into place.

    Oh by the way, SymPy will be present (and represented heavily) at PyCon India 2015. We sent in the content and final proposal for review last week. Have a look at the website for our proposal here.

    That's all this week.

    July 17, 2015 12:00 AM

    Yue Liu

    GSOC2015 Students coding Week 08

    week sync 12

    Last week:

    • Update the doctests for ROP module.
    • Update the doctests for gadgetfinder module.
    • Using LocalContext to get the binary arch and bits.
    • Start coding for Aarch64 support.
    • Try to do some code optimization.

             220462    0.743    0.000    0.760    0.000 :0(isinstance)
             102891    0.413    0.000    0.413    0.000 :0(match)
      116430/115895    0.347    0.000    0.363    0.000 :0(len)
               1119    0.243    0.000    0.487    0.000 :0(filter)
              80874    0.243    0.000    0.243    0.000<lambda>)
              11226    0.117    0.000    0.117    0.000 :0(map)
        12488/11920    0.047    0.000    0.050    0.000 :0(hash)
    • Fix some bugs in rop module.

    Next week:

    • Coding for Aarch64.
    • Optimize and fix potential bugs.
    • Add some doctests and pass the example doctests.

    July 17, 2015 12:00 AM

    July 16, 2015

    Siddhant Shrivastava
    (ERAS Project)

    Streamed away (in Real-Time)!

    Hi! This post is all about Video Streaming and Cameras :-) If you've wondered how services like YouTube Live or work, then this post is for you. After the Innsbruck experiments and Remote tests in Telerobotics, it was time for me to create a full-fledged Real Time Video Streaming solution for the ERAS project. After a lot of frustration and learning, I've been able to achieve the following milestones -

    1. Stream losslessly from a single camera in real-time to a Blender Game Engine instance.
    2. Create example Blender projects to test multiple video sources streaming over a network.
    3. Record a live stream from a stereoscopic camera into a side-by-side video encoded on the fly.

    It's going to be a very long post as I've been playing around with lots of video streaming stuff. All this experience has turned me into a confident Multimedia streamer.

    Why am I doing this?

    Integrating Augmented and Virtual Reality requires one to know the nitty-gritty of Multimedia Streaming. This week was spent in learning and tinkering with the various options provided by FFmpeg and Video4Linux2. One of the aims of the Telerobotics project is to allow streaming of Rover Camera input to the Astronaut's Head-Mounted Device (Minoru 3D camera and Oculus Rift in my case). The streamed video has multiple uses -

    1. It is used by the various Tango servers (Planning, Vision, Telerobotics, etc) and processed to obtain Semantic relationships between objects in the Martian environment.
    2. The video, in addition to the LIDAR and other sensing devices, forms the interface to the human world in the ERAS habitat on Mars. The video stream provides a window to Mars.
    3. The real-time stream helps the astronaut and the simulated astronaut to guide the rover and the simulated rover around on Mars.
    4. Streaming is an integral component of both ERAS and V-ERAS which we at the Italian Mars Society are currently working on.

    Initial Impressions

    When I started with 3D streaming, it appeared easy. "I did it with a single camera; two cameras can't be a huge deal, right!" I had never been so wrong. I found myself stuck in the usual embedded-device versus Linux-kernel-interface struggle -

    • The hardware of desktop machines is unsuitable for streaming applications.
    • The kernel is not configured to use multiple webcams.
    • This results in lots of memory-related errors - insufficient memory, rt_underflow.

    To tweak the Minoru and strike an optimal settings agreement with this cute little stereo camera, I began to dig into the core software components involved -

    Video4Linux2 saves the day!

    Video4Linux is an important driver framework which makes it possible for Linux users to use video capture devices (webcams and streaming equipment). It supports multiple features. The ones that this project is concerned with are -

    • Video Capture/Output and Tuning (/dev/videoX, streaming and control)
    • Video Capture and Output overlay (/dev/videoX, control)
    • Memory-to-Memory (Codec) devices (/dev/videoX)

    These slides by Hans Verkuil (Cisco Systems) are an informative entry point for understanding how Video4Linux works.

    The different Streaming Modes supported by Video4Linux are -

    • Read/Write (Supported by Minoru)
    • Memory Mapped Streaming I/O (Supported by Minoru)
    • User Pointer Streaming I/O
    • DMA (Direct Memory Access) Buffer Streaming I/O

    The take-away from Video4Linux is understanding how streaming works. So a Stream requires the following - queue setup, preparing the buffer, start streaming, stop streaming, wait to prepare, wait to finish, compression and encoding of the input stream, transmission/feeding on a channel, decompression and decoding the received stream, and facilities for playback and time-seek.

    The Qt frontend to v4l2 made me realize where the problem with the camera lay -

    Qv4l2 Minoru

    The video4linux2 specification allows for querying and configuring everything about video capture cards. The nifty command-line utility v4l2-ctl is a lifesaver while debugging cameras.

    For instance, with the stereo camera connected, `v4l2-ctl --list-devices` gives -

    Vimicro USB2.0 PC Camera (usb-0000:00:14.0-1.1):
    Vimicro USB2.0 PC Camera (usb-0000:00:14.0-1.4):
    WebCam SC-13HDL11939N (usb-0000:00:1a.0-1.4):
    and `v4l2-ctl --list-frameintervals=width=640,height=480,pixelformat='YUYV'` gives -


            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.067s (15.000 fps)

    This means that I have to use one of these settings for getting input from the camera, and then transcode it into the desired stream characteristics.

    Knowing your stereoscopic Camera


    VLC carefully configured to stream the left and right Minoru cameras.

    Minoru 3D webcam uses the following Color Spaces -

    1. RGB3
    2. YU12
    3. YV12
    4. YUYV
    5. BGR3

    Explanations ahead...

    When colors meet computers and humans

    Color spaces are models of 'color organization' that enable reproducible representations of color in different media (analog, digital). Color is a subjective human visual-perceptual property. Recursing these definitions on Wikipedia took me back to middle school. Color is a physical (observable and measurable) property, but the way we humans see it is not the same as the way color-sensing photodiodes see it or computer monitors reproduce it. Translating color from one base to another requires a data structure known as a color space, and the signals from the webcam are encoded into one of these color spaces.

    Just in case you're wondering - the YUV model describes colors in terms of a luma (luminance) component and two chrominance components (U and V). The 2-D UV plane can describe all colors, and YUV can be converted into RGB and vice versa. The YUV422 data format shares U and V values between two pixels; as a result, these values are transmitted to the PC image buffer only once for every two pixels, resulting in an average transmission rate of 16 bits per pixel. Capturing in the YUV 4:2:2 format is more efficient than RGB formats, whereas color reproduction on a pixel array is more convenient via RGB.

    For the purposes of video streaming from a stereo camera system like the Minoru, using an RGB color space is the best option because it results in faster performance with a codec like MJPEG (Motion JPEG), which is the final requirement for the Blender Game Engine stream. I hope this theoretical explanation sufficiently describes the challenge I've been trying to crack.
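To make the YUV-to-RGB relationship concrete, here is a single-pixel conversion sketch using the common BT.601 full-range coefficients; the exact matrix depends on the camera's colorimetry, so treat these constants as an assumption:

```python
def yuv_to_rgb(y, u, v):
    # BT.601 full-range conversion; inputs and outputs are 8-bit values,
    # with the chroma components centered at 128.
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(r), clamp(g), clamp(b)

# A pixel with neutral chroma stays gray.
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```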

    FFmpeg built with v4l2-utils support is used for the Stereo Streaming.

    Experiments with Blender

    I tried capturing the two video devices directly from the Blender Game Engine application. It was a good experience learning about creating basic Blender Games.

    Blender Game

    The workflow to this end was -

    • Create two Cube Meshes
    • Enable GLSL shading mode
    • Set Object Shading to Shadeless to enhance brightness
    • Add Image Textures to both images
    • Add a sensor that is triggered to True always.
    • Add a Python script controller corresponding to each sensor.
    • The script to control the right camera of the stereo system is -
    import VideoTexture
    import bge

    contr = bge.logic.getCurrentController()
    obj = contr.owner
    if not hasattr(bge.logic, 'video'):
        matID = VideoTexture.materialID(obj, 'IMimage.png')
        bge.logic.video = VideoTexture.Texture(obj, matID)
        bge.logic.video.source = VideoTexture.VideoFFmpeg("/dev/video2", 0)
        bge.logic.video.source.scale = True
        bge.logic.video.source.flip = True
        bge.logic.video.source.framerate = 0.2
        bge.logic.video.source.repeat = -1
    print("In Video 2 fps: ", bge.logic.video.source.framerate)

    But it turns out the Blender Game Engine does not provide extensive video device control; it relies on the default settings provided by Video4Linux. Since the Minoru camera is unable to stream both camera outputs at 30 frames per second, Blender simply gives in and compromises by playing the first camera output that it receives. Video4Linux simply reports insufficient memory for the other stream.

    The output could only support one camera at a time - Blender cameras

    The BGE documentation is ambiguous about the use of the VideoTexture command for controlling webcam devices.

    It was an exciting learning experience about contemporary game design nevertheless. The take-away was that Blender Game Engine is unable to handle cameras at the hardware level. Network Streaming with FFmpeg was the only option.

    FFmpeg - the one-stop-shop for Multimedia

    My search for the perfect tool for streaming ended with FFmpeg. It amazes me how versatile this software is. Some people even call it the Swiss-army knife of Internet streaming. So I had to basically work with Streams. Streams are essentially Multimedia resources which are identified with the help of a Media Resource Locator (MRL). A combination of ffmpeg and ffserver is what I used to achieve the desired results. The stereoscopic stream produced will be used by multiple applications-

    1. Streaming to the Head-Mounted Device (currently Oculus Rift)
    2. Processing Martian environment's video.
    3. View in the ERAS application from ground control.

    Why FFmpeg?

    • It is fast, reliable, and free.
    • It provides a complete solution from streaming and transcoding to media playback, conversion, and probe analysis.

    Quoting from its documentation -

    ffmpeg reads from an arbitrary number of input "files" (which can be regular files, pipes, network streams, grabbing devices, etc.), specified by the -i option, and writes to an arbitrary number of output "files", which are specified by a plain output filename. Anything found on the command line which cannot be interpreted as an option is considered to be an output filename.

    I tinkered with loads of ffmpeg options and created a lot of useful junkcode. The good thing about GSoC is that it makes you aware of the open-source influences out there. Throughout this work on streaming, I was motivated by the philosophy of Andrew Tridgell who says that "junkcode can be an important learning tool".

    ffmpeg -f v4l2 -framerate 15 -video_size 640x480 -i /dev/video1 outp1.mp4 -framerate 15 -i /dev/video2 outp2.mp4

    This resulted in a steady video stream -

    A sample of three different frames at

    frame= 1064 fps= 16 q=27.0 q=27.0 size=631kB time=00:01:07.06
    frame= 1072 fps= 16 q=27.0 q=27.0 size=723kB time=00:01:07.60
    frame= 1079 fps= 16 q=27.0 q=27.0 size=750kB time=00:01:08.06

    Learning about the ffmpeg-filters made this experience worthwhile. I was not able to overlay videos side-by-side and combine them in real-time. This is the script that I used -

    ffmpeg -s 320x240 -r 24 -f video4linux2 -i /dev/video1 -s 320x240 -r 24 -f video4linux2 -i /dev/video2 -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg];[1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" -c:v libx264 -crf 23 -preset medium -movflags faststart nerf.mp4

    It basically tells ffmpeg to use a resolution of 320x240 and 24 fps for each of the camera devices and apply an overlay filter to enable side-by-side video output. PTS-STARTPTS allows for time synchronization of the two streams and the presets enable efficient encoding.

    I shot a video using the Minoru video camera. After applying the Overlay filter, I got a nice video with the Left and Right video streams arranged side-by-side. In this screenshot, I am pointing my little brother's Nerf guns towards each of the Minoru's two cameras -

    Minoru Nerf Gun

    I can experiment with the Stereoscopic anaglyph filters to extend it to a single-screen 3D live stream. But the present task involves streaming to the Oculus Rift which is what I'll be working on next. In addition to ffmpeg, I also made use of ffserver and ffplay in my Streaming workflow. These have been explained in a previous post.

    Experiments with v4l2stereo

    Working with stereoscopic cameras is atypical of a traditional computer vision workflow. Each of the cameras requires calibration in order for range-imaging applications like depth maps and point clouds to work. I calibrated my camera using the excellent v4l2stereo tool.

    Here are some screenshots -

    Minoru Calibration

    Basic Feature detection -

    Minoru Calibration

    Closing remarks

    This was a very hectic couple of weeks. The output I produced pales in comparison to the tinkering that I had been doing. I'll be using all the important scripts that did not make it to the final repository in the documentation so that future students won't have to wade through the insurmountable learning curve of Multimedia Streaming. All the work regarding this can be found here. I realized the overwhelming importance of IRC channels when I got help from #ffmpeg and #v4l2 channels when I was stuck with no end in sight. I gathered a GREAT DEAL of experience in Video Streaming which I hope will go a long way.

    This has been one giant bi-weekly report. Thank you for reading. Ciao!

    by Siddhant Shrivastava at July 16, 2015 07:53 PM

    July 15, 2015

    Siddharth Bhat

    Websites, Editors, Religion

    First things first - I moved this from to. I’ve had this domain for around a year now, but I never got around to doing anything with it. The excuse that I was using was that “there is no space on my EC2 instance”, which makes no real sense when you think about it, since it was a fresh off-the-shelf Ubuntu install. It turns out that nginx had written an error log file at /etc/nginx/error.log that was a whopping 6.4 GB. Deleting that single file solved the no-space-left-to-hide problem.

    Next came the HTML, CSS and all the other shiny aspects of running a website. Having messed around with Hugo, which is a static site generator written in Go, I decided to use it. I really, really dig Hugo. It’s simple, fast, and not at all like Jekyll with its byzantine settings. I’m pleased with it, and it looks pretty as well!

    Now that we’ve completed the Websites part of the title, let’s move on to editors. I’ve been a Vim person ever since I switched to using Linux. This was a combination of two factors - the first was the peer pressure that I “had to learn vim”. The second (arguably more important) factor was that the only proper C++ development environment I’ve been able to get up and running was Vim + YouCompleteMe, a fantastic plugin by Valloric (Val Markovic).

    Recently, I’ve taken a liking towards Haskell. Unfortunately, Haskell’s state of affairs when it comes to tooling is pretty terrible. The only stable environment that exists is haskell-mode for Emacs.

    So, I set it up. Color me surprised as all hell, but I dig Emacs. It’s slick, generally fast, and honestly awesome. The fact that I can browse the filesystem using Dired, use git with Magit (which is, by the way, a saner git interface than git itself), start up Python REPLs with excellent autocompletion, and use all sorts of other nice features is enjoyable as hell.

    I think the major mistake I made previously was to immediately install Evil mode, which is a Vi emulation layer for Emacs. I guess that insulated me from the “real Emacs” while making it easy to hate, since the two don’t fit perfectly.

    I hope I’ll stick around with Emacs, since it’s this really nice environment to use. In fact, I’m writing this in Emacs. Shout out to Chopella sir who asked me to try out Emacs for the first time!

    July 15, 2015 11:07 PM

    Gsoc Vispy Week 5

    I’m writing this blogpost since we’ve come close to hitting a milestone in Vispy’s progress - the scenegraph changes are getting merged! So, now, I can move on to phase 2 - building the plotting infrastructure that Vispy needs.

    Also, I’ve moved the blog from using handrolled html/css/js to Hugo + a standard theme. While not as fun, this is definitely more maintainable, I’ve got to admit.

    Most of the work for the Scenegraph update is done now. It just needs a little bit of spit and polish to make sure that everything works fine.

    The final Visual that was blocking, the ColorBarVisual was ported, with much joy all around.

    Along with that, a nice side-effect of the new SceneGraph system meant that we could implement per-fragment Phong lighting for Vispy. There’s still a bug lurking in that piece of code that messes up lighting calculations on the edges, but it works otherwise. I’ll be spending some time tracking that down.

    The first order of business is to get ColorBarVisual onto the high-level vispy.plot API. I’ve started investigating how the plotting side of Vispy works.

    There are also a lot of things that may be outside the scope of my summer project, but I’d like to continue contributing to see them through. Automatic palette generation, better theming support for Vispy, and working on the IPython and web backend parts of Vispy are all things that I want to work on.

    Adios, and see everyone next time!

    July 15, 2015 09:07 PM

    Chau Dang Nguyen
    (Core Python)

    Week 7

    Hi everyone

    In the past few weeks, I had my schedule screwed up, so I didn't have a new post.

    I have made many improvements to the rest module. One of them is adding filtering and pagination, so a user can get a filtered list of the data they want. For example, issue?where_status=open&where_priority=normal,critical&page_size=50&page_index=0 will return the first 50 issues which have status "open" and priority "normal" or "critical".

    Users can also request pretty output from the server by adding 'pretty=true'. Pretty output is indented with 4 spaces.
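    As a sketch of how such a query string can be interpreted (the helper below is illustrative, with an invented default page size — it is not the actual Roundup REST code):

```python
from urllib.parse import parse_qs

def parse_rest_query(query_string):
    """Split a query string into filters, pagination and formatting options."""
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    # where_<field>=a,b means: keep issues whose <field> is a or b
    filters = {
        key[len("where_"):]: params[key].split(",")
        for key in params if key.startswith("where_")
    }
    page_size = int(params.get("page_size", 25))   # invented default
    page_index = int(params.get("page_index", 0))
    pretty = params.get("pretty", "false") == "true"
    return filters, page_size, page_index, pretty
```

    For the example above, this yields the filters {'status': ['open'], 'priority': ['normal', 'critical']} with a page size of 50.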

    Another improvement is a routing decorator, so people can easily add new functions to the tracker, Flask-style:

    @Routing.route("/hello", 'GET')
    def hello_world(self, input):
        return 200, 'Hello World'

    Unit tests and basic authentication are also done.

    by Kinggreedy ( at July 15, 2015 12:18 PM

    Yask Srivastava

    GSoC Updates

    Lastly I worked on UserSetting, but most of that work had to be reverted after discussions. I was pointed to this issue, where it is suggested to merge templates with common features.

    The alternative was to use Less mixins. For example, form styling with Bootstrap (a sketch; the mixin call is illustrative):

    form input, textarea {
      .form-control();
    }

    But this resulted in a significant increase in the size of the compiled .css file.

    Lastly I had to resort to editing the form macros to use Bootstrap components.

    So there are now exclusive form macros for themes. While this does increase the codebase slightly, there won’t be any performance issue in site loading.

    But I can’t use Bootstrap nav-tabs; instead I styled the tabs to fit the theme. Here is how it looks:

    Previously I had implemented these tabs with Bootstrap components, but that looked like overkill, since I also had to write separate JavaScript to indicate the * symbol on unsaved forms.

    I also worked on the index page to use Bootstrap components (buttons, pagination, etc.).

    Content inside the footer is now more consistent thanks to a simple trick I learned from the CSS-Tricks blog:

    footer p {
      position: absolute;
      top: 10px;
    }

    Roger occasionally forks my repo to test my work. He noticed a bug: an irregular header collapse in mobile view. I fixed the issue in the last commit.

    For error validations, which looked ugly:

    I used HTML5 validations and pattern matching (for emails, passwords, etc.).

    There was also a slight bummer last week.

    I was using an extension in Mercurial which made numerous commits without my consent. Ajitesh suggested deleting the repo and recommitting all the changes.

    That took some time, but I took it as an opportunity to write more verbose commit messages.


    • Fix broken search (Fixed ✓)
    • Fix footer icons coming almost at the border (Fixed ✓)
    • Fix alignment of the buttons in modernized forms (Fixed ✓)
    • Modernized item history still has old tables (I’ll do it today)
    • Give a border around text input boxes in modernized (✘)
    • Highlight the content in the modernized theme else it looks too much like basic(✘)

    Here is the latest commit I pushed.

    Other Updates

    I was invited to give a talk, Software Development: The Open Source Way, at IIIT-Delhi.

    It was a wonderful experience; I love talking to people and motivating them to use and contribute to open-source software.

    And the response was amazing! A couple of people complimented me personally and requested a link to my slides :)

    Teaching Django in college

    I love Django, and I am currently teaching first-year students of my college Python and Django. This is truly an amazing experience!

    Again, response is pretty good.

    “His words are so motivating that i tend to do whatever he tells us .. he told us to start blogging … so here i am .. writing my first blog.”

    “This Workshop is mentored by Yask Shirivastava ….he is 1 yr elder to us . But seriously is too good , infact better than the final year students”

    “I think I’m particularly doing well in this, but yeah it needs a lot of time and hard work. You may have the best teacher but you only learn when u explore it yourself.”

    Well, that’s what keeps me motivated. I am trying my best not just to teach them the concepts of web development but also to ignite passion in them.

    I also migrated my blog to Octopress 3.0. Migrating was easy, as all my images are hosted on imgur.

    I use a script I wrote which uploads a screenshot to imgur and copies the URL to the clipboard. Very, very convenient. Check it out:
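    For the curious, the upload half of such a script can be sketched against imgur's anonymous-upload API; the client ID is a placeholder and the helper name is made up — this is not the actual script:

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

IMGUR_CLIENT_ID = "your-client-id"  # placeholder: register an app with imgur

def build_upload_request(image_bytes):
    """Prepare an anonymous image upload for the imgur v3 API."""
    payload = urlencode({"image": base64.b64encode(image_bytes).decode("ascii")})
    request = Request("https://api.imgur.com/3/image",
                      data=payload.encode("ascii"))
    request.add_header("Authorization", "Client-ID " + IMGUR_CLIENT_ID)
    return request

# urlopen(build_upload_request(...)) returns JSON whose data["link"] is the
# image URL, which can then be piped to the clipboard (e.g. via xclip/pbcopy)
```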

    July 15, 2015 08:05 AM

    AMiT Kumar

    GSoC : This week in SymPy #7

    Hi there! It's been seven weeks into GSoC and the second half has started now. Here is the progress so far.

      Progress of Week 7

    This week I opened #9628, which is basically an attempt to make solveset more robust, as I mentioned in my last post. The idea is to tell the user about the domain of the returned solution.

    Now, it makes sure that n is positive in the following example:

    In [3]: x = Symbol('x', real=True)
    In [4]: n = Symbol('n', real=True)
    In [7]: solveset(Abs(x) - n, x)
    Out[7]: Intersection([0, oo), {n}) U Intersection((-oo, 0], {-n})

    Otherwise it will return an EmptySet()

    In [6]: solveset(Abs(x) - n, x).subs(n, -1)
    Out[6]: EmptySet()


    Previously, it naively returned both inverts regardless of the sign of n:

    In [12]: solveset(Abs(x) - n, x)
    Out[12]: {-n, n}

    So, for this to happen, we needed to make changes in the invert_real:

    if isinstance(f, Abs):
        # previously: Union(g_ys, imageset(Lambda(n, -n), g_ys))
        return _invert_real(f.args[0],
            Union(imageset(Lambda(n, n), g_ys).intersect(Interval(0, oo)),
                  imageset(Lambda(n, -n), g_ys).intersect(Interval(-oo, 0))),
            symbol)

    So, we applied set operations on the invert to make it return non-EmptySet only when there is a solution.

    Now For more Complex Cases:

    For the following case:

    In [14]: invert_real(2**x, 2 - a, x)
    Out[14]: (x, {log(-a + 2)/log(2)})

    For the invert to be real, we must state that a belongs to the interval (-oo, 2], otherwise the invert would be complex; but no set operation on {log(-a + 2)/log(2)} can constrain a to (-oo, 2].

    Although, it does return an EmptySet() on substituting absurd values:

    In [23]: solveset(2**x + a - 2, x).subs(a, 3)
    Out[23]: EmptySet()

    So, we need not make any changes to the Pow handling in invert_real. It's almost done now, except for a couple of TODOs:

    • Document new changes
    • Add More tests

    Though, I will wait for final thumbs up from @hargup, regarding this.

    from future import plan Week #7:

    This week I plan to complete PR #9628 & get it Merged & start working on replacing old solve in the code base with solveset.

    $ git log

    Below is the list of other PRs I worked on:

      PR #9671 : Simplify ComplexPlane({x}*{y}) to FiniteSet(x + I*y)

      PR #9668 : Support solving for Dummy symbols in linsolve

      PR #9666 : Equate S.Complexes with ComplexPlane(S.Reals*S.Reals)

      PR #9628 : [WIP] Make invert_real more robust

      PR #9587 : Add Linsolve Docs

      PR #9500 : Documenting solveset

    That's all for now, looking forward to week #8. :grinning:

    July 15, 2015 12:00 AM

    July 14, 2015

    Nikolay Mayorov

    Large-scale Least Squares

    I finally made my code available as a PR to scipy. This PR contains all the code, but it was branched from the previous one and focuses on sparse Jacobian support. In this post I’ll explain the approach I chose to handle large and sparse Jacobian matrices.

    Conventional least-squares algorithms require O(m n) memory and O(m n^2) floating point operations per iteration (again, n is the number of variables and m the number of residuals). So on a regular PC it’s possible to solve problems with n, m \approx 1000 in reasonable time, but increasing these numbers by an order of magnitude or two causes problems. These limitations are inevitable when working with dense matrices, but if the Jacobian of a problem is significantly sparse (has only a few non-zero elements), then we can store it as a sparse matrix (eliminating memory issues) and avoid matrix factorizations in the algorithms (eliminating time issues). Here I explain how to avoid matrix factorizations and rely only on matrix-vector products.

    The crucial part of all non-linear least-squares algorithms is finding a (perhaps approximate) solution to a linear least-squares problem (this is what gives the O(m n^2) time asymptotics):

    J p \approx -f.

    As the method to solve it I chose the LSMR algorithm, which is available in scipy. I haven’t thoroughly investigated this algorithm, but conceptually it can be thought of as a specially preconditioned conjugate gradient method applied to the least-squares normal equation, with better numerical properties. I preferred it over LSQR because it appeared much more recently and the authors claim it is more suitable for least-squares problems (as opposed to systems of equations). LSMR requires only matrix-vector multiplications of the form J u and J^T v.
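    As a small illustration of that interface, here is an approximate Gauss-Newton step computed with scipy's lsmr through a LinearOperator, so only the products J u and J^T v are ever used (the sizes and data are arbitrary):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lsmr

m, n = 1000, 200
rng = np.random.RandomState(0)
J = sp.random(m, n, density=0.01, format="csr", random_state=rng)
f = rng.randn(m)

# lsmr never needs J in dense or factored form: matvec/rmatvec suffice
op = LinearOperator((m, n), matvec=J.dot, rmatvec=J.T.dot)
p = lsmr(op, -f)[0]  # approximate solution of J p ≈ -f
```

    Because LSMR monotonically reduces the residual norm starting from p = 0, the computed step never increases ||J p + f|| above ||f||.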

    In large-scale setting both implemented algorithms dogbox and Trust Region Reflective as the first step compute approximate Gauss-Newton solution using LSMR. And then:

    • dogbox operates in the usual way, i.e. this large-scale modification came almost for free.
    • In Trust Region Reflective I apply the 2-d subspace approach to solve the trust-region problem. This subspace is spanned by the computed LSMR solution and the scaled gradient.

    When the Jacobian is not provided by the user, we need to estimate it by finite differences. If the number of variables is large, say 100000, this operation becomes very expensive if performed in the standard way. But if the Jacobian contains only a few non-zero elements in each row (its structure should be provided by the user), then columns can be grouped such that all columns in one group are estimated by a single function evaluation, see “Numerical Optimization”, chapter 8.1. The simple greedy grouping algorithm I used is described in this paper. Its average performance should be quite good: the number of function evaluations required is usually only slightly higher than the maximum number of non-zero elements per row. More advanced algorithms treat this as a graph-coloring problem, but they come down to a simple reordering of columns before applying greedy grouping (so they can perhaps be implemented later).
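    The greedy grouping itself is short enough to sketch (an illustrative version, not the PR's actual code): two columns may share a group, and thus share one function evaluation, only if no row has a non-zero entry in both.

```python
import numpy as np

def greedy_group_columns(structure):
    """Group Jacobian columns for finite differencing.

    structure : (m, n) boolean array, True where J[i, j] may be non-zero.
    Returns an integer array of length n; columns sharing a group number
    can be perturbed together in a single function evaluation.
    """
    m, n = structure.shape
    groups = np.full(n, -1, dtype=int)
    for j in range(n):
        g = 0
        while True:
            members = np.flatnonzero(groups == g)
            # rows already "occupied" by the columns currently in group g
            occupied = structure[:, members].any(axis=1)
            if not (occupied & structure[:, j]).any():
                groups[j] = g
                break
            g += 1
    return groups
```

    A diagonal Jacobian needs only one group (one extra function evaluation), while a fully dense one degenerates to one group per column, matching the standard cost.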

    In the next post I will report results of algorithms in sparse large problems.

    by nickmayorov at July 14, 2015 08:22 PM

    Sartaj Singh

    GSoC: Update Week-7


    • I opened #9639 bringing in the rest of the algorithm for computing Formal Power Series. There are still some un-implemented features. I hope to complete them in this week.

    • A few of my PRs got merged this week (#9622, #9615 and #9599). Thanks @jcrist, @aktech and @pbrady.

    • Opened #9643 for adding the docs related to Fourier Series.


    Plans for next week:

    • Polish #9572 and get it ready to be merged.

    • Complete #9639.

    • Get docs of Fourier Series merged.

    That's it. See you all next week. Happy Coding!

    July 14, 2015 05:11 PM

    Sahil Shekhawat

    GSoC Week 8

    I finished unit tests for all the parts, i.e. the Joints class and all the specific joints, JointsMethod and the Body class. I still have to push one last update to them. Now I will start the actual implementation, because I feel it will bring out many things that unit tests alone can't: at this point I can at best test the interface, functions and some general behavior, and the implementation-dependent tests will be easier to finish once the implementation exists.

    July 14, 2015 04:41 PM

    Vipul Sharma

    GSoC 2015: Coding Period

    Passed my mid term evaluations :) Thanks to my mentors for all their support and guidance.

    I've been working on implementing threaded comments in the ticket modify view. Earlier, comments were created by building message markups for all the comments and concatenating them into the content part of the ticket item; that way, editing or replying to comments was not possible.

    New implementation:
    Each comment is a new item which refers to the itemid of the ticket in which it is created. In this way, it is easy to query all the comments of a particular ticket.

    Initially, I worked on non-threaded comments, then added a feature to reply to comments. A reply is similar to a non-threaded comment, but it has a new field "reply_to" which stores the itemid of the comment being replied to (while "refers_to" still stores the ticket's itemid). After this, I tried to create a tree of all the comments in a ticket, which includes comments, replies to comments, replies to replies and so on.

    Something like :
     [[< object at 0x7f907596d550>, [< object at 0x7f9075963d50>, []]]]

    The list above is the tree of a single comment which has one reply and one reply to that reply.

    More work still has to be done to render the comments in threaded form in the UI. Also, there are some issues in the recursive function I wrote for parsing the comment/reply tree, which I think can be fixed soon.
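    The tree-building step can be sketched recursively; the field names below mirror the post's description but are assumptions about the actual item schema:

```python
def build_tree(comments, parent_id=None):
    """Nest comments by their reply_to field.

    comments : list of dicts with 'itemid' and 'reply_to' keys;
    top-level comments have reply_to set to None.
    """
    return [
        [comment, build_tree(comments, comment["itemid"])]
        for comment in comments
        if comment["reply_to"] == parent_id
    ]
```

    A comment with one reply, which itself has one reply, yields the same nested-list shape shown above.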


    by Vipul Sharma ( at July 14, 2015 03:43 PM

    Sudhanshu Mishra

    GSoC'15: Fourth biweekly update

    During this period we've been able to finish and merge a number of PRs.

    As of now I'm working on reducing autosimplifications based on assumptions from the core.

    That's all for now.

    by Sudhanshu Mishra at July 14, 2015 05:30 AM

    Michael Mueller

    Week 7

    The existing PR is passing Travis tests but hasn't been reviewed yet, so in the meantime I've been working on some performance issues. One huge problem I discovered last week was that `group_by` created a number of Table slices, each of which incurred an expensive deep copy of column indices and contributed to a running time of several seconds. To circumvent the problem, I created a context manager `static_indices`, which can be used like so:
    with static_indices(table):
        subtable = table[[1, 3, 4]]
    In this case `subtable` will have no indices. However, the main issue was with column slicing, which should be a reference rather than a (deep) copy, in keeping with the behavior of `numpy.ndarray` (and thus of `Column`'s internal data). It's lucky this came up, because I had no previous tests checking that modifying a slice affects both the internal data and the indices of the original Table or Column.
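    The ndarray semantics being matched are the usual view semantics:

```python
import numpy as np

data = np.arange(5)
view = data[1:4]   # slicing an ndarray yields a view, not a copy
view[0] = 99
# the write through the view is visible in the original array
assert data[1] == 99
```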

    Aside from this, I've been working on cutting down the time required to initialize an index; the pure-Python loop I previously used was woefully inefficient in both memory and time. I haven't yet figured out how to get this to work with FastRBT, but SortedArray is now much faster to initialize. Here are some time profiling results on a 100,000-row table:
    time_init: 1.2263660431
    time_group: 0.2325990200
    time_where: 0.0003449917
    time_query: 0.0000329018
    time_query_range: 0.0643420219
    time_add_row: 2.7397549152
    time_modify: 0.0001499653

    time_init: 0.0355048180
    time_group: 0.0041830540
    time_where: 0.0000801086
    time_query: 0.0000169277
    time_query_range: 0.0217781067
    time_add_row: 0.0200960636
    time_modify: 0.0016808510

    time_init: 0.0000019073
    time_group: 0.0865180492
    time_where: 0.0002820492
    time_query: 0.0001931190
    time_query_range: 0.2128648758
    time_add_row: 0.0006089211
    time_modify: 0.0000159740
    I've focused on SortedArray quite a bit, so FastRBT is still pretty slow in areas that should be easily fixable; I'll tackle those tomorrow. I have an IPython notebook with these timing results here, and a short Markdown file here.

    by Michael Mueller ( at July 14, 2015 05:06 AM

    July 13, 2015

    Wei Xue

    GSoC Week 6/7

    In weeks 6 and 7, I coded BayesianGaussianMixture for the full covariance type. Now it runs smoothly on synthetic data and the Old Faithful data. Take a peek at the demo.

    from sklearn.mixture.bayesianmixture import BayesianGaussianMixture as BGM
    bgm = BGM(n_init=1, n_iter=100, n_components=7, verbose=2,
              init_params='random', covariance_type='full')

    BayesianGaussianMixture on old-faithful dataset. n_components=6, alpha=1e-3

    The demo repeats the experiment of PRML, page 480, Figure 10.6. VB on BGMM has shown its capability of inferring the number of components automatically, and it converged in 47 iterations.

    The lower bound of the log-likelihood, a.k.a ELBO

    The ELBO looks a little weird. It is not always going up: when some clusters disappear, the ELBO goes down a little, then goes straight up. I think this is because the estimation of the parameters is ill-posed when these clusters have fewer data samples than the number of features.

    BayesianGaussianMixture has many more parameters than GaussianMixture: six parameters per component. It is not easy to manage so many functions and parameters, and the initial design of BaseMixture is also not so good. I took a look at bnpy, which is a more complicated implementation of VB on various mixture models. Though I don't need to go to such a complicated implementation, its decoupling of the observation model (i.e. $X$, $\mu$, $\Lambda$) and the mixture model (i.e. $Z$, $\pi$) is quite nice. So I tried to use Mixin classes to represent these two models. I split MixtureBase into three abstract classes: ObsMixin, HiddenMixin and MixtureBase(ObsMixin, HiddenMixin). I also implemented subclasses for Gaussian mixtures, ObsGaussianMixin(ObsMixin), MixtureMixin(HiddenMixin) and GaussianMixture(MixtureBase, ObsGaussianMixin, MixtureMixin), but Python did not allow me to do this, since there is no consistent MRO. :-| I changed them back, but this unsuccessful experiment gave me a nice base class, MixtureBase.

    I also tried to use cached_property to store intermediate variables such as $\ln \pi$, $\ln \Lambda$ and the Cholesky-decomposed $W^{-1}$, but didn't get much benefit. It is almost the same as saving these variables as private attributes on the instance.

    The numerical issue comes from responsibilities being extremely small. When estimating resp * log(resp), it gives NaN. I simply avoid computing it when resp < 10*EPS. Still, the ELBO seems suspicious.
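    A common alternative to the epsilon cutoff is scipy's xlogy, which defines x·log(x) to be 0 at x = 0 (a generic illustration, not the code in the PR):

```python
import numpy as np
from scipy.special import xlogy

resp = np.array([0.0, 1e-300, 0.5, 0.5])
with np.errstate(divide="ignore", invalid="ignore"):
    naive = resp * np.log(resp)  # 0 * log(0) evaluates to nan
safe = xlogy(resp, resp)         # 0 where resp == 0, resp*log(resp) elsewhere
```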

    The current implementation of VBGMM in scikit-learn cannot learn the correct parameters on old-faithful data.

    VBGMM(alpha=0.0001, covariance_type='full', init_params='wmc',
       min_covar=None, n_components=6, n_iter=100, params='wmc',
       random_state=None, thresh=None, tol=0.001, verbose=0)

    It gives only one components. The weights_ is

     array([  7.31951611e-07,   7.31951611e-07,   7.31951611e-07,
             7.31951611e-07,   7.31951611e-07,   9.99996340e-01])

    I also implemented DirichletProcessGaussianMixture, but currently it looks the same as BayesianGaussianMixture. Both of them can infer the best number of components, and DirichletProcessGaussianMixture took slightly more iterations than BayesianGaussianMixture. If we infer a Dirichlet process mixture by Gibbs sampling, we don't need to specify the truncation level; only alpha, the concentration parameter, is enough. But with variational inference, we still need to give the model the maximal possible number of components, i.e., the truncation level $T$.

    July 13, 2015 09:17 PM

    Isuru Fernando

    GSoC Week 7

    This week I worked on the Sage wrappers and Python wrappers. To make it easier to try out symengine, I changed the Sage wrappers so that if Sage does not have the symengine_conversions methods (i.e. Sage is not updated to the symengine branch), conversions are done via Python strings. For example, an integer is converted to a Python string and then to a Sage integer. This is slow, but makes it easier to install symengine. You can try it out by downloading cmake-3.2.3.spkg and symengine-0.1.spkg and installing them. (Link to download is .....) To install, type:

    sage -i /path/to/cmake-3.2.3.spkg

    sage -i /path/to/symengine-0.1.spkg

    The Python wrappers included only a small number of functions from SymEngine. Wrappers were added for functions like log, the trigonometric functions, the hyperbolic functions and their inverses.

    The CMake package for Sage is now ready for review.

    The SymEngine package for Sage can be found here. A PR will be sent as soon as the CMake ticket is positively reviewed.

    Next week, testing with Sage, Python docstrings, and the SymEngine package for Sage are the main things I have planned for now. A PyNumber class to handle Python numbers will be started as well.

    by Isuru Fernando ( at July 13, 2015 03:12 PM


    GSoC Progress - Week 7

    Hello. Sorry for the really late post. As I was moving from home to Mumbai and was also part of the grading team of the International Physics Olympiad (IPhO), I could not contribute as much as I had thought I could. Here is what I have for this week.


    The Expression class was built upon the initial work of Francesco. I made a SymEngine patch with his as an initial commit. We now have a top-level value class.

    The slowdowns finally got tackled. It was Piranha that needed amendment. The slowdown, as discussed previously, was due to the thread_pool class. This was resolved by templatizing thread_pool, i.e. replacing class thread_pool: private detail::thread_pool_base<> with template <typename = void> class thread_pool_: private detail::thread_pool_base<>. This fixed the slowdown for inclusion of individual headers. Including the single piranha.hpp still had the problem: piranha.hpp includes settings.hpp, which in turn defines a non-template function called set_n_threads() which internally invokes the thread pool. This was resolved by a similar fix, templatizing the settings class as template <typename = void> class settings_.

    Many things have been reported until now, hence Ondřej suggested documenting all the decisions taken. The wiki page En route to Polynomial was hence made.


    * #511 - Polynomial Wrapper

    * #512 - Add Francesco to AUTHORS
    * #500 - Expression wrapper.

    En route to Polynomial

    Targets for Week 8

    Get the Polynomial wrapper merged.

    Points to be noted:
    * Use standard library alternatives to Piranha constructs so that we have something even when Piranha is not available as a dependency.
    * Get the basic class in, so that Shivam can start some work in SymEngine.

    I am thankful to Ondřej and the SymEngine team for bearing with my delays. I hope I can compensate in the coming week.

    That's all this week.

    July 13, 2015 12:00 AM

    July 12, 2015

    Mark Wronkiewicz

    Opening up a can of moths

    C-day + 48

    After remedying the coil situation (and numerous other bugs) my filtering method finally seems to maybe possibly work. When comparing my method to the proprietary one, the RMS of the error is on average 1000 times less than the magnetometer and gradiometer RMS.

    It turns out that many of the problems imitating the proprietary MaxFilter method stemmed from how the geometry of the MEG sensors was defined in my model. Bear with me here, as you have to understand some background about the physical layout of the sensors to comprehend the problem. When measuring brain activity, each sensor takes three measurements: two concerning the gradient of the magnetic field (the gradiometers) and one sensing the absolute magnetic field (a magnetometer). The MEG scanner itself is made up of ~100 of these triplets. The gradiometers and magnetometers are manufactured with different geometries, but they are all similar in that they contain one (or a set) of wire coils (i.e., loops). The signal recorded by these sensors is a result of the magnetic field that threads these coil loops and induces a current in the wire itself, which can then be measured. When modeling this on a computer system, however, that measurement has to be discretized, as we can’t exactly calculate how a magnetic field will influence any given sensor coil. Therefore, we break up the area enclosed by the coil into a number of “integration points.” Now, instead of integrating across the entire rectangular area enclosed by a coil, we calculate the magnetic field at 9 points within the plane. This allows a computer to estimate the signal any given coil would pick up. For an analogy, imagine you had to measure the air flowing through a window. One practical way might be to buy 5 or 10 flowmetry devices, hang them so they’re evenly distributed over the open area, and model how air was flowing through using those discrete point sensors. Only here, the airflow is a magnetic field and the flow sensors are these extremely expensive and sensitive SQUIDs bathed in liquid helium – other than that, very similar.

    The hang-up I’ve been dealing with is largely because there are different ways to define those discrete points for the numerical integration. You can have more or fewer points (trading off accuracy vs. computational cost), and there are certain optimizations for how to place those points. As for placement, all points could be evenly spaced with equal weighting, but there are big fat engineering books that recommend more optimal (and uneven) weightings of the points depending on the shape in use. It turns out the proprietary SSS software used one of these optimized arrangements, while MNE-Python uses an evenly distributed and weighted arrangement. Fixing the coil definitions has made my custom implementation much closer to the black box I’m trying to replicate.
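    The difference between the two arrangements is easy to see on a toy "coil": integrate a smooth field over a square with nine evenly weighted points versus nine Gauss-Legendre points (a generic numerical sketch, not the actual MNE-Python coil definitions):

```python
import numpy as np
from math import erf, pi, sqrt

def field(x, y):
    # smooth stand-in for the field component threading the coil
    return np.exp(-(x**2 + y**2))

exact = (sqrt(pi) * erf(1.0)) ** 2  # true integral over [-1, 1]^2

# optimized placement: 3x3 Gauss-Legendre points with uneven weights
pts, w = np.polynomial.legendre.leggauss(3)
gauss = sum(w[i] * w[j] * field(pts[i], pts[j])
            for i in range(3) for j in range(3))

# evenly spaced, equally weighted points (midpoint-style rule)
edges = np.linspace(-1.0, 1.0, 4)
mids = 0.5 * (edges[:-1] + edges[1:])
even = sum((2 / 3) * (2 / 3) * field(a, b) for a in mids for b in mids)
```

    With the same nine function evaluations, the Gauss-Legendre arrangement lands noticeably closer to the true value.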

    In the process I’ve also been forced to learn the dedication it takes to produce high-quality code. Before last week, I felt pretty high and mighty because I was religiously following PEP8 standards and making sure my code had something more than zero documentation. With some light nudging from my mentors, I feel like I’ve made the next solid leap forward; unit tests, markup, extensive references and comments have all been a theme since my last blog post. In the process, it can be frustrating to get that all right, but I’m sure the minor annoyance is a small price to pay to make this esoteric algorithm easier for the poor soul who inherits the SSS workload :)

    by Mark Wronkiewicz ( at July 12, 2015 11:37 AM

    July 11, 2015

    Shivam Vats

    GSoC Week 7

    So Far

    PR 9575 and PR 9495 got merged this week. All the basic functions are now in place in polys.ring_series. The module supports Laurent as well as Puiseux series expansion. The order of the day is to extend this support for nested functions, and encapsulate the whole thing with classes. The idea is that the user need not bother about calling ring_series functions and that the class should hold all the relevant information about the series.

    Given that SymPy already has series infrastructure, we need to decide whether ring_series will be integrated with it or will remain distinct.

    Next Week

    • Discuss and decide how ring_series is to be used and write its classes accordingly. I will build upon PR 9614

    • Write a series function that makes use of ring_series functions to expand arbitrary expressions.

    I also need to start porting some of it to SymEngine. The basic polynomial operations are in place there. I need to discuss with Ondrej and Sumith how the series module will best work with the Polynomial module. Once that is sorted, I can start porting the ring_series functions.


    July 11, 2015 12:00 AM

    July 09, 2015

    Brett Morris

    astroplan Tutorial 1

    I'm long overdue for a post about my Google Summer of Code project with astropy, called astroplan. For background on GSoC and astroplan, see this earlier blog post.

    Why so silent?

    I haven't posted in a while because, well, we've been working on the code! You can see the progress in our GitHub repository, where I've made a few big contributions over the past few weeks in pull requests 11 and 14. Most of the discussion about the development of the core functionality of astroplan is in those pull requests. 

    Quick Tutorial: observation planning basics

    Say you're going to observe sometime in the near future, and you need to figure out: the time of sunrise and sunset, the altitude of your target at a particular time from your observatory, and when the target next transits the meridian. Let's use Vega as our target and Mauna Kea as the location of our observatory, and use astroplan to find the answers:

    from astropy.coordinates import EarthLocation
    from astropy.time import Time
    from astroplan import Observer, FixedTarget
    import astropy.units as u

    # Initialize Observer object at the location of Keck
    keck = EarthLocation.from_geodetic('204d31m18s', '19d49m42s', 4160)
    obs = Observer(location=keck, timezone='US/Hawaii')

    # Initialize FixedTarget object for Vega using from_name
    vega = FixedTarget.from_name('Vega')

    # Pick the time of our observations in UTC
    time = Time('2015-07-09 03:00:00')

    # Calculate the time Vega rises above 30 degrees:
    next_rise_vega = obs.calc_rise(time, vega, horizon=30*u.deg)
    print('Vega rises: {0} [ISO] = {1} [JD]'.format(next_rise_vega.iso, next_rise_vega.jd))
    The above code returns:
    Vega rises: 2015-07-09 05:24:22.732 [ISO] = 2457212.72526 [JD]
    The time at next rise is an astropy Time object, so it's easy to convert it to other units. Now let's do the rest of the calculations:
    # Calculate time of sunrise, sunset
    previous_sunset = obs.sunset(time, which='previous')
    next_sunrise = obs.sunrise(time, which='next')
    print('Previous sunset: {}'.format(previous_sunset.iso))
    print('Next sunrise: {}'.format(next_sunrise.iso))

    # Is Vega up at the present time?
    vega_visible = obs.can_see(time, vega)
    print('Is Vega up?: {}'.format(vega_visible))

    # When will Vega next transit the meridian?
    next_transit = obs.calc_meridian_transit(time, vega, which='next')
    print("Vega's next transit: {}".format(next_transit.iso))
    prints the following:
    Previous sunset: 2015-07-08 05:02:09.435
    Next sunrise: 2015-07-09 15:53:53.525
    Is Vega up?: True
    Vega's next transit: 2015-07-09 09:51:18.800
    Now let's say you need a half-night of observations. What are the times of astronomical sunrise/sunset and midnight?
    # Sunrise/sunset at astronomical twilight, nearest midnight:
    set_astro = obs.evening_astronomical(time, which='previous')
    rise_astro = obs.morning_astronomical(time, which='next')
    midnight = obs.midnight(time)
    print('Astronomical sunset: {}'.format(set_astro.iso))
    print('Astronomical sunrise: {}'.format(rise_astro.iso))
    print('Midnight: {}'.format(midnight.iso))
    which prints:
    Astronomical sunset: 2015-07-08 06:29:05.259
    Astronomical sunrise: 2015-07-09 14:27:05.156
    Midnight: 2015-07-09 10:27:59.015
    You can also view this code in an IPython Notebook here.

    by Brett Morris ( at July 09, 2015 08:54 PM

    Jazmin's Open Source Adventure

    Quick Update - Sunday, 5 July to Thursday, 9 July

    Quick update!

    This week, I have:

    1) Updated the PR with plot_airmass and plot_parallactic, as well as example notebooks.
    2) Made another branch for plot_sky.

    by Jazmin Berlanga Medina ( at July 09, 2015 04:27 PM

    Rafael Neto Henriques

    [RNH post #8] Perpendicular directions samples relative to a given vector

    As I mentioned in the mid-term summary, one of my next steps is to implement some numerical methods to compute the standard kurtosis measures to evaluate their analytical solution. 

    The numerical method for the perpendicular kurtosis requires samples of directions perpendicular to a given vector v.

    I am posting here the mathematical basis of this function which will be implemented in module dipy.core.geometry and named as perpendicular_directions.

    Function's Algorithm

    • Vector v: Perpendicular directions are sampled relative to this vector.
    • N: Number of perpendicular directions

    Step 1) The N directions are first sampled in the unit circumference parallel to the y-z plane (the plane normal to the x-axis), as shown in the figure below.

    Fig 1. First step of perpendicular_directions algorithm.

    Coordinates of the perpendicular directions are therefore initialized as:

        n_i = [0, cos(a_i), sin(a_i)]    (Eq. 1)

    where a_i are the angles sampled in [0, 2*pi[. To perform N samples, the angle between two adjacent directions is given by 2*pi / N.

    Step 2) Sampled directions are then rotated and aligned to the plane normal to vector v (see figure below).

    Fig 2. Second step of perpendicular_directions algorithm.

    Mathematically, this is done by multiplying each perpendicular direction n_i by a rotational matrix R. The final perpendicular directions d_i are given by:

        d_i = R * n_i    (Eq. 2)
    The rotational matrix in Eq. 2 is constructed as the frame-of-reference basis in which the first basis axis is the vector v, while the other two basis axes are any pair of orthogonal directions relative to vector v. These orthogonal vectors are named here vector e and vector k. For the implementation of the function perpendicular_directions, vectors e and k are estimated using the following procedure:

        1) The direction of e is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis, i.e. [1, 0, 0]. After normalizing, the final coordinates of e are:

        e = [0, v_z, -v_y] / sqrt(v_y**2 + v_z**2)    (Eq. 3)
        2) k is directly defined as the cross product between vectors v and e. The coordinates of this vector are:

        k = [-(v_y**2 + v_z**2), v_x*v_y, v_x*v_z] / sqrt(v_y**2 + v_z**2)    (Eq. 4)

    From equations 2, 3 and 4, the coordinates of the perpendicular directions relative to vector v are given as:

        d_i = [ -sin(a_i)*sqrt(v_y**2 + v_z**2),
                (v_z*cos(a_i) + v_x*v_y*sin(a_i)) / sqrt(v_y**2 + v_z**2),
                (-v_y*cos(a_i) + v_x*v_z*sin(a_i)) / sqrt(v_y**2 + v_z**2) ]    (Eq. 5)

    Note that Eq. 5 has a singularity when vector v is aligned to the x-axis. To resolve this singularity, perpendicular directions are first defined in the x-y plane and vector e is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis, i.e. [0, 1, 0]. Following this, the coordinates of the perpendicular directions are given as:

        d_i = [ (-v_z*cos(a_i) + v_x*v_y*sin(a_i)) / sqrt(v_x**2 + v_z**2),
                -sin(a_i)*sqrt(v_x**2 + v_z**2),
                (v_x*cos(a_i) + v_y*v_z*sin(a_i)) / sqrt(v_x**2 + v_z**2) ]    (Eq. 6)
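The procedure above can be sketched with NumPy. This is a rough illustration of the algorithm described in this post (a hypothetical helper, not the final dipy.core.geometry implementation), including the switch of reference axis to avoid the x-axis singularity:

```python
import numpy as np

def perpendicular_directions(v, num=30):
    """Sample `num` evenly spaced unit vectors on the circle normal to v.

    Sketch of the algorithm above: build an orthonormal pair (e, k)
    perpendicular to v, then combine them with the sampled angles a_i.
    """
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    # angles sampled in [0, 2*pi[ ; adjacent directions differ by 2*pi/num
    a = np.linspace(0, 2 * np.pi, num, endpoint=False)
    # reference axis: x-axis, unless v is (nearly) aligned with it
    ref = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e = np.cross(v, ref)
    e = e / np.linalg.norm(e)
    k = np.cross(v, e)
    # d_i = cos(a_i) * e + sin(a_i) * k, all perpendicular to v
    return np.cos(a)[:, None] * e + np.sin(a)[:, None] * k

d = perpendicular_directions([1.0, 2.0, 3.0], num=8)
# every row of d is a unit vector orthogonal to v
```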

    by Rafael Henriques ( at July 09, 2015 04:09 PM

    Zubin Mithra

    Aarch64 SROP support completed

    I just added Aarch64 support for pwntools. There is no sys_sigreturn in Aarch64; instead there is a sys_rt_sigreturn implementation. In a lot of ways writing the SROP frame was similar to my ARM experience; there is a magic flag value (FPSIMD_MAGIC) that needs to be present.
    Quick note: when setting up QEMU images for an architecture, do not test network connectivity using ping. ICMP might be disabled.
    There is something else that is curious about Aarch64 - regardless of the gcc optimization level, the local variables are allocated first and only later are the return address and the frame pointer pushed onto the stack. I found this quite interesting, and I don't think I've seen it on any other architecture.

    At the prologue
       0x000000000040063c <+0>: sub sp, sp, #0x200
       0x0000000000400640 <+4>: stp x29, x30, [sp,#-16]!

    and at the epilogue,
       0x000000000040067c <+64>: ldp x29, x30, [sp],#16
       0x0000000000400680 <+68>: add sp, sp, #0x200
       0x0000000000400684 <+72>: ret

    For a PoC we can get away with something like this. So we end up overwriting the return address of stub (and not of read_input). The make_space call makes sure that the "access_ok" function inside the kernel (it checks if there is a frame that can be accessed from the stack) does not fail. The frame is about 4704 bytes in size; so when access_ok runs, we need sp to sp+4704 to be mapped in as valid addresses.

    The registers for the SROP frame are named `regs[31]` in the kernel source; so I used forktest.c, set a breakpoint at the handler, examined the stack state before the "svc 0x0" and the register state after it, and found the offsets.

    You can view the PR for the same here.

    by Zubin Mithra<br />(pwntools) ( at July 09, 2015 01:50 PM

    July 08, 2015

    Siddhant Shrivastava
    (ERAS Project)

    Remote tests in Telerobotics

    Ciao :) The sixth week of GSoC 2015 is over. According to the Telerobotics project timeline, this week was supposed to be the Buffer Week to account for any unforeseen work that might pop up. We at the Italian Mars Society were trying to get ROS communication working over a large network. After effective discussion via mail and prioritizing on Trello, the first Husky test was scheduled on July 1, the second test on July 7, and the third test on July 8. It was an international effort spanning the UTC-5:00, UTC+2:00, and UTC+5:30 timezones, so zeroing in on a common time was an interesting sub-challenge in itself.

    By a large network, I mean this -

    Remote Testing

    At first glance, the problem statement looks quite tractable and practical. But like all problems in Computer Networks, this one looked easy in theory, yet frustrated the budding Computer Scientist in me as the proposed solutions didn't work out.

    Husky Test 1

    Matt (from the Space Research Systems Group, Carleton University), Franco, and I were trying to get the Husky UGV in Canada to respond to commands sent from the three parts of the world involved (Canada, India, Italy). A few problems we came across -

    1. ROS version issues caused a minor problem. The Husky robot was running an older version of ROS (Hydro) while Franco and I were using the newer version (Indigo). This caused problems in reading certain Husky messages. Solution - Upgrade ROS version on the Husky robot OR downgrade our version to ROS Hydro and Ubuntu 12.04.

    2. Network Issues - Unable to communicate with all three computers in all cases. There was no bidirectional communication between the ROS computers and ports were blocked.

    3. Success - GPS Messages and status messages were received from the Husky robot laptop set as the ROS Master. But the Husky laptop was unable to receive Teleoperation messages from Franco's computer and my computer (even though it detected that we were publishing messages). Again a Network problem.

    Solution - Virtual Private Networks, well almost...

    At first, I had to ensure that the TP-Link WiFi Router at home was not creating problems. To ensure this, I added my laptop interface in the Demilitarized Zone (DMZ), and enabled Port Forwarding for all the ports of interest.

    Success with Blender Game Engine Streaming

    Now, this solved quite a few problems - my public IP could now behave like one. To prove this, Franco and I held a Web-Stream session in which his laptop in Italy behaved as the Blender Game Engine Client while I provided a live video feed from the Minoru Camera while using a FFMpeg Server. His words - "You are live. I can see the stream." provided the much-needed boost I required to tackle the pending Computer Networks problems I had to solve in the following couple of days.

    Coming to the VPN problem, I first read about the various VPN Server solutions available, like -

    • OpenVPN
    • PPTP (Point-to-Point Tunneling Protocol)
    • IPSec
    • SSH Tunneling

    The second Husky test was done with a PPTP VPN setup, which wasn't quite successful. The reason being - ROS requires bidirectional communication between the peers, and I couldn't become a peer while I was the VPN server. It caused a slew of other pesky problems like REQ TIMEOUTS, disconnected ROS nodes, disabled Internet on the VPN server, etc. But as a start, it was assuring that the problem could be solved. I realized that the learning curve for working with computers at the scale of the Internet is no child's play. But there was another takeaway from the second Husky test. Andrea (from the Husky team) could work with my remote node as the ROS master and still get the Husky up and running. This means that all the Husky traffic and node maintenance could be relegated through my PC and transferred to the Husky. Much assuring.

    Armed with the Computer Networks concepts I learnt at my college, I set out to set up the slightly tougher OpenVPN server. This is a snapshot of the OpenVPN access server that I set up -

    OpenVPN users

    I was not only able to set up a world-wide VPN, but also able to set up communication among the peers. But the firewalls on the Husky computer network were too strong for it and sent Andrea's laptop into a continuous Trying to Reconnect loop. There went our hopes with OpenVPN. I am still looking into this issue. The main problem was that the UDP channel of OpenVPN was accessible in the Husky network but not the TCP channels. This caused intermittent connection losses and the OpenVPN client couldn't figure out what to do. There must be a solution to this and I'll find it.

    Throughout this experience, I learnt a lot of new things about practical Computer Networks. Once I'm able to crack the VPN problem, I could put it to use in diverse scenarios (remote robotics testing, as a road warrior, Internet of Things applications, creating a network of friends, etc. ). VPN brings everyone on the same page (or logical subnet). I also did quite a bit of work with the Stereo Video Streaming which would be the theme of my next post. Stay tuned.


    by Siddhant Shrivastava at July 08, 2015 07:53 PM

    Rupak Kumar Das

    Mid-Term Update

    Hi all!

    The midterm evaluations are over now. Regarding the project, half of the work is done. I fixed a few bugs with the Cuts plugin, which was also modified to include the Slit features. It is nearly complete, with only a few more fixes needed. Eric, who maintains Ginga, has partially implemented the Bezier curve, but it needs a function to determine the points lying on the curve before the Cuts plugin can use it, which is my current focus. Also, I need to figure out how to use OpenCV to save arrays as video in the MultiDim plugin instead of using ‘mencoder’ as it does now, but it seems OpenCV has a few problems.

    Here’s a useful post on how to determine which points lie on a Bezier Curve.
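One common way to approximate that check (a rough sketch with made-up helper names, assuming the usual cubic Bernstein form; not the Ginga implementation) is to sample the curve densely and test each candidate point's distance to the samples:

```python
import numpy as np

def bezier_points(p0, p1, p2, p3, n=200):
    """Sample n points along a cubic Bezier curve (Bernstein polynomial form)."""
    t = np.linspace(0, 1, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def on_curve(point, samples, tol=1.0):
    """True if `point` is within `tol` units of any sampled curve point."""
    return bool(np.min(np.linalg.norm(samples - point, axis=1)) <= tol)

ctrl = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 2), (4, 0)]]
pts = bezier_points(*ctrl)
on_curve(np.array([0.0, 0.0]), pts)   # the start control point lies on the curve
```

The tolerance plays the role of a pick radius: for a plugin working in pixel coordinates, a couple of pixels is usually enough.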

    by Rupak at July 08, 2015 04:51 PM

    Rafael Neto Henriques

    [RNH Post #7] Artifacts in Dipy's sample data Sherbrooke's 3 shells

    Hi all,

    Just to report an issue that I am currently trying to figure out!

    As I showed in my previous post, my first diffusion kurtosis reconstructions are looking very good. However, when I try to process Dipy's multi-shell sample dataset Sherbrooke 3 shells, kurtosis measures seem to be widely corrupted by implausibly high negative values; see the figure below:

    Fig.1 - Diffusion kurtosis standard measures obtained from Dipy's Sherbrooke 3 shells sample dataset.

    by Rafael Henriques ( at July 08, 2015 07:35 AM

    Andres Vargas Gonzalez

    Kivy matplotlib backend with right text positioning and rendering.

    In order to fix the problem of the layout of the text inside the canvas, an important function had to be overridden with the information of the bounding box for each text element. The function is get_text_width_height_descent; in it, one calculates the width and height of the text and returns them. Inside matplotlib these values are used to calculate the positioning of the elements according to the layout set by the user. Additionally, in the draw_text method, transformations and styling are applied to the text. At the end we have a working backend with different fonts and styles that are mapped between matplotlib and Kivy capabilities.

    Another inconvenience in the rendering of the graphs was some glitches while drawing. I was calculating u and v to create the mesh for the polygons when this was not really necessary. After some debugging and analysis of the vertices, the final implementation generates the meshes and lines for all the elements in the canvas as follows:

    Screenshot from 2015-07-07 12:18:28 example mpl kivy

    As can be seen, it is now getting closer to how the static image implementation from the previous post looks.

    by andnovar at July 08, 2015 06:56 AM

    Siddharth Bhat

    Gsoc Update - Weeks 3 & 4

    I’m sitting at home with the flu and a runny nose, so this is as good a time as any to write another blog post!

    I managed to get a decent amount of work done over the past two weeks. I think I can start working on plotting from next week, since most of the Scenegraph update I talked about last time is done. What’s left is a polish and performance improvements, which will be done iteratively, and (I’m hoping) in parallel with the plotting API.

    Scenegraph Overhaul

    I’ve spent most of my time porting over a lot of the visuals from the old system to the new rewrite. It was an interesting exercise to learn the new architecture and to figure out how everything fits. Much of it was a direct rewrite, but some of it was challenging.

    I’m stuck on a few things right now, chief among them being porting lighting of MeshVisuals. I suspect that there is a bug in the shader code / normals calculation, but I’ve not been able to isolate it properly.


    Vispy also has an experimental webGL backend that’s going to be used with IPython. I’ve been checking it out, and it’s a really interesting project.

    Vispy uses a custom domain-specific “language” (it’s not a language in the Turing-completeness sense of the word; it’s more of a spec / internal representation (IR), kind of like the LLVM IR) that is designed to represent OpenGL operations, known as GLIR. This provides a really neat way to specify OpenGL commands in a nicely serialize-able format.

    Vispy (both the python version and the JavaScript version) has an object-oriented abstraction of OpenGL called “gloo”. Now, gloo has been implemented in vispy.js. I plan on implementing GLIR on top of gloo in the coming weeks. That should be a fun exercise (both to learn gloo properly and to implement it in JavaScript)

    Odds and Ends

    The usual “fix what annoys you” is around this time too.

    I added an option to the test suite that lets one test docstrings. This feature existed before, but required one to manually run the Python file. It’s just a tad easier with this change.

    There was also a “bug” in the installer( that silently failed to install in development mode (python develop) if setuptools wasn’t present. I just added a warning so that this would be reported.

    That’s it for this time!

    July 08, 2015 12:00 AM

    July 07, 2015

    Sartaj Singh

    GSoC: Update Week-6

    Midterm evaluations are complete. I've got to say, Google was fairly quick in mailing the results. Just a few minutes after the deadline, I received a mail telling me I had passed. Yay!

    Here's my report for week 6.


    1. Formal Power Series:

    For most of the week I worked towards improving the implementation of the second part of the algorithm. I was able to increase the range of admissible functions. For this I had to write a custom solver for RE's of hypergeometric type. It's a lot faster and better at solving the specific type of RE's this algorithm generates, in comparison to just using rsolve for all the cases. However, it still has some issues. It's currently in the testing phase and will probably be PR-ready by the end of this week.

    The code can be found here.

    While working on it, I also added some more features to FormalPowerSeries(#9572).

    Some working examples. (All the examples were run in isympy)

    In [1]: fps(sin(x), x)
    Out[1]: x - x**3/6 + x**5/120 + O(x**6)
    In [2]: fps(cos(x), x)
    Out[2]: 1 - x**2/2 + x**4/24 + O(x**6)
    In [3]: fps(exp(acosh(x)), x)
    Out[3]: I + x - I*x**2/2 - I*x**4/8 + O(x**6)

    2. rsolve:

    During testing, I found that rsolve raises exceptions while trying to solve RE's like (k + 1)*g(k) and (k + 1)*g(k) + (k + 3)*g(k+1) + (k + 5)*g(k+2), rather than simply returning None, which it generally does in case it is unable to solve a particular RE. The first and the second RE are formed by the functions 1/x and (x**2 + x + 1)/x**3 respectively, which can often come up in practice. So, to solve this I opened #9615. It is still under review.

    3. Fourier Series:

    #9523 introduced SeriesBase class and FourierSeries. Both FormalPowerSeries and FourierSeries are based on SeriesBase. Thanks @flacjacket and @jcrist for reviewing and merging this.

    In [1]: f = Piecewise((0, x <= 0), (1, True))
    In [2]: fourier_series(f, (x, -pi, pi))
    Out[2]: 2*sin(x)/pi + 2*sin(3*x)/(3*pi) + 1/2 + ...

    4. Sequences:

    While playing around with sequences, I realized periodic sequences can be made more powerful. They can now be used for periodic formulas (#9613).

    In [1]: sequence((k, k**2, k**3))
    Out[1]: [0, 1, 8, 3, ...]

    5. Others:

    Well, I got tired of FormalPowerSeries (I am just a human), so I took a little detour from my regular project work and opened #9622 and #9626. The first one deals with inconsistent diff of Polys, while the second adds more assumption handlers like is_positive to Min/Max.

    Tasks Week-7:

    • Test and polish hyper2 branch. Complete the algorithm.
    • Add sphinx docs for FourierSeries.
    • Start thinking on the operations that can be performed on FormalPowerSeries.

    That's it. See you all next week. Happy Coding!

    July 07, 2015 05:56 PM

    Sahil Shekhawat

    GSoC Week 7

    I have been working according to my new timeline. I have finished writing unit tests for bodies and all the specific joints (PinJoint, SlidingJoint, CylindricalJoint, SphericalJoint and PlanarJoint). Writing unit tests gave me confidence and helped us discuss the design. It would have taken a lot more time had we implemented the joints first. I have not pushed PlanarJoint and SphericalJoint yet; I have to change some things so that they can work without the Body class. Right now Body is passed to the Joints, but Joints will be implemented in SymPy and we cannot have a Body class there. Thus, I have to change the RigidBody and Particle classes in SymPy to include Body's functionality. I will discuss this further with my mentors.

    July 07, 2015 03:29 PM

    Zubin Mithra

    Setting up Aarch64 and QEMU

    This is a quick post on how I set up Aarch64 with a NAT connection.
    For the most part, the process is similar to what is described here and here. Here is the command line I ended up using to start the VM.

    HOST=ubuntu; mac=52:54:00:00:00:00; sshport=22000
    sudo qemu-system-aarch64 -machine virt -cpu cortex-a57 -nographic -smp 1 -m 512 \
    -global virtio-blk-device.scsi=off -device virtio-scsi-device,id=scsi \
    -drive file=ubuntu-core-14.04.1-core-arm64.img,id=coreimg,cache=unsafe,if=none -device scsi-hd,drive=coreimg \
    -kernel vmlinuz-3.13.0-55-generic \
    -initrd initrd.img-3.13.0-55-generic \
    -netdev user,hostfwd=tcp::${sshport}-:22,hostname=$HOST,id=net0 \
    -device virtio-net-device,mac=$mac,netdev=net0 \
    --append "console=ttyAMA0 root=/dev/sda"

    The tricky part here is that ping was disabled on the image I used (this might also have been the case with a couple of other images I tried), even though it had a functional NAT connection. Try apt-get update or something similar to test your connection.

    by Zubin Mithra<br />(pwntools) ( at July 07, 2015 09:24 AM

    Michael Mueller

    Week 6

    I began this week by fixing up a couple of loose ends causing test failures, and then opened a new pull request for the work I've done so far. Travis CI and Appveyor run their builds of the PR successfully, but for some reason Coveralls reports that overall coverage has decreased. I'll have to look into that...

    From there I've been working on the SortedArray engine for indexing; that is, I've turned it into an actual array (a numpy structured array) rather than the old fill-in pure-Python list. One nice thing about the new class is that it can make good use of the `numpy.searchsorted` function, which searches for the right index to insert a given parameter into a sorted list. Unfortunately `searchsorted` doesn't have a lexicographical feature for lists that are sorted by more than one column (as in the case of composite indices), although I was able to manage a workaround by using `searchsorted` several times.
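The multi-column workaround isn't spelled out here, but the idea can be sketched like this (illustrative code with a hypothetical helper name, not necessarily the PR's implementation): each `searchsorted` call narrows the candidate window found by the previous column.

```python
import numpy as np

def searchsorted_multi(columns, key):
    """Find the leftmost insertion index of `key` in rows sorted
    lexicographically by `columns` (a sequence of same-length arrays).

    Repeated searchsorted calls narrow [lo, hi) column by column,
    emulating a lexicographic searchsorted for composite indices.
    """
    lo, hi = 0, len(columns[0])
    for col, val in zip(columns, key):
        seg = col[lo:hi]
        lo, hi = (lo + np.searchsorted(seg, val, side='left'),
                  lo + np.searchsorted(seg, val, side='right'))
    return lo

a = np.array([1, 1, 2, 2, 2])
b = np.array([3, 5, 1, 4, 4])   # rows sorted by (a, b)
searchsorted_multi([a, b], (2, 4))  # index of the first (2, 4) row -> 3
```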

    I also worked on adding to the testing suite, particularly for the new SortedArray engine. I added a test fixture in `` to run each test with every possible engine, which gave me some grounds for testing SortedArray and will make it worthwhile to add more indexing tests, so as to test the Index class and each engine simultaneously. In terms of memory, SortedArray seems to do fairly well; I profiled a few engines on a 100,000 line table and found memory usages of 18.6 MB for SortedArray, 20.0 MB for FastRBT, and 48.2 MB for BST (the pure-Python binary search tree). The time profiles for these engines are more complicated, and much more problematic; querying an index takes negligible time compared to creating one, and the time it takes to create an index is totally unreasonable at this stage. (I found 7.53 seconds for BST, 2.04 seconds for FastRBT, and 2.21 seconds for SortedArray.) It's also a bit difficult to find what the main issue is, although iterating through Column takes up a significant time chunk and `numpy.argsort`, oddly enough, takes up a full half-second for SortedArray -- maybe there's something more subtle than I expect going on in there. I'm interested to hear whether Michael and Tom think we should copy Column data at a lower level (i.e. Cython/C), or how otherwise to get around this unexpected time bottleneck. Hopefully the current PR will get some feedback soon, as well.

    by Michael Mueller ( at July 07, 2015 05:22 AM

    July 06, 2015

    Mridul Seth


    Hello folks, this blog post will cover the work done in week 5 and week 6.

    As decided in #1592, is now a hybrid between the old and G.degree_iter(). This is implemented in #1617 and merged into the iter_refactor branch. We also decided (#1591) to stick with the old interface of G.adjacency_iter() for the new method G.adjacency() and to remove G.adjacency_list() from the Di/Multi/Graphs classes. The methods G.nodes_with_selfloops() and G.selfloop_edges() now return iterators instead of lists (#1634).

    And with these changes merged into the iter_refactor branch, the work for the core Di/Multi/Graphs is done. We have planned to do an extensive review before merging it with the master branch, and this will also need a review of the documentation.

    Just a recap:

    G.func() now works as G.func_iter() did, with the original G.func() gone. Only iterators, no lists. Here func is one of (nodes, edges, neighbors, successors, predecessors, in_edges, out_edges). And G.degree() now returns the degree of the node if a single node is passed, and works as G.degree_iter() did if a bunch of nodes or nothing is passed. Same behaviour for in_degree() and out_degree().
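The hybrid degree behaviour can be illustrated with a toy pure-Python sketch (a hypothetical minimal class mirroring the semantics described above, not NetworkX code):

```python
class TinyGraph:
    """Minimal adjacency-list graph illustrating the hybrid degree() API."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def degree(self, nbunch=None):
        # single node -> plain integer
        if nbunch is not None and not isinstance(nbunch, (list, tuple, set)):
            return len(self.adj[nbunch])
        # bunch of nodes (or nothing) -> iterator of (node, degree) pairs
        nodes = self.adj if nbunch is None else nbunch
        return ((n, len(self.adj[n])) for n in nodes)

g = TinyGraph()
g.add_edge('a', 'b')
g.add_edge('a', 'c')
g.degree('a')          # -> 2
dict(g.degree())       # -> {'a': 2, 'b': 1, 'c': 1}
```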

    The summer is going really fast. Midterm evaluations are already done. I passed :)


    PS: I wrote something regarding these changes at

    by sethmridul at July 06, 2015 04:35 PM

    AMiT Kumar

    GSoC : This week in SymPy #6

    Hi there! It's been six weeks into GSoC, and it marks the half of GSoC. The mid-term evaluations are done now; Google has been pretty quick doing this. I received the passing mail within 15 minutes after the deadline to fill up evaluations, so basically the GSoC admin did the following (I guess):

    SELECT * FROM GSoCStudents
    WHERE EvaluationResult='PASS';
    and SendThemMail

    (Don't Judge my SQL, I am not good at it!)

      Progress of Week 6

    Last week my Linsolve PR #9438 finally got merged. Thanks to @hargup @moorepants @flacjacket @debugger22 for reviewing it and suggesting constructive changes.

    This week I worked on intersections of FiniteSets with symbolic elements, which was a blocking issue for a lot of things. I managed to fix the failing test which I mentioned in my last post. Eventually this PR got merged as well, which has opened doors for a lot of improvements.

    Thanks to @jksuom & @hargup for iterating over this PR and making some very useful comments to make it mergeable.

    I had a couple of hangout meetings with @hargup this week (though he has now left for SciPy for a couple of weeks). We discussed the further plan for making solveset more robust, such as returning the domain of the invert while calling invert_real; see #9617.

    Motivation for this:

    In [8]: x = Symbol('x', real=True)
    In [9]: n = Symbol('n', real=True)
    In [12]: solveset(Abs(x) - n, x)
    Out[12]: {-n, n}

    The solution returned above is not actually complete unless we somehow state that n should be positive for the output set to be non-empty. See #9588.

    from future import plan Week #7:

    This week I plan to work on making invert_real more robust.

    Relevant Issue:

    $ git log

      PR #9618 : Add test for solveset(x**2 + a, x) issue 9557

      PR #9587 : Add Linsolve Docs

      PR #9500 : Documenting solveset

      PR #9612 : solveset return solution for solveset(True, ..)

      PR #9540 : Intersection's of FiniteSet with symbolic elements

      PR #9438 : Linsolve

      PR #9463 : ComplexPlane

      PR #9527 : Printing of ProductSets

      PR #9524 : Fix solveset returned solution making denom zero

    That's all for now, looking forward to week #7. :grinning:

    July 06, 2015 12:00 AM

    July 05, 2015

    Jazmin's Open Source Adventure

    Quick Update - Thursday, 2 July 2015

    Quick update!

    Today, I:
    1) Pushed a PR with functions and example notebooks for airmass and parallactic angle plots.
    2) Worked on plot_sky issues.

    by Jazmin Berlanga Medina ( at July 05, 2015 08:59 PM

    Jaakko Leppäkanga


    I somehow managed to catch a cold in the middle of the summer, so this last week I've been working at half strength, but got at least something done. The browser family got a new member as the ICA plotter was added. It's basically using the same functions as mne browse raw, but with small modifications. This meant heavy refactoring of the code I wrote earlier in June. I also made some smaller fixes to existing code. The next step is to add a function for plotting independent components as epochs.

    by Jaakko ( at July 05, 2015 06:40 PM

    Christof Angermueller

    GSoC: Week four and five

    Theano graphs have become editable! By clicking on nodes, it is now possible to change their labels. This makes it possible to shorten default labels or to extend them with additional information. Moving the cursor over nodes will now also highlight all incoming and outgoing edges. You can find three examples here.

    I started to work on curved edges that minimize intersections with nodes, but everything is still in development:

    Apart from that, I fixed a couple of bugs and revised the backend to allow visualizing more detailed graph information in the future, such as timing information or nested graphs.

    I welcome any feedback and ideas to further improve the visualization!

    The post GSoC: Week four and five appeared first on Christof Angermueller.

    by cangermueller at July 05, 2015 02:29 PM

    July 04, 2015

    Pratyaksh Sharma

    Wait, how do I order a Markov Chain? (Part 1)

    Clearly, not any Markov Chain would do. At the expense of sounding banal, let me describe (again) what will fit our bill. We wish to sample from a Bayesian Network (given some evidence) or from a Markov Network, both of which are in general hard to sample from.

    Till now, we've figured out that using a Markov Chain can solve our worries. But, the question still remains, how do we come up with the right Markov Chain? We want precisely one property: that the Markov Chain have the same stationary distribution $\pi$ as the distribution $P(\textbf{X}|\textbf{E}=\textbf{e})$ we wish to sample from.

    Factored State Space

    First, we define the states of our Markov Chain. Naturally, as we want our samples to be instantiations to the variables of our model (Bayesian Network or Markov Network), we let our states be these instantiations. 

    Each state of the Markov Chain shall now represent a complete assignment (a particle). At first, it would seem that we would have to maintain an exponentially large number of objects, but as it turns out, that isn't really required. We'll just maintain the current state and will modify it as we perform a run.

    Multiple Kernels

    In theory, we can define a single transition function $\mathcal{T}$ that takes the current state and gives the probability distribution over the next states. But in practice, it is more convenient to work with multiple transition models, one per variable.

    We shall have the transition model $\mathcal{T}_i$ for the $i$th variable of the model. On simulating a run of the Markov Chain, we:
    1. Start with a starting state $S_0$ which is a complete assignment to all variables in the model.
    2. In a certain order of variables, transition to the next state (assignment) of that variable. 
    3. Do this for all variables in a pre-defined order.
    This completes a single step of our random walk and generates a single sample. Repeat the above steps with the sampled state as the new starting state and we have our sampling algorithm.
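The loop above can be sketched in a few lines of toy Python. Here `conditional` is a made-up stand-in for the per-variable transition models $\mathcal{T}_i$ (how to derive them is not covered here), so this is only an illustration of the sampling scheme, not the actual library code:

```python
import random

def gibbs_sample(variables, conditional, state, n_samples):
    """Sample by resampling one variable at a time in a fixed order.

    `conditional(var, state)` plays the role of T_i: it returns a dict
    mapping each value of `var` to P(var = value | rest of state).
    """
    samples = []
    for _ in range(n_samples):
        for var in variables:                 # pre-defined variable order
            dist = conditional(var, state)
            values = list(dist)
            weights = [dist[v] for v in values]
            state[var] = random.choices(values, weights=weights)[0]
        samples.append(dict(state))           # one full sweep -> one sample
    return samples

# Toy model: two binary variables that prefer to agree.
def agree(var, state):
    other = state['b' if var == 'a' else 'a']
    return {other: 0.9, 1 - other: 0.1}

random.seed(0)
chain = gibbs_sample(['a', 'b'], agree, {'a': 0, 'b': 1}, n_samples=100)
```

Note that each sweep starts from the previously sampled state, exactly as described in the steps above.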

    I haven't yet described how we are supposed to get the ideal $\mathcal{T}_i$s; I'll probably save it for the next post. Till then, check out the implementation of the above here.

    by Pratyaksh Sharma ( at July 04, 2015 04:48 PM

    Chienli Ma

    Evaluation Passed and the Next Step: OpFromGraph

    Evaluation passed and the next step: OpFromGraph

    The PR for function.copy() is ready to be merged; it only needs Fred to fix a small bug. And this Friday I passed the mid-term evaluation. So it's time to take the next step.

    In the original proposal, the next step was to swap outputs and updates. After a discussion with Fred, we thought this feature was not useful, so we skipped it and headed directly to the next feature – OpFromGraph.


    Make the class OpFromGraph work.

    Big How?

    OpFromGraph should produce a gof.Op that is no different from other Ops and can be optimized. Otherwise it makes no sense.

    For this, we need to make it work on the GPU, make sure it works with C code, and document it. We must also make sure infer_shape() and grad() work with it. Ideally, make R_op() work too.

    Detailed how.

    • Implement the __hash__() and __eq__() methods so it is a basic
    • Implement the infer_shape() method so that it's optimizable
    • Test if it works with shared variables as input and, if not, make it work. Add a test for that.
    • Move it correctly to the GPU. We can do it quickly for the old back-end: move all float32 inputs to the GPU. Otherwise, we need to compile the inner function, see which inputs get moved to the GPU, then create a new OpFromGraph with the corresponding inputs on the GPU. #2982
    • Make grad() work. This should remove the grad_depth parameter.

    First Step: infer_shape:

    The main idea is to calculate the shapes of the outputs from the given input shapes. This is a process similar to executing a function – we cannot know the shape of a variable before knowing the shapes of the variables it depends on. So, we can mimic the make_thunk() method and propagate shapes from the inputs to the outputs. I have come up with a draft, and need some help with test cases.

    # (draft) mimic make_thunk(): propagate shapes through the inner graph
    from itertools import izip  # Python 2, as used by Theano at the time

    fgraph = self.fn.maker.fgraph
    order = fgraph.toposort()
    # A dict that maps each variable to its shape (a list)
    shape_map = {}
    # set the input shapes of the fgraph
    for in_var, shape in izip(in_vars, shapes):
        shape_map.setdefault(in_var, list(shape))
    # calculate output shapes from input shapes, in topological order
    for node in order:
        assert all(var in shape_map for var in node.inputs)
        # calculate the output shapes of this node
        in_shapes = [shape_map[var] for var in node.inputs]
        out_shapes = node.op.infer_shape(node, in_shapes)
        # store the shape of each output variable
        for out_var, shape in izip(node.outputs, out_shapes):
            shape_map.setdefault(out_var, list(shape))
    # extract the output shapes
    return [tuple(shape_map[var]) for var in fgraph.outputs]

    July 04, 2015 03:51 PM

    Zubin Mithra

    Tests for AMD64 and aarch64

    This week I've been working on adding an integration test for AMD64. You can see the merged PR here. Writing an integration test involves writing mako templates for read and sigreturn.
    I've also been working on setting up an AARCH64 qemu machine with proper networking settings.

    Next week, I'll be working on getting AARCH64 merged in along with its doctest, and the rest of the integration tests.

    by Zubin Mithra (pwntools) at July 04, 2015 03:40 PM

    Isuru Fernando

    GSoC Week 6

    This week, I worked on improving the testing and making Sage wrappers. First, building with Clang had several issues and they were not tested. One issue was a Clang bug triggered when the `-ffast-math` optimization is used. This flag makes floating point arithmetic perform better, but it may do arithmetic not allowed by the IEEE floating point standard. Since it performs faster, we have enabled it in Release mode, and due to the bug in Clang a compiler error is given saying error: unknown type name '__extern_always_inline'. This was fixed by first checking in CMake whether the error occurs and, if so, adding the flag -D__extern_always_inline=inline. Another issue was that the type_traits header was not found. This was fixed by upgrading the standard C++ library, libstdc++.

    This week, I finished the wrappers for Sage. Now converters to and from Sage can be found at sage.symbolic.symengine. For this module to convert using the C++ level members, symengine_wrapper.pyx's definitions of the classes were taken out and declared in symengine_wrapper.pxd and implemented in the pyx file. To install symengine in Sage, a remaining issue has to be resolved. A CMake check will be added to find whether this issue exists and, if so, the flag -Wa,-q will be added to the list of flags. We have to make a release of symengine if we are to make spkgs to install symengine in Sage, so some of my time next week will go into getting symengine ready for a release and then making spkgs for everyone to try out.

    by Isuru Fernando at July 04, 2015 03:03 AM

    Shivam Vats

    GSoC Week 6

    I successfully passed my mid-term evaluation this week and now the second half of my project has begun! It has been a challenging journey so far that has made me explore new algorithms (some very ingenious) and read a lot of code (much more difficult than I had imagined). This week, Mario, whose code I am working on, helped in a big way by showing how and where to improve the algorithms. It is clear now that all the functions need to guarantee the order of the series they output. We were planning to keep it optional, but since the ring_series functions call each other, an error in the order would propagate and eventually make the result unpredictable.
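    To make the order guarantee concrete, here is a minimal, self-contained sketch (plain Python dicts mapping exponent to coefficient, not the actual ring_series representation) of multiplying truncated series with an explicit precision:

```python
def mul_trunc(a, b, prec):
    """Multiply two truncated series, represented as dicts mapping
    exponent -> coefficient, dropping every term of order >= prec."""
    c = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            if e1 + e2 < prec:
                c[e1 + e2] = c.get(e1 + e2, 0) + c1 * c2
    return c

# (1 + x) * (1 + x) truncated at prec=2: the x**2 term is dropped, so
# the result is only guaranteed up to O(x**2).
print(mul_trunc({0: 1, 1: 1}, {0: 1, 1: 1}, 2))  # {0: 1, 1: 2}
```

    If a function silently returned terms beyond its guaranteed order, any caller composing its result would propagate the error, which is why the order needs to be guaranteed rather than optional.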

    So Far

    PR 9575 is ready for merge except for a bug in sympify that converts PythonRational into float.

    I had a discussion with Mario on PR 9495. He has suggested a lot of improvements for dealing with fractional exponents, especially the fact that Newton's method may not be ideal in these cases. It is very interesting to try and compare different algorithms and come up with ways to optimise them for our needs. The scope for improvement is immense, and we'll need to decide the order in which we'll push the optimisations.

    I started writing the RingSeries class for series evaluation in PR 9614. I was supposed to update my blog yesterday, but I dozed off while working on it. According to my current approach, I am writing classes for all the functions so that they can be represented symbolically. Another issue that needs to be tackled is the expansion of nested functions, things like sin(cos(sin(x))). This will need some work as there are many approaches to it. Currently, I evaluate the inner functions (if they exist) recursively with prec + 1. This will work in simple cases, but not if there are cancellations, e.g. sin(cos(x)) - sin(cos(x**2)).

    Next Week

    • Get PR 9575 merged.

    • Improve PR 9495 and get it merged.

    • Finalise the series class hierarchy and the series evaluation function.

    The next phase of my project is in Symengine and there have been a lot of improvements and changes there. I will need to play with the new stuff and perhaps also think of ways to port ring_series there.


    July 04, 2015 12:00 AM

    July 03, 2015

    Lucas van Dijk

    GSoC 2015: Midterm summary

    Hi all!

    It's midterm time! And therefore it is time for a summary. What did I learn these past few weeks, and what were the main road blocks?

    What I learned

    This is my first project where I use OpenGL, and a lot has become clearer about how this system works: the pipeline, the individual shaders and GLSL, and how they're used for drawing 2D and 3D shapes. Of course, I've only scratched the surface so far, but this is a very good basis for more advanced techniques.

    I've learned about some mathematical techniques for drawing 2D sprites

    A bit more Git experience in a situation where I'm not the only developer of the repository.

    This has been a great experience, and the core developers of Vispy are very active and responsive.


    I was a bit fooled by my almost lecture-free college schedule in May/June, but the final personal assignments were a bit tougher and bigger than expected. So combining GSoC with all these study assignments was sometimes quite a challenge. But the college year is almost over, and after next week I can focus 100% on GSoC.

    In terms of code: I don't think I've encountered real big roadblocks, it took maybe a bit more time before every piece of a lot of shader code fell together, but I think I'm starting to get a good understanding of both the Vispy architecture and OpenGL.

    Past week

    The past week I've been trying to flesh out the requirements for the network API a bit, and I've also been investigating the required changes for the arrow head visual, because there's a scenegraph and visual system overhaul coming.

    Until next time!

    July 03, 2015 12:01 PM

    Abraham de Jesus Escalante Avalos

    Mid-term summary

    Hello all,

    We're reaching the halfway mark for the GSoC and it's been a great journey so far.

    I have had some off-court issues. I was hesitant to write about them because I don't want my blog to turn into me ranting and complaining, but I have decided to briefly mention them on this occasion because they are relevant, and at this point they are all but overcome.

    Long story short, I was denied the scholarship that I needed to be able to go to Sheffield so I had to start looking for financing options from scratch. Almost at the same time I was offered a place at the University of Toronto (which was originally my first choice). The reason why this is relevant to the GSoC is because it coincided with the beginning of the program so I was forced to cope with not just the summer of code but also with searching/applying for funding and paperwork for the U of T which combined to make for a lot of work and a tough first month.

    I will be honest and say that I got a little worried at around week 3 and week 4 because things didn't seem to be going the way I had foreseen in my proposal to the GSoC. In my previous post I wrote about how I had to make a change to my approach and I knew I had to commit to it so it would eventually pay off.

    At this point I am feeling pretty good with the way the project is shaping up. As I mentioned, I had to make some changes, but out of about 40 open issues, now only 23 remain, I have lined up PRs for another 8 and I have started discussion (either with the community or with my mentor) on almost all that remain, including some of the longer ones like NaN handling which will span over the entire scipy.stats module and is likely to become a long term community effort depending on what road Numpy and Pandas take on this matter in the future.

    I am happy to look at the things that are still left and find that I at least have a decent idea of what I must do. This was definitely not the case three or four weeks ago and I'm glad with the decision that I made when choosing a community and a project. My mentor is always willing to help me understand unknown concepts and point me in the right direction so that I can learn for myself and the community is engaging and active which helps me keep things going.

    My girlfriend, Hélène has also played a major role in helping me keep my motivation when it seems like things amount to more than I can handle.

    I realise that this blog (since the first post) has been a lot more about my personal journey than technical details about the project. I do apologise if this is not what you expect but I reckon that this makes it easier to appreciate for a reader who is not familiarised with 'scipy.stats', and if you are familiarised you probably follow the issues or the developer's mailing list (where I post a weekly update) so technical details would be redundant to you.  I also think that the setup of the project, which revolves around solving many issues makes it too difficult to write about specific details without branching into too many tangents for a reader to enjoy.

    If you would like to know more about the technical aspect of the project you can look at the PRs, contact me directly (via a comment here or the SciPy community) or even better, download SciPy and play around with it. If you find something wrong with the statistics module, chances are it's my fault, feel free to let me know. If you like it, you can thank guys like Ralf Gommers (my mentor), Evgeni Burovski and Josef Perktold (to name just a few of the most active members in 'scipy.stats') for their hard work and support to the community.

    I encourage anyone who is interested enough to go here to see my proposal or go here to see currently open tasks to find out more about the project. I will be happy to fill you in on the details if you reach me personally.


    by Abraham Escalante at July 03, 2015 01:07 AM


    GSoC Progress - Week 6

    Hello! I received a mail a few minutes into typing this: I passed the midterm review successfully :)
    It just left me wondering how these guys process so many evaluations so quickly. I do have to confirm with Ondřej about this.
    Anyways, the project goes on and here is my this week's summary.


    SymEngine successfully moved to using Catch as a testing framework.

    The Travis builds for Clang were breaking, which led me to play around with Travis and Clang builds to fix the issue. The Linux Clang build used to break because we would mix up and link libraries like GMP compiled against different standard libraries.
    Thanks to Isuru for lending a helping hand and fixing it in his PR.

    The next task was to make SYMENGINE_ASSERT not use the standard assert(), so I wrote a custom assert which simulates the internal assert.
    Now we could add -DNDEBUG as a release flag when Piranha is a dependency; this was also done.

    Started work on Expression wrapper, PR that starts off from Francesco's work sent in.

    Investigated the slowdown in benchmarks that I have been reporting in the last couple of posts. Using git bisect (an amazing tool, good to see binary search in action!), the first bad commit was tracked down. We realized that the inclusion of the piranha.hpp header caused the slowdown; it was resolved by including only mp_integer.hpp, the header actually required.
    With immense help from Francesco, the problem was cornered to this:
    * Inclusion of thread_pool leads to the slowdown; a global variable that it declares, to be specific.
    * In general, a multi-threaded application may cause some compiler optimizations to be turned off, hence the slowdown.
    * Since this benchmark is memory-allocation intensive, another speculation is that the compiler allocates memory differently.

    This SO question asked by @bluescarni should lead to very interesting developments.

    We have to investigate this problem and get it sorted, not only because we depend on Piranha, but also because we might have multi-threading in SymEngine later too.


    No benchmarking was done this week.
    Here are my PR reports:

    * #500 - Expression Wrapper

    * #493 - The PR with Catch got merged.
    * #498 - Made SYMENGINE_ASSERT use a custom assert instead of assert(), and -DNDEBUG as a release flag with Piranha.
    * #502 - Made poly_mul use mpz_addmul (FMA); nice speedup of expand2b.
    * #496 - En route to fixing SYMENGINE_ASSERT, a minor fix in one of the assert statements.
    * #491 - Minor fix in the compiler choice documentation.

    Targets for Week 7

    • Get the Expression class merged.
    • Investigate and fix the slow-downs.

    The rest of tasks can be finalized in later discussion with Ondřej.

    That's all this week.

    July 03, 2015 12:00 AM

    Yue Liu

    GSoC 2015 Student Coding Week 06

    week sync 10

    Last week:

    • Issue #37: setting of ESP/RSP fixed, and a simple implementation of the migrate method.
    • All test cases in issue #38 passed.
    • All test cases in issue #39 passed.
    • All test cases in issue #36 passed, but more test cases are needed.

    Next week:

    • Optimize and fix potential bugs.
    • Add some doctests and pass the example doctests.

    July 03, 2015 12:00 AM

    Keerthan Jaic

    GSoC Midterm Summary

    So far, I’ve fixed a release blocking bug, updated the documentation and revamped the core tests. Most of my pull requests have been merged into master. I’ve also worked on refactoring some of the core decorators and improving the conversion tests. However, these are not yet ready to be merged.

    In the second period, I will focus on improving the conversion modules. More details can be found in my proposal.

    July 03, 2015 12:00 AM

    July 02, 2015

    Manuel Paz Arribas

    Mid-term summary

    Mid-term has arrived and quite some work has been done for Gammapy, especially in the observation, dataset and background modules. At the same time I have learnt a lot about Gammapy, Astropy (especially tables, quantities, angles, times and fits files handling), and python (especially numpy and matplotlib.pyplot). But the most useful thing I'm learning is to produce good code via code reviews. The code review process is sometimes hard and frustrating, but very necessary in order to produce clear code that can be read and used by others.

    The last week I have been working on a method to filter observation tables like the one presented in the figure of the first report. The method is intended to select the observations, according to different criteria (for instance data quality, or pointing within a certain region of the sky), that should be used for a particular analysis.

    In the case of background modeling, this is important to separate observations taken on or close to known sources from those taken far from them. In addition, the observations can be grouped according to similar observation conditions, for instance observations taken under a similar zenith angle. This parameter is very important in gamma-ray observations.

    The zenith angle of the telescopes is defined as the angle between the vertical (zenith) and the direction where the telescopes are pointing. The smaller the zenith angle is, the more vertical the telescopes are pointing, and the thinner is the atmosphere layer. This has large consequences in the amount and properties of the gamma-rays detected by the telescopes. Gamma-rays interact in the upper atmosphere and produce Cherenkov light, which is detected by the telescopes. The amount of light produced is directly proportional to the energy of the gamma-ray. In addition, the light is emitted in a narrow cone along the direction of the gamma-ray.

    At lower zenith angles the Cherenkov light has to travel a smaller distance through the atmosphere, so there is less absorption. This means that lower energy gamma-rays can be detected.

    At higher zenith angles the Cherenkov light of low-energy gamma-rays is totally absorbed, but the Cherenkov light cones of the high-energy ones are longer, and hence the section of ground covered is larger, so particles that fall further away from the telescopes can be detected, increasing the amount of detected high-energy gamma-rays.

    The zenith angle is maybe the most important parameter, when grouping the observations in order to produce models of the background.

    The method implemented can filter the observations according to this (and other) parameters. An example using a dummy observation table generated with the tool presented on the first report is presented here (please click on the picture for an enlarged view):
    Please notice that instead of the mentioned zenith angle, the altitude, the zenith's complementary angle (altitude_angle = 90 deg - zenith_angle), is used.
    In this case, the first table was generated with random altitude angles between 45 deg and 90 deg (or 0 deg to 45 deg in zenith), while the second table is filtered to keep only zenith angles in the range of 20 deg to 30 deg (or 60 deg to 70 deg in altitude).

    The tool can be used to apply selections on any variable present in the observation table. In addition, an 'inverted' flag has been programmed in order to be able to keep the values outside the selection range, instead of inside.
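    A minimal sketch of a range filter with such an 'inverted' flag (plain Python, not Gammapy's actual API; all names here are illustrative):

```python
def filter_observations(rows, key, value_min, value_max, inverted=False):
    """Keep rows whose rows[key] lies inside [value_min, value_max],
    or outside that range when inverted is True."""
    selected = []
    for row in rows:
        inside = value_min <= row[key] <= value_max
        if inside != inverted:
            selected.append(row)
    return selected

obs = [{'obs_id': 1, 'alt': 50.0},
       {'obs_id': 2, 'alt': 65.0},
       {'obs_id': 3, 'alt': 85.0}]
# Keep altitudes between 60 deg and 70 deg (i.e. zenith 20 deg to 30 deg).
print([row['obs_id'] for row in filter_observations(obs, 'alt', 60.0, 70.0)])  # [2]
```

    Passing inverted=True would instead keep observations 1 and 3, the ones outside the selected altitude band.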

    Recapitulating the progress done until now, the next steps will be to finish the tools that I am implementing now: the filter observations method described before and the background cube model class on the previous report. In both cases there is still some work to do: an inline application for filtering observations and more methods to create cube background models.

    The big milestone is to have a working chain to produce cube background models from existing event lists within a couple of weeks.

    by mapaz at July 02, 2015 11:44 PM

    Vito Gentile
    (ERAS Project)

    Enhancement of Kinect integration in V-ERAS: Mid-term summary

    This is my third report on what I have done for my GSoC project. If you don’t know what it is about and want to find more information, please refer to this page and this blog post.

    In this report, I will summarize what I have done until now, and also describe what I will do during the next weeks.

    My project is about the enhancement of the Kinect integration in V-ERAS, which was all based on C# in order to use the official Microsoft API (SDK version: 1.8). However, the whole ERAS source code is mainly written in Python, so the first step was to port the C# body tracker to Python using PyKinect. This also required rewriting the whole GUI (using PGU).

    Then, I have also integrated the height estimation of the user in the body tracker, by using skeletal information for calculating it. This has been implemented as a Tango command, so that it can be executed by any device connected to the Tango bus. This feature will be very useful to modulate the avatar size before starting simulation in V-ERAS.

    I have also taken a look at the webplotter module, which will be useful for the incoming AMADEE mission to verify the effect of virtual reality interaction on the user's movements. What I have done is to edit the script, which was not able to manage numpy arrays. These structures are used by PyTango for attributes defined as "SPECTRUM"; in order to correctly save the user's data in JSON, I had to add a custom JSON encoder (see this commit for more information).

    What I am starting to do now is perhaps the most significant part of my project: the implementation of the user's step estimation. At the moment, this feature is integrated in the V-ERAS Blender repository as a Python Blender script. The idea now is to change the architecture to be event-based: every time a Kinect frame with skeletal data is read by the body tracker, it will calculate the user's movements in terms of body orientation and linear distance, and push a new change event. This event will be read by a new module, being developed by Siddhant (another student who is participating in GSoC 2015 with IMS and PSF), to move a virtual rover (or any other humanoid avatar) according to the user's movements.
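    A generic sketch of this push/subscribe flow (plain Python; the real implementation uses Tango change events, so all names here are illustrative):

```python
class EventBus:
    """Tiny stand-in for an event channel: the tracker pushes one event
    per Kinect frame, and subscribed modules react to it."""
    def __init__(self):
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def push(self, event):
        for callback in self.listeners:
            callback(event)

bus = EventBus()
received = []
bus.subscribe(received.append)  # e.g. the rover/avatar control module
# One Kinect frame's worth of estimated movement:
bus.push({'orientation_deg': 12.5, 'distance_m': 0.3})
```

    The point of the event-based design is exactly this decoupling: the body tracker only pushes events and does not need to know which modules consume them.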

    I have started developing the event-based architecture, and what I will do in the coming days is integrate the step estimation algorithm, starting from the one currently implemented in V-ERAS Blender. Then I will improve it, in particular the linear distance estimation; the body orientation is quite well calculated by the current algorithm, so although I will check its validity, hopefully it can simply be used as it is now.

    The last stage of my project will be to implement gesture recognition, in particular the possibility to recognize whether the user's hands are closed or not. In these days I had to implement this feature in C# for a project that I am developing for my PhD research. With the Microsoft Kinect SDK 1.8 it is possible by using KinectInteraction, but I am still not sure about the feasibility of this feature with only PyKinect (which is a sort of binding of the C++ Microsoft API). I will find out more about this matter in the next weeks.

    I will let you know every progress with the next updates!

    Stay tuned!

    by Vito Gentile at July 02, 2015 10:00 PM

    Rafael Neto Henriques

    [RNH Post #6] Mid-Term Summary

    We are now at the middle of the GSoC 2015 coding period, so it is time to summarize the progress done so far and update the plan for the work of the second half part of the program.

    Progress summary

    Overall a lot was achieved! As planned in my project proposal, during the first half of the coding period I finalized the implementation of the first version of the diffusion kurtosis imaging (DKI) reconstruction module. Moreover, some exciting extra steps were taken!

    Accomplishing the first steps of the project proposal

    1) The first accomplished achievement was merging the work done during the community bonding period into the main Dipy master repository. This work consisted of some DKI simulation modules that can be used to study the expected ground-truth kurtosis values of white matter brain fibers. In this project, these simulations were useful to test the real brain DKI processing module. The documentation of this work can already be found on Dipy's website.

    2) The second achievement was finalizing the procedures to fit the DKI model on real brain data. This was done by inheriting from a module class already implemented in Dipy, which contains the implementation of the simpler diffusion tensor model (for more details on this you can see my previous post). Completion of the DKI fitting procedure was followed by the implementation of functions to compute the ordinary linear least squares fit solution of the DKI model. By establishing the inheritance between the DKI and diffusion tensor modules, duplication of code was avoided and the standard diffusion tensor measures were automatically incorporated. The figure below shows an example of these standard measures obtained from the new DKI module after the implementation of the relevant fitting functions.

    Figure 1 - Real brain standard diffusion tensor measures obtained from the DKI module, which include the diffusion fractional anisotropy (FA), the mean diffusivity (MD), the axial diffusivity (AD) and the radial diffusivity (RD). The raw brain dataset used for the image reconstruction was kindly provided by Maurizio Marrale (University of Palermo).

    3) Finally, from the developed DKI fitting functions, standard measures of kurtosis were implemented. These were based on the analytical solutions proposed by Tabesh and colleagues, which required, for instance, the implementation of sub-functions to rotate 4D matrices and to compute Carlson's incomplete elliptical integrals. Having implemented the analytical solutions of the standard kurtosis measures, I accomplished all the work proposed for the first half of the GSoC. Below I am showing the first real brain kurtosis images reconstructed with the newly implemented modules.

    Figure 2 - Real brain standard kurtosis tensor measures obtained from the DKI module, which included the mean kurtosis (MK), the axial kurtosis (AK), and radial kurtosis (RK). The raw brain dataset used for the images reconstruction was kindly provided by Maurizio Marrale (University of Palermo).

    Extra steps accomplished

    Some extra steps were also accomplished during the first half of the GSoC program. In particular, from the feedback that I obtained at the International Society for Magnetic Resonance in Medicine (ISMRM) conference (see my fourth post), I decided to implement an additional DKI fitting solution - the weighted linear least squares DKI fit. This fit is considered to be one of the most robust fitting approaches in recent DKI literature (for more details see my previous post). Therefore, having this method implemented, I am ensuring that the new Dipy DKI modules are implemented according to the most advanced DKI state of the art.

    To show how productive the ISMRM conference was for the project, I am sharing a photo that I took at the conference with one of the lead developers of Dipy - Eleftherios Garyfallidis.

    Figure 3 - Photo taken at the ISMRM conference - I am wearing the Dipy T-shirt on the right side of the photo, and on the left side you can see the lead Dipy developer Eleftherios Garyfallidis.

    Next steps

    After discussing with my mentor, we agreed that we should dedicate more time to the first part of the project proposal, i.e. improving the DKI reconstruction module. Due to the large extent of code and the mathematical complexity of this module, I will dedicate a couple more weeks to improving the module's performance, quality of code testing and documentation. Therefore, we decided to postpone the last two milestones initially planned for the second half of the GSoC to the last three weeks of the coding period.

    The next steps of the updated project plan are as described in the following points: 

    1) Merge the pull requests that contain the new DKI modules into Dipy's master repository. To facilitate the review of the implemented functions by the mentoring organization, I will split my initial pull request into smaller pull requests.

    2) While the previously developed code is being reviewed, I will implement new features in the kurtosis parameter estimation functions to reduce processing time. For instance, I will add optional arguments that allow each method to receive a Boolean mask marking the image voxels to be processed. This will save the time wasted on processing unnecessary voxels, such as those in the background.
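    As a sketch of the idea (a plain-Python stand-in, not the actual Dipy code), a Boolean mask simply gates which voxels the fitting routine touches:

```python
def fit_voxels(data, mask, fit_one):
    """Apply fit_one only where mask is True; masked-out (background)
    voxels are skipped entirely and reported as None."""
    return [fit_one(v) if m else None for v, m in zip(data, mask)]

# Three voxels; the middle one is background and is never processed.
print(fit_voxels([3.0, 0.0, 5.0], [True, False, True], lambda v: v * 2))
# [6.0, None, 10.0]
```

    In a real brain volume, where background voxels can outnumber brain voxels, skipping them cuts the total fitting time substantially.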

    3) I will also implement simpler numerical methods for a faster estimation of the standard DKI measures. These numerical methods are expected to be less accurate than the analytical solutions already implemented; however, they provide less computationally demanding alternatives. Moreover, they will provide a simpler mathematical framework which will be used to further validate the analytical solutions.

    4) Further improvements of the weighted linear least squares solution will be made. In particular, the weight estimates used in the fit will be improved by an iterative algorithm, as described in recent DKI literature.

    5) Finally, the procedures to estimate concrete biophysical measures and white matter fiber directions from DKI will be implemented, as described in the initial project proposal.

    by Rafael Henriques at July 02, 2015 09:55 PM

    Shridhar Mishra
    (ERAS Project)

    Mid - Term Post.

    Now that my exams are over, I can work on the project with full efficiency. The current status of my project looks something like this.

    Things done:

    • Planner in place.
    • Basic documentation update of Europa's internal workings.
    • Scrapped the pygame simulation of Europa.

    Things I am working on right now:

    • Integrating Siddhant's battery level indicator from the Husky rover diagnostics with the planner, for a more realistic model.
    • Fetching things from and posting things to the PyTango server (yet to bring this to a satisfactory level of working).

    Things planned for the future:

    • Integrate more devices.
    • Improve docs.

    by Shridhar Mishra at July 02, 2015 08:04 PM

    Ambar Mehrotra
    (ERAS Project)

    GSoC 2015: Mid-Term and 4th Biweekly Report

    Google Summer of Code 2015 started on May 25th and the midterm is already here. I am glad to note that my progress has been in accordance with the timeline I had initially provided. This includes all the work that I had mentioned till the last blog post in this series as well as the work done during the previous week.

    During the past week I was busy working on the Data Aggregation and Summary Creation for various branches in the tree. Basic structure and functionality of the tree is as follows:
    • The tree can have several nodes inside it.
    • Each node can either be a branch (can have more branches or leaves as children) or a leaf (cannot have any children).
    • Each node has its raw data and a summary.
    • The raw data for a leaf node is the data coming in directly from the device servers, while the raw data for branches is the summary of individual nodes.
    • The summary for a leaf node can be defined as the minimum/maximum/average value of the sensor readings over a period of time. Later, the user can create a custom function for defining the summary.
    • The summary for a branch is the minimum/maximum/average value of its children.
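    The leaf/branch summary scheme above can be sketched as follows (a hypothetical minimal model, not the actual implementation):

```python
class Node:
    def __init__(self, name, children=None, readings=None):
        self.name = name
        self.children = children or []   # non-empty for branches
        self.readings = readings or []   # raw sensor data for leaves

    def summary(self, how=min):
        # A leaf summarizes its own readings; a branch summarizes the
        # summaries of its children.
        if not self.children:
            return how(self.readings)
        return how(child.summary(how) for child in self.children)

t1 = Node('temp1', readings=[20.5, 21.0, 19.8])
t2 = Node('temp2', readings=[18.2, 18.9])
room = Node('room', children=[t1, t2])
print(room.summary(min), room.summary(max))  # 18.2 21.0
```

    Because a branch only ever looks at its children's summaries, the same code works at every level of the hierarchy.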

    For summarizing information over time at different levels of the hierarchy, it was necessary to keep logging the data coming in from the device servers. I decided to go with MongoDB, as a JSON-style database seemed like the best option for storing and retrieving data for different levels of the hierarchy, and MongoDB is quite popular for such tasks.

    I started a thread, spawned as soon as the user creates the summary for a data source, which polls the device server at regular intervals and logs the data in the MongoDB database. Similar threads were created for each level of the hierarchy, where each node has the information about its raw data and summary and knows its immediate children. This kind of structure simplified the process of managing the hierarchy at different levels.

    When the user clicks a node, its information (raw data and summary) is shown on the right panel in different tabs. The user also has the option of modifying the summary if he wants to do so. Here is a screenshot of the raw data:

    In the upcoming weeks and the later part of the program, I am planning to work on various bug fixes, implementation of functionality for multiple attributes from a device server and integration with the Tango Alarm Systems and monitoring alarms.

    Happy Coding!

    by Ambar Mehrotra at July 02, 2015 06:49 PM

    Jakob de Maeyer

    Meet the Add-on Manager

    Previously, I introduced the concept of Scrapy add-ons and how it will improve the experience of both users and developers. Users will have a single entry point for enabling and configuring add-ons without being required to learn about Scrapy's internal settings structure. Developers will gain better control over enforcing and checking proper configuration of their Scrapy extensions. In addition to their extension, they can provide a Scrapy add-on. An add-on is any Python object that provides the add-on interface. The interface, in turn, consists of a few descriptive variables (name, version, …) and two callbacks: one for enforcing configuration, called before the initialisation of Scrapy's crawler, and one for post-init checks, called immediately before crawling begins. This post describes the current state of, and issues with, the implementation of add-on management in Scrapy.

    Current state

    The pull request with the current work-in-progress on the implementation can be found on GitHub. Besides a lot of infrastructure (base classes, interfaces, helper functions, tests), its heart is the AddonManager. The add-on manager ‘holds’ all loaded add-ons and has methods to load configuration files, add add-ons, and check dependency issues. Furthermore, it is the entry point for calling the add-ons’ callbacks. The ‘loading’ and ‘holding’ part can be used independently of one another, but in my eyes there are too many cross-dependencies for the ‘normal’ intended usage to justify separating them into two classes.

    Two “single” entry points?

    From a user’s perspective, Scrapy settings are controlled from two configuration files: scrapy.cfg and settings.py. This distinction is not some historical backwards-compatible leftover, but has a sensible reason: Scrapy uses projects as its organisational structure. All spiders, extensions, declarations of what can be scraped, etc. live in a Scrapy project. Every project has a settings.py in which crawling-related settings are stored. However, there are other settings that cannot or should not live in settings.py. This (obviously) includes the path to settings.py (for ease of understanding, I will always write settings.py for the settings module, although it can be any Python module), and settings that are not bound to a particular project. Most prominently, Scrapyd, an application for deploying and running Scrapy spiders, uses scrapy.cfg to store information on deployment targets (i.e. the address and auth info for the server you want to deploy your Scrapy spiders to).

    Now, add-ons are bound to a project as much as crawling settings are, so add-on configuration should consequently live in settings.py. However, Python is a programming language, not a standard for configuration files, and its syntax is therefore (for the purpose of configuration) less user-friendly. An ini configuration like this:

    # In scrapy.cfg
    [addon_mysqlpipe]
    database = some.server
    user = some_user
    password = some!password

    would (could) look similar to this in Python syntax:

    # In settings.py
    addon_mysqlpipe = dict(
        _name = '',
        database = 'some.server',
        user = 'some_user',
        password = 'some!password',
    )

    While I much prefer the first version, putting add-on configuration into scrapy.cfg would be very inconsistent with the previous distinction between the two configuration files. It will therefore probably end up in settings.py. The syntax is a little less user-friendly, but after all, most Scrapy users should be familiar with Python. For now, I have decided to write code that reads from both.

    Allowing add-ons to load and configure other add-ons

    In some cases, it might be helpful if add-ons were allowed to load and configure other add-ons. For example, there might be ‘umbrella add-ons’ that decide what subordinate add-ons need to be enabled and configured given some configuration values. Or an add-on might depend on some other add-on being configured in a specific way. The big issue with this is that, with the current implementation, the first time an add-on's methods are called is during the first round of callbacks to update_settings(). Should an add-on load or reconfigure another add-on here, other add-ons might already have been called. While it is possible to ensure that the update_settings() method of the newly added add-on is called, there is no guarantee (and in fact, it is quite unlikely) that all add-ons see the same add-on configuration in their update_settings().

    I see three possible approaches to this:

    1. Forbid add-ons from loading or configuring other add-ons. In this case ‘umbrella add-ons’ would not be possible and all cross-configuration dependencies would again be burdened onto the user.
    2. Forbid add-ons from doing any kind of settings introspection in update_settings(), instead only allowing them to make changes to the settings object or load other add-ons. In this case, configuring already enabled add-ons should be avoided, as there is no guarantee that their update_settings() method has not already been called.
    3. Add a third callback, update_addons(config, addonmgr), to the add-on interface. Only loading and configuring other add-ons should be done in this method. While it may be allowed, developers should be aware that depending on the config (of their own add-on, i.e. the one whose update_addons() is currently called) is fragile as, once again, there is no guarantee in which order add-ons will be called back.

    I have not put too much thought into this just yet, but I think I prefer option 3.
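A minimal sketch of the shape described above (the callback name update_settings follows the post; the add-on class, its settings keys, and the simplified manager are illustrative assumptions, not Scrapy's actual AddonManager):

```python
class MySQLPipeAddon:
    """Hypothetical add-on exposing the interface described in the post."""
    name = "mysqlpipe"
    version = "1.0"

    def update_settings(self, config, settings):
        # First callback: enforce configuration before crawler init.
        settings["ITEM_PIPELINES"] = {"myproject.pipelines.MySQLPipe": 300}
        settings["MYSQL_DATABASE"] = config.get("database", "localhost")

    def check_configuration(self, config, crawler_settings):
        # Second callback: post-init checks, right before crawling begins.
        if not crawler_settings.get("MYSQL_DATABASE"):
            raise ValueError("mysqlpipe: no database configured")

def run_callbacks(addons, configs, settings):
    """Simplified manager: one round of update_settings, then checks."""
    for addon, config in zip(addons, configs):
        addon.update_settings(config, settings)
    for addon, config in zip(addons, configs):
        addon.check_configuration(config, settings)
    return settings

settings = run_callbacks([MySQLPipeAddon()], [{"database": "some.server"}], {})
```

The ordering problem discussed above is visible even in this toy version: anything an add-on changes in the first loop is only guaranteed to be seen by all add-ons in the second loop.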

    July 02, 2015 05:58 PM

    Julio Ernesto Villalon Reina

    Midterm Summary

    So, the first part of GSoC is over and the first midterm is due today. Here is a summary of this period. 

    The main goal of the project is to implement a segmentation program that is able to estimate the Partial Volume (PV) between the three main tissue types of the brain (i.e. white matter, cerebrospinal fluid (CSF) and grey matter). The input to the algorithm is a T1-weighted Magnetic Resonance Image (MRI) of the brain.  
    I checked back on what I have worked on so far and these are my two big accomplishments:

    - The Iterated Conditional Modes (ICM) for the Maximum a Posteriori - Markov Random Field (MAP-MRF) Segmentation. This part of the algorithm is at the core of the segmentation as it minimizes the posterior energy of each voxel given its neighborhood, which is equivalent to estimating the MAP. 
    - The Expectation Maximization (EM) algorithm in order to update the tissue/label parameters (mean and variance of each label). This technique is used because this is an “incomplete” problem, since we know the probability distribution of the tissue intensities but don’t know how each one contributes to it. 

    By combining these two powerful algorithms I was able to obtain 1) the segmented brain into three classes and 2) the PV estimates (PVE) for each tissue type. The images below show an example coronal slice of a normal brain and its corresponding outputs. 

    What comes next? Tests, tests, tests…. Since I have the segmentation algorithm already up and running I have to do many tests for input parameters such as the number of iterations to update the parameters with the EM algorithm, the beta value, which determines the importance of the neighborhood voxels, and the size of the neighborhood. Validation scripts must be implemented as well to compare the resulting segmentation with publicly available programs. These validation scripts will initially compute measures such as Dice and Jaccard coefficients to verify how close my method’s results are to the others.  
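The planned overlap measures are straightforward to compute from two binary label masks; a small stdlib sketch (an illustration, not the actual validation script):

```python
def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard coefficients between two binary masks,
    given as equal-length iterables of 0/1 (or truthy) values."""
    a = [bool(v) for v in mask_a]
    b = [bool(v) for v in mask_b]
    inter = sum(x and y for x, y in zip(a, b))
    size_a, size_b = sum(a), sum(b)
    union = size_a + size_b - inter
    dice = 2.0 * inter / (size_a + size_b) if size_a + size_b else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# Two tiny 1D "segmentations" agreeing on 2 of their foreground voxels:
dice, jaccard = dice_jaccard([1, 1, 1, 0], [1, 1, 0, 1])
# dice = 2*2/(3+3) = 0.666..., jaccard = 2/4 = 0.5
```

In practice the masks would be the flattened per-tissue label volumes from my method and from the reference segmentation.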

    For an updated version of the code and all its development since my first pull request please go to:

    T1 original image
    Segmented image. Red: white matter, Yellow: grey matter,
    Light Blue: cerebrospinal fluid
    Cerebrospinal fluid PVE
    Grey matter PVE

    White matter PVE

    by Julio Villalon ( at July 02, 2015 10:50 AM

    Jazmin's Open Source Adventure

    Quick Update - Wednesday, 1 July 2015

    Quick update!

    Today, I:

    1)  Made a PR for plot_airmass and plot_parallactic, as well as some example notebooks for their use.

    by Jazmin Berlanga Medina ( at July 02, 2015 05:03 AM

    Aron Barreira Bordin

    Mid-Term Summary


    We are at the middle of the program, so let's get an overview of my proposal: what I've done and what I'll be doing in the second part of the project. I'll also write about my experience so far, good and bad aspects, and how I'll work to do a good job.

    Project Development

    I'm really happy to be able to work on extra features not listed in my proposal. While making good progress on my proposal, I also worked on some interesting and important improvements to Kivy Designer. In the second part of the program, I'll finish coding my proposal and try to add as many new features/bug fixes as possible.


    Unfortunately, my University has a different calendar this year and I'll have classes until August 31 ;/, so I'm really sad not to be able to work full time on my project, since I sometimes need to split my time between studying and working. As I wrote above, I'm really happy with my progress, but I'd love to be able to do even more.

    Second period

    In this second period, I'll focus my development on releasing a more stable version of Kivy Designer. Right now Kivy Designer is an alpha tool and, honestly, isn't a nice tool to use yet; by the end of the project, my goal is to invert this point of view. To improve the project's stability, I'd like to add unit tests and documentation.


    Aron Bordin.

    July 02, 2015 12:00 AM

    July 01, 2015

    Siddhant Shrivastava
    (ERAS Project)

    Mid-term Report - GSoC '15

    Hi all! I made it through the first half of the GSoC 2015 program. This is the evaluation week of the Google Summer of Code 2015 program with the Python Software Foundation and the Italian Mars Society ERAS Project. Mentors and students evaluate the journey so far in the program by answering some questions about their students and mentors respectively. On comparing with the timeline, I reckoned that I am on track with the project so far.

    The entire Telerobotics with Virtual Reality project can be visualized in the following diagram -

    Project Architecture


    Husky-ROS-Tango Interface

    • ROS-Tango interfaces to connect the Telerobotics module with the rest of ERAS.
    • ROS Interfaces for Navigation and Control of Husky

    • Logging Diagnostics of the robot to the Tango Bus

    • Driving the Husky around using human commands

    Video Streaming

    • Single Camera Video streaming to Blender Game Engine

    This is how it works. ffmpeg is used as the streaming server to which Blender Game Engine subscribes.

    The ffserver.conf file, which describes the characteristics of the stream, is configured as follows:

    Port 8190
    MaxClients 10
    MaxBandwidth 50000

    <Feed webcam.ffm>
    File /tmp/webcam.ffm
    FileMaxSize 2000M
    </Feed>

    <Stream webcam.mjpeg>
    Feed webcam.ffm
    Format mjpeg
    VideoSize 640x480
    VideoFrameRate 30
    VideoBitRate 24300
    VideoQMin 1
    VideoQMax 5
    </Stream>
    Then the Blender Game Engine and its associated Python library bge kicks in to display the stream on the Video Texture:

    # Get an instance of the video texture
    tex = bge.texture.Texture(obj, ID)
    # A ffmpeg server is streaming the feed on the IP:PORT/FILE
    # specified in FFMPEG_PARAM; BGE reads the stream from the mjpeg file.
    tex.source = bge.texture.VideoFFmpeg(FFMPEG_PARAM)

    The entire source code for single camera streaming can be found in this repository.

    • Setting up the Minoru Camera for stereo vision

    It turns out this camera can stream at 30 frames per second for both cameras. The last week has been particularly challenging to figure out the optimal settings for the Minoru Webcam to work. It depends on the Video Buffer Memory allocated by the Linux Kernel for libuvc and v4l2 compatible webcams. Different kernel versions result in different performances. It is inefficient to stream the left and right cameras at a frame rate greater than 15 fps with the kernel version that I am using.

    • Setting up the Oculus Rift DK1 for the Virtual Reality work in the upcoming second term

    Crash-testing and Roadblocks

    This project was not without its share of obstacles. A few memorable roadblocks come to mind-

    1. Remote Husky testing - Matt (from Canada), Franco (from Italy), and I (from India) tested whether we could remotely control the Husky. The main issue we faced was network connectivity: we were all on different networks geographically, which the ROS instances on our machines could not resolve. Thus some messages (like GPS) were accessible whereas others (like Husky status messages) were not. The solution we settled on is to create a Virtual Private Network for our computers for future testing.

    2. Minoru Camera Performance differences - Since the Minoru's performance varies with the Kernel version, I had to bump down the frames per second to 15 fps for both cameras and stream them in the Blender Game Engine. This temporary hack should be resolved as ERAS moves to newer Linux versions.

    3. Tango related - Tango-Controls is a sophisticated SCADA library with a server database for maintaining device server lists. It was painful to use the provided GUI (Jive) to configure the device servers. To bring the process in line with other development activities, I wrote a little CLI-based interactive script for device server registration and de-registration, which a blog post explains in detail.

    4. Common testing platform - I needed to use ROS Indigo, which is supported only on Ubuntu 14.04, while ERAS is currently using Ubuntu 14.10. To let the Italian Mars Society members execute my scripts, they needed my version of Ubuntu. Solution - virtual Linux containers: we are using a Docker image which my mentors can use on their machines regardless of their native OS. This post explains this point.

    Expectations from the second term

    This is a huge project in that I have to deal with many different technologies like -

    1. Robot Operating System
    2. FFmpeg
    3. Blender Game Engine
    4. Oculus VR SDK
    5. Tango-Controls

    So far, the journey has been exciting and there has been a lot of learning and development. The second term will be intense, challenging, and above all, fun.

    To-do list -

    1. Get Minoru webcam to work with ffmpeg streaming
    2. Use Oculus for an Augmented Reality application

    3. Integrate Bodytracking with Telerobotics

    4. Automation in Husky movement and using a UR5 manipulator
    5. Set up a PPTP or OpenVPN for ERAS

    Time really flies by fast when I am learning new things. GSoC so far has taught me not only how to be a good software engineer, but also how to be a good open source community contributor. That is what the spirit of Google Summer of Code is about and I have imbibed a lot. Besides, working with the Italian Mars Society has also motivated me to learn the Italian language. So Python is not the only language that I'm practicing over this summer ;)

    Here's to the second term of Google Summer of Code 2015!

    Ciao :)

    by Siddhant Shrivastava at July 01, 2015 07:53 PM

    Sartaj Singh

    GSoC: Update Week-5

    Midterm evaluation has started and is scheduled to end by the 3rd of July. So far, GSoC has been a very good experience and hopefully the next half will be even better.

    Yesterday, I had a meeting with my mentor @jcrist. It was decided that we will meet every Tuesday on gitter at 7:30 P.M IST. We discussed my next steps in implementing the algorithm and completing FormalPowerSeries.


    • Most of my time was spent on writing a rough implementation of the second part of the algorithm. Currently it is able to compute series for some functions but fails for a lot of them. Some early testing indicates this may be due to rsolve not being able to solve some types of recurrence equations.

    • FourierSeries and FormalPowerSeries no longer compute the series of a function. Computation is performed inside the fourier_series and fps functions respectively. Both classes are now used for representing the series only.

    • I decided it was time to add Sphinx documentation for sequences, so I opened #9590. It will probably be best to add documentation at the same time as the implementation from now on.

    • Also opened #9599 that allows Idx instances to be used as limits in both Sum and sequence.
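Stepping back to the algorithm itself: the idea of computing a formal power series from a recurrence on its coefficients can be illustrated in miniature (a toy stdlib sketch, not the SymPy implementation). For exp(x), the coefficients satisfy a(k+1) = a(k)/(k+1):

```python
from fractions import Fraction

def coeffs_from_recurrence(a0, step, n):
    """Generate n coefficients a(0)..a(n-1) from a first-order
    recurrence a(k+1) = step(a(k), k)."""
    coeffs = [Fraction(a0)]
    for k in range(n - 1):
        coeffs.append(step(coeffs[-1], k))
    return coeffs

# exp(x): a(k+1) = a(k) / (k + 1), i.e. a(k) = 1/k!
exp_coeffs = coeffs_from_recurrence(1, lambda a, k: a / (k + 1), 5)
# -> [1, 1, 1/2, 1/6, 1/24]
```

The hard part in the real implementation is the opposite direction: getting rsolve to produce such a recurrence's closed-form solution for an arbitrary input function.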

    Tasks Week-6:

    • #9523's review is mostly done and it should get merged soon. I also still need to add the documentation for Fourier series.

    • Polish #9572 and make it more robust.

    • Improve the range of functions for which series can be computed. Probably will need to improve the algorithm for solving the recurrence equations.

    This week is going to be fun. Lots to do :)

    July 01, 2015 05:03 PM

    Jazmin's Open Source Adventure

    Quick Update - Monday, 29 June and Tuesday, 30 June 2015

    Quick update!

    The last two days, I:
    1) Updated to reflect the updated functions.
    2) Updated example notebooks to include those Astroplan objects/functions.

    by Jazmin Berlanga Medina ( at July 01, 2015 03:58 PM

    Richard Plangger

    GSOC Mid Term

    Now the first half of the GSoC program 2015 is over and it has been a great time. I compared the timeline just recently and I have almost finished all the work planned for the whole proposal. Here is a list of what I have implemented.
    • The tracing intermediate representation has now operations for SIMD instructions (named vec_XYZ) and vector variables
    • The proposed algorithm was implemented in the optimization backend of the JIT compiler
    • Guard strength reduction that handles arithmetic arguments.
    • Routines to build a dependency graph and reschedule the trace
    • Extended the backend to emit SSE4.1 SIMD instructions for the new tracing operations
    • Ran some benchmark programs and evaluated the current gain
    I even extended the algorithm to be able to handle simple reduction patterns, which I did not include in my proposal. This means that reductions such as numpy.sum can be executed with SIMD instructions.

    Here is a preview of trace loop speedup the optimization currently achieves.

    Note that the setup for all programs is the following: create two vectors (or one for the last three) of 10,000 elements and execute the operation (e.g. multiply) on the specified datatype. It would look similar to:

    import numpy as np

    a = np.zeros(10000, dtype='float')
    b = np.ones(10000, dtype='float')
    c = a * b

    After about 1000 iterations of multiplying, the tracing JIT records and optimizes the trace. The time is recorded before jumping to and after exiting the trace; the difference is what you see in the plot above. Note that there is still a problem with any/all and that this is only a micro benchmark; it does not necessarily tell anything about the whole runtime of the program.

    For multiply-float64 the theoretical maximum speedup is nearly achieved!

    Expectations for the next two months

    One thing I'm looking forward to is the Python conference in Bilbao. I have not met my mentors and other developers yet. This will be awesome!
    I have also been promised that we will take a look at the optimization together so that I can improve it further.

    To get even better results I will also need to restructure some parts of the Micro-NumPy library in PyPy.
    I think I'm quite close to the end of the implementation (because I started in February already), and I expect to spend the rest of the GSoC program extending, testing, polishing, restructuring and benchmarking the optimization.

    by Richard Plangger ( at July 01, 2015 01:39 PM

    Prakhar Joshi

    The Transform Finally!!

    Hello everyone, today I will tell you how I implemented the safe_html transform using the lxml library for Python. I ported safe_html away from its CMFDefault dependencies to lxml and installed the new transform in place of the old safe_html one, so whenever our add-on is installed it uninstalls safe_html and installs our new transform. There is a lot to say about lxml and why we use it, so let's explore.

    What is lxml ?

    The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt. It is unique in that it combines the speed and XML feature completeness of these libraries with the simplicity of a native Python API, mostly compatible but superior to the well-known ElementTree API.

    Why we need to port the transform to lxml ?

    Earlier the safe_html transform depended on CMFDefault, and we are all working to free Plone from CMFDefault dependencies: they made the transform slow, and the code base for safe_html was old and needed to be updated. And since lxml is fast, we chose it for our transform.

    How to implement our transform using lxml ?

    So far so good: we have decided what to use to remove the CMFDefault dependencies. The main thing now is how to use lxml for our new transform so that it behaves the same as the old safe_html transform. For that I had to dig into the lxml libraries and find the modules useful for our transform. I found that we can use the Cleaner class of the lxml package. This class has several methods, such as "__init__" and "__call__". So I inherited from the Cleaner class in my HTMLParser class and overrode the "__call__" method according to the requirements of our transform.
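As a rough illustration of the idea (a simplified stdlib stand-in, not the actual lxml Cleaner subclass), a parser that drops configured nasty tags while keeping the rest of the markup might look like:

```python
from html.parser import HTMLParser

class NastyTagStripper(HTMLParser):
    """Drop 'nasty' tags and everything inside them; keep other markup."""

    def __init__(self, nasty_tags):
        super().__init__(convert_charrefs=True)
        self.nasty = set(nasty_tags)
        self.out = []
        self.skip_depth = 0  # > 0 while inside a nasty element

    def handle_starttag(self, tag, attrs):
        if tag in self.nasty:
            self.skip_depth += 1
        elif not self.skip_depth:
            attrs_s = "".join(' %s="%s"' % (k, v)
                              for k, v in attrs if v is not None)
            self.out.append("<%s%s>" % (tag, attrs_s))

    def handle_endtag(self, tag):
        if tag in self.nasty:
            if self.skip_depth:
                self.skip_depth -= 1
        elif not self.skip_depth:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def strip_nasty(html, nasty_tags=("script", "embed", "object")):
    parser = NastyTagStripper(nasty_tags)
    parser.feed(html)
    parser.close()
    return "".join(parser.out)

# strip_nasty('<p>hi<script>evil()</script></p>') -> '<p>hi</p>'
```

The real transform does this with lxml's Cleaner, which also handles attributes, styles and JavaScript URLs, and is much faster.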

    I also created a new function named "fragment_fromstring()" which returns the cleaned elements parsed from a string, rejecting stray text between them. Here is the snippet for the function:

    def fragment_fromstring(html, create_parent=False, parser=None, base_url=None, **kw):
        if not isinstance(html, _strings):
            raise TypeError('string required')
        accept_leading_text = bool(create_parent)
        elements = fragments_fromstring(html, parser=parser,
                                        no_leading_text=not accept_leading_text,
                                        base_url=base_url, **kw)
        if not elements:
            raise etree.ParserError('No elements found')
        result = []
        for element in elements:
            # Reject stray text between top-level elements.
            if element.tail and element.tail.strip():
                raise etree.ParserError('Element followed by text: %r' % element.tail)
            element.tail = None
            result.append(element)
        return result

    After that I created the main class for our transform, named SafeHTML, and in it I defined the transform's default configuration: the initial nasty tags and valid tags.

    The transform takes its input as a stream and gives its output as a stream as well; we created a data object of the IDataStream class.
    The convert function then takes the data as input and operates on it as required: if the user supplies nasty tags and valid tags it filters the input HTML accordingly, and otherwise it falls back to the transform's default configuration.

    After writing the transform I tested it with a lot of HTML inputs and checked their outputs; they were all as required. The test cases were passing and the safe_html transform script we created was working perfectly. So the last thing left was to register our transform and remove the old safe_html transform of PortalTransforms.

    Register new transform and remove old safe_html transform on add-on installation..

    Now that the transform is ready, we have to integrate it with Plone. For that we have to modify the file that holds our add-on's installation configuration. We have a function called "post_install", so we will configure our transform and remove the old safe_html transform on post-installation of our add-on.

    There are 2 things that have to be done on the add-on installation :-
    1) The old safe_html of PortalTransform have to be uninstalled/unregistered.
    2) The new transform that we have created above named "exp_safe_html" have to installed.

    To uninstall the old transform we unregister it by name using the transform engine of PortalTransforms. We get the tool with "getToolByName(context, 'portal_transforms')", which gives us all the transforms of portal_transforms, and we just uninstall the transform named safe_html. To confirm it, we use a logger message saying "safe_html transform un registered".

    After unregistering the old safe_html it is time to register our new exp_safe_html transform. For that we use pkgutil to get the module where we have our new transform, and we register it using getToolByName(context, 'portal_transforms'); so, using the TransformEngine of PortalTransforms, we are able to register our new transform for our add-on and put a logger message on successful registration.

    Finally, when I ran the test cases after implementing these things, I saw the logger message "UnRegistering the Safe_html" followed by "Registering exp_safe_html".

    Yayaya!! Finally able to register my new transform and unregister the old transform.

    I tried to explain the code as much as possible, but since most of this was coding it is better to look at the code itself: it will be clearer from the code, as it is quite impossible to detail here every minute thing done in it. Hope you will understand.


    by prakhar joshi ( at July 01, 2015 11:55 AM

    Nikolay Mayorov

    2D Subspace Trust-Region Method

    Trust-region type optimization algorithms solve the following quadratic minimization problem at each iteration:

    \displaystyle \min m(p) = \frac{1}{2} p^T B p + g^T p, \text { s. t. } \lVert p \rVert \leq \Delta

    If such a problem is too big to solve directly, the following popular approach can be used. Select two vectors and put them in an n \times 2 matrix S. One of these vectors is usually the gradient g; the other is the unconstrained minimizer of the quadratic function (in case B is positive definite) or a direction of negative curvature otherwise. Then it's helpful to make the vectors orthogonal to each other and of unit norm (apply QR to S). Now let p lie in the subspace spanned by these two vectors, p = S q; substituting into the original problem we get:

    \displaystyle \min m'(q) = \frac{1}{2} q^T B' q + g'^T q, \text { s. t. } \lVert q \rVert \leq \Delta,

    where B' = S^T B S is 2 \times 2 matrix and g' = S^T g.
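A small sketch of this reduction (assuming NumPy; `newton_dir` stands in for whichever second vector is chosen alongside the gradient):

```python
import numpy as np

def reduce_to_2d(B, g, newton_dir):
    """Project the n-dimensional trust-region model onto the 2D subspace
    spanned by g and newton_dir (orthonormalized via QR)."""
    S = np.column_stack((g, newton_dir))
    Q, _ = np.linalg.qr(S)      # Q: n x 2 with orthonormal columns
    B2 = Q.T.dot(B).dot(Q)      # B' = S^T B S, the 2 x 2 quadratic term
    g2 = Q.T.dot(g)             # g' = S^T g, the reduced linear term
    return Q, B2, g2

# Example on a random positive definite 5x5 model:
n = 5
rng = np.random.RandomState(0)
A = rng.randn(n, n)
B = A.dot(A.T)                  # symmetric positive definite
g = rng.randn(n)
Q, B2, g2 = reduce_to_2d(B, g, np.linalg.solve(B, -g))
```

A 2D solution q of the reduced problem is then mapped back to the full space as p = Q q.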

    The problem becomes very small and supposedly easy to solve. But we still need to find its accurate solution somehow. An appealing approach, often mentioned in books without details, is to reduce the problem to a fourth-order algebraic equation. Let’s find out how to actually do that. As I mentioned in previous posts there are two main cases: a) B' is positive definite and -B'^{-1} g' lies within the trust region, in which case it is the optimal solution; b) otherwise an optimal solution lies on the boundary. Of course the only difficult part is case b. In this case let’s rewrite the 2 \times 2 problem with an obvious change of notation and assuming \Delta=1:

    \displaystyle a x^2 + 2 b x y + c y^2 +2 d x + 2 f y \rightarrow \min_{x, y} \\ \text{ s. t. } x^2 + y^2 = 1

    To solve it we need to find stationary points of Lagrangian L(x, y, \lambda) = a x^2 + 2 b x y + c y^2 +2 d x + 2 f y + \lambda (x^2 + y^2 - 1). Assigning partial derivatives to zeros, we come to the following system of equations:

    a x + b y + d + \lambda x = 0 \\  b x + c y + f + \lambda y = 0 \\  x^2 + y^2 = 1

    After eliminating \lambda we get:

    b x^2 + (c - a) x y - b y^2 + f x - dy = 0 \\  x^2 + y^2 = 1

    To exclude the last equation let’s use parametrization x = 2 t / (1 + t^2), y = (1 - t^2) / (1 + t^2). Then substitute it to the first equation and multiply by nonzero (1 + t^2)^2 to get (with a help of sympy):

    (-b + d) t^4 + 2 (a - c + f) t^3 + 6b t^2 + 2 (-a + c + f) t - b - d = 0

    And this is our final fourth-order algebraic equation (note how it is symmetric in some sense). After finding all its roots, we discard the complex ones, compute the corresponding x and y, substitute them into the original quadratic function and choose the pair giving the smallest value. Originally I thought that this equation can't have complex roots, but that wasn't confirmed in practice.

    Here is the code with my implementation. It contains the solver function and a function checking that the found solution is optimal according to the main optimality theorem for trust-region problems (see my introductory post on least-squares algorithms). Root-finding is done by numpy.roots, which I assume to be accurate and robust.

    import numpy as np
    from numpy.linalg import norm
    from scipy.linalg import cho_factor, cho_solve, eigvalsh, orth, LinAlgError


    def solve_2d_trust_region(B, g, Delta):
        """Solve a 2-dimensional general trust-region problem.

        Parameters
        ----------
        B : ndarray, shape (2, 2)
            Symmetric matrix, defines a quadratic term of the function.
        g : ndarray, shape (2,)
            Defines a linear term of the function.
        Delta : float
            Trust region radius.

        Returns
        -------
        p : ndarray, shape (2,)
            Found solution.
        newton_step : bool
            Whether the returned solution is the Newton step which lies within
            the trust region.
        """
        try:
            R, lower = cho_factor(B)
            p = -cho_solve((R, lower), g)
            if np.dot(p, p) <= Delta**2:
                return p, True
        except LinAlgError:
            pass

        a = B[0, 0] * Delta**2
        b = B[0, 1] * Delta**2
        c = B[1, 1] * Delta**2
        d = g[0] * Delta
        f = g[1] * Delta

        coeffs = np.array(
            [-b + d, 2 * (a - c + f), 6 * b, 2 * (-a + c + f), -b - d])
        t = np.roots(coeffs)  # Can handle leading zeros.
        t = np.real(t[np.isreal(t)])

        p = Delta * np.vstack((2 * t / (1 + t**2), (1 - t**2) / (1 + t**2)))
        value = 0.5 * np.sum(p * B.dot(p), axis=0) + np.dot(g, p)
        i = np.argmin(value)
        p = p[:, i]

        return p, False


    def check_optimality(B, g, Delta, p, newton_step):
        """Check if a trust-region solution is optimal.

        An optimal solution p satisfies the following conditions for some
        alpha >= 0:

        1. (B + alpha*I) * p = -g.
        2. alpha * (||p|| - Delta) = 0.
        3. B + alpha * I is positive semidefinite.

        Returns
        -------
        alpha : float
            Corresponding alpha value, must be non negative.
        collinearity : float
            Condition 1 check - norm((B + alpha * I) * p + g), must be very small.
        complementarity : float
            Condition 2 check - alpha * (norm(p) - Delta), must be very small.
        pos_def : float
            Condition 3 check - the minimum eigenvalue of B + alpha * I, must be
            non negative.
        """
        if newton_step:
            alpha = 0.0
        else:
            q = B.dot(p) + g
            i = np.argmax(np.abs(p))
            alpha = -q[i] / p[i]

        A = B + alpha * np.identity(2)
        collinearity = norm(A.dot(p) + g)
        complementarity = alpha * (Delta - norm(p))
        pos_def = eigvalsh(A)[0]
        return alpha, collinearity, complementarity, pos_def


    def matrix_with_spectrum(eigvalues):
        Q = orth(np.random.randn(eigvalues.size, eigvalues.size))
        return np.dot(Q * eigvalues, Q.T)


    def test_on_random(n_tests):
        print(("{:<20}" * 4).format(
            "alpha", "collinearity", "complementarity", "pos. def."))
        for i in range(n_tests):
            eigvalues = np.random.randn(2)
            B = matrix_with_spectrum(eigvalues)
            g = np.random.randn(2)
            Delta = 3.0 * np.random.rand(1)[0]
            p, newton_step = solve_2d_trust_region(B, g, Delta)
            print(("{:<20.1e}" * 4).format(
                *check_optimality(B, g, Delta, p, newton_step)))


    if __name__ == '__main__':
        test_on_random(10)
    The output after running the script:

    alpha               collinearity        complementarity     pos. def.           
    0.0e+00             1.1e-16             0.0e+00             4.0e-01             
    1.1e+00             9.2e-16             0.0e+00             6.0e-01             
    4.8e+01             5.0e-16             3.3e-16             4.7e+01             
    8.9e+00             4.4e-16             0.0e+00             1.0e+01             
    0.0e+00             3.1e-16             0.0e+00             1.2e+00             
    2.6e+00             1.1e-16             0.0e+00             2.4e+00             
    9.1e-01             4.4e-15             0.0e+00             1.1e-02             
    2.9e+00             2.2e-16             -3.2e-16            2.2e+00             
    1.6e+00             1.2e-16             -1.8e-16            7.0e-01             
    1.8e+00             8.0e-15             7.8e-16             5.2e-01     

    The figures tell us that all found solutions are optimal (see the docstring of check_optimality). So, provided we have a good function for root-finding, this approach is simple, robust and accurate.

    by nickmayorov at July 01, 2015 10:50 AM

    Goran Cetusic

    GSOC GNS3 Docker support: The road so far

    So midterm evaluations are ending soon and I'd like to write about my progress before that. If you remember, my last update was about how to write a new GNS3 module. Probably the biggest issue you'll run into is implementing links between various nodes. This is because GNS3 is a jungle of different technologies, each with its own networking stack. Implementing Docker links is no different.

    Docker is a different kind of virtualization than what GNS3 has been using until now -> OS-level virtualization. VMware, for instance, uses full virtualization. You can read more about the difference in one of the million articles on the Internet. An important thing to note is that Docker uses namespaces to manage its network interfaces. More on this here: It's great, go read it!

    GNS3 uses UDP tunnels for connecting its various VM technologies. This means that, after creating a network interface on the virtual machine, it allocates a UDP port on that interface. But this is REALLY not that easy to do in Docker because a lot of virtualization technologies have UDP tunnels built in - Docker doesn't. Assuming you've read the article above, this is how it will work (still having trouble with it):

    1. Create a veth pair
    2. Allocate UDP port on one end of veth pair
    3. Wait for container to start and then push the other interface into container namespace
    4. Connect interface to ubridge
    If you're wondering what ubridge is -> it's a great little piece of technology that allows you to connect udp tunnels and interfaces. Hardly anyone's heard of it but GNS3 has been using it for their VMware machines for quite some time:
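The four steps above hinge on ubridge's core trick: shuttling raw frames between UDP tunnel endpoints. As a rough illustration of the UDP side only (no real interfaces; the socket setup and the "frame" payload are made up for the sketch), a toy one-shot bridge on loopback might look like:

```python
import socket
import threading

# Two "nodes" and a bridge in the middle, all as UDP sockets on loopback.
node_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
node_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bridge = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for s in (node_a, node_b, bridge):
    s.bind(("127.0.0.1", 0))  # let the kernel pick a free port

def forward_once():
    # The bridge reads one datagram from its tunnel endpoint and replays
    # the payload toward node_b, the way ubridge glues two tunnels together.
    data, _ = bridge.recvfrom(2048)
    bridge.sendto(data, node_b.getsockname())

t = threading.Thread(target=forward_once)
t.start()
node_a.sendto(b"fake ethernet frame", bridge.getsockname())
t.join()
frame, _ = node_b.recvfrom(2048)
print(frame)  # b'fake ethernet frame'
for s in (node_a, node_b, bridge):
    s.close()
```

Real ubridge of course bridges whole interfaces (and NIOs) rather than single datagrams, but the relay loop is conceptually this.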

    The biggest problem is that all of this is hidden deep inside GNS3 code, which makes you constantly ask the question: "Where the hell should I override this?" Also, you have to take into consideration unforeseen problems like the one I've mentioned earlier: you have to actually start the container in order to create the namespace and push the veth interface into it.

    Another major problem that was solved is that Docker containers require a running process without which they'll just terminate. I've decided to make an official Docker image to be used for Docker containers: It's not yet merged as part of GNS3. Basically, it uses a sleep command to act as a dummy init process and also installs packages like ip, tcpdump, netstat etc. It's a great piece of code and you can use it independently of GNS3. In the future I expect there'll be a setting, something like "Startup command", so users will be able to use their own Docker images with their own init process.

    It's been a bumpy road so far, solving problems I hadn't really thought about when I was writing the proposal, but Docker support is slowly getting there.

    by Goran Cetusic ( at July 01, 2015 09:24 AM

    GNS Docker support

    GNS3 Docker support

    So the coding session for GSOC finally began this week. I got accepted with the GNS Docker support project and here is the project introduction and my plan of attack.

    GNS3 is a network simulator that faithfully simulates network nodes. Docker is a highly flexible VM platform that uses Linux namespacing and cgroups to isolate processes inside what are effectively virtual machines. Docker support would enable GNS3 users to create their own custom virtual machines and move beyond the limitations of network-oriented nodes and, because of its lightweight implementation, would make it possible to run thousands of standalone servers on GNS3.
    Right now GNS3 supports QEMU, VirtualBox and Dynamips (a Cisco IOS emulator). The nodes in GNS3 and the links between them can be thought of as virtual machines that have their own network stacks and communicate amongst themselves like separate machines on any other "real" network. While this is nice by itself, QEMU and VirtualBox are "slow" virtualization technologies because they provide full virtualization -> they can run any OS, but this comes at a price. So while QEMU and VirtualBox can run various network services, it's not very efficient.

    Docker, on the other hand, uses kernel-level virtualization, which means it's the same OS but processes are grouped together and different groups are isolated from each other, effectively creating a virtual machine. That's why Docker and similar virtualization solutions are extremely fast and can run thousands of GNS3 nodes -> there is no translation layer between host and guest systems because they run the same kernel. Docker is also quite versatile when it comes to managing custom kernel-based VMs. It takes the load off the programmer so he/she doesn't have to think about disk space, node startup, process isolation etc.

    Links between Docker containers pose an additional problem. In its current form, GNS3 uses UDP networking (tunnels) for all communication between nodes. The advantage is that this is done in userland: it is very simple and works on all OSes without requiring admin privileges. However, UDP tunnels have proven to make it harder to integrate new emulators into GNS3 because they usually do not support UDP networking out of the box. OpenvSwitch is a production-quality, multilayer virtual switch; interconnecting Docker containers and easing the integration of new emulators requires at least basic support for OpenvSwitch ports in GNS3.
Additionally, this would enable Docker links in GNS3 to be manipulated through Linux utilities like netem and tc that are specialized for such tasks, something not possible with UDP tunnels.

    Let's start coding!

    by Goran Cetusic ( at July 01, 2015 09:23 AM

    June 30, 2015

    Sudhanshu Mishra

    GSoC'15: Mixing both assumption systems, Midterm updates

    It's been very long since I've written anything here. Here are some of the pull requests that I've created during this period:

    There's also this patch which makes changes in the Symbol itself to make this work.

    commit de49998cc22c1873799539237d6202134a463956
    Author: Sudhanshu Mishra <>
    Date:   Tue Jun 23 16:35:13 2015 +0530
        Symbol creation adds provided assumptions to global assumptions
    diff --git a/sympy/core/ b/sympy/core/
    index 3945fa1..45be26d 100644
    --- a/sympy/core/
    +++ b/sympy/core/
    @@ -96,8 +96,41 @@ def __new__(cls, name, **assumptions):
    +        from sympy.assumptions.assume import global_assumptions
    +        from sympy.assumptions.ask import Q
             cls._sanitize(assumptions, cls)
    -        return Symbol.__xnew_cached_(cls, name, **assumptions)
    +        sym = Symbol.__xnew_cached_(cls, name, **assumptions)
    +        items_to_remove = []
    +        # Remove previous assumptions on the symbol with same name.
    +        # Note: This doesn't check expressions e.g. Q.real(x) and
    +        # Q.positive(x + 1) are not contradicting.
    +        for assumption in global_assumptions:
    +            if isinstance(assumption.arg, cls):
    +                if str(assumption.arg) == name:
    +                    items_to_remove.append(assumption)
    +        for item in items_to_remove:
    +            global_assumptions.remove(item)
    +        for key, value in assumptions.items():
    +            if not hasattr(Q, key):
    +                continue
    +            # Special case to handle commutative key as this is true
    +            # by default
    +            if key == 'commutative':
    +                if not assumptions[key]:
    +                    global_assumptions.add(~getattr(Q, key)(sym))
    +                continue
    +            if value:
    +                global_assumptions.add(getattr(Q, key)(sym))
    +            elif value is False:
    +                global_assumptions.add(~getattr(Q, key)(sym))
    +        return sym
         def __new_stage2__(cls, name, **assumptions):
             if not isinstance(name, string_types):
    In [1]: from sympy import *
    In [2]: %time x = Symbol('x', positive=True, real=True, integer=True)
    CPU times: user 233 µs, sys: 29 µs, total: 262 µs
    Wall time: 231 µs
    This branch
    In [1]: from sympy import *
    In [2]: %time x = Symbol('x', positive=True, real=True, integer=True)
    CPU times: user 652 µs, sys: 42 µs, total: 694 µs
    Wall time: 657 µs

    I did a small benchmark by creating 100 symbols, setting assumptions on them and later asserting them. It turns out that the version with changes in the ask handlers performs better than the other two.

    Here's the report of the benchmarking:

    When Symbol is modified

    Line #    Mem usage    Increment   Line Contents
         6     30.2 MiB      0.0 MiB   @profile
         7                             def mem_test():
         8     30.5 MiB      0.3 MiB       _syms = [Symbol('x_' + str(i), real=True, positive=True) for i in range(1, 101)]
         9     34.7 MiB      4.2 MiB       for i in _syms:
        10     34.7 MiB      0.0 MiB           assert ask(Q.positive(i)) is True

    pyinstrument report

    When ask handlers are modified

    Line #    Mem usage    Increment   Line Contents
         6     30.2 MiB      0.0 MiB   @profile
         7                             def mem_test():
         8     30.4 MiB      0.2 MiB       _syms = [Symbol('x_' + str(i), real=True, positive=True) for i in range(1, 101)]
         9     31.5 MiB      1.1 MiB       for i in _syms:
        10     31.5 MiB      0.0 MiB           assert ask(Q.positive(i)) is True

    pyinstrument report

    When satask handlers are modified

    Line #    Mem usage    Increment   Line Contents
         6     30.2 MiB      0.0 MiB   @profile
         7                             def mem_test():
         8     30.4 MiB      0.2 MiB       _syms = [Symbol('x_' + str(i), real=True, positive=True) for i in range(1, 101)]
         9     41.1 MiB     10.7 MiB       for i in _syms:
        10     41.1 MiB      0.0 MiB           assert ask(Q.positive(i)) is True

    pyinstrument report

    On the other hand, the documentation PR is almost ready to go.

    As of now I'm working on fixing the inconsistencies between the two assumption systems. After that I'll move to reduce autosimplification based on the assumptions in the core.

    That's all for now. Cheers!

    by Sudhanshu Mishra at June 30, 2015 10:00 PM

    Sahil Shekhawat

    GSoC Week #5 Update #2

    I raised many PRs including unit tests for various parts of the API, which include one giant PR including everything and atomic PRs for every single component. The main challenge was to balance between the symbolic and numeric parts of the API.

    June 30, 2015 09:37 PM

    Palash Ahuja

    Inference in Dynamic Bayesian Network (continued)

    For the past 2 weeks I have spent some time understanding the algorithmic implementation for inference and implementing it. Today I will be talking about the junction tree algorithm for inference in Dynamic Bayesian Networks.

    For processing the algorithm, here are the following steps
    1) Initialization :- This requires constructing the two initial junction trees J1 and Jt.
    1. J1 is the junction tree created from the initial timeslice. is the junction tree created from the timeslice 1 of the 2TBN(2 - timeslice bayesian network).Jt is the junction tree created from the timeslice 2 of the 2TBN. Time counter is initialized to 0. Also, let the interface nodes(denoted by I1, I2 for the timeslices 1 and 2 respectively ) be those nodes whose children are there in the first timeslice.
    2. If the queries are performed on the initial timeslice. Then the results can be output by the standard VariableElimination procedure where we could have the model having the timeslice 1 of the bayesian network as the base for inference.
    3. For evidence, if the current time in the evidence is 0, then the evidence should be applied to the initial static bayesian network. Otherwise, it has to be applied to the second timeslice of the 2-TBN.
    4. For creating the junction tree J1, the procedure as follows:-
      1. Moralize the initial static bayesian network.
      2. Add the edges from the interface nodes so as to make I1 a clique.
      3. Rest of the procedure is the same as it was before. The above step is the only difference.
    5. For the junction tree Jt, a similar procedure is followed, where there is a clique formed for I2 as well.
    2) Inference procedure: In this procedure, the clique potential from the interface nodes is passed onto the interface clique (similar to the message passing algorithm).
    The time counter is incremented accordingly.
    So basically the junction tree Jt acts as some sort of engine, where the in-clique is where the values are supplied and the out-clique is where the values are obtained, given the evidence.
    The variables in the query are marginalized out as always at each step, and the evidence is applied as well.
    The best part about this procedure is that it eliminates entanglement: only the out-clique potential is required for inference.
    The implementation is still in progress.
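Step 4 above (moralize, then force the interface nodes into a clique) can be sketched in a few lines of plain Python. This is a toy illustration, not pgmpy's implementation; the {node: parent-list} encoding of the DAG is my own:

```python
from itertools import combinations

def moralize(parents, force_clique=()):
    """Moralize a DAG given as {node: [parents]}: drop edge directions and
    "marry" co-parents. force_clique additionally fully connects a node
    set, as when the interface nodes I1 are made a clique."""
    edges = set()
    for child, ps in parents.items():
        for p in ps:
            edges.add(frozenset((p, child)))   # undirected parent-child edge
        for u, v in combinations(ps, 2):
            edges.add(frozenset((u, v)))       # marry co-parents
    for u, v in combinations(force_clique, 2):
        edges.add(frozenset((u, v)))           # force the extra clique
    return edges

# X and Y are co-parents of Z, so moralization adds the edge X-Y.
dag = {"X": [], "Y": [], "Z": ["X", "Y"]}
moral = moralize(dag)
print(moral == {frozenset("XZ"), frozenset("YZ"), frozenset("XY")})  # True
```

The junction tree is then built from the cliques of (a triangulation of) this moral graph, as in the standard algorithm.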

      by palash ahuja ( at June 30, 2015 04:11 PM

      Vivek Jain

      ProbModelXMl Reader And Writer

      I worked on the ProbModelXML reader and writer module for this project. My project involved solving various bugs which were present in the module. It also involved resolving the various TODOs. Some of the TODOs are:

      Decision Criteria:
      The tag DecisionCriteria is used in multicriteria decision making, as follows:

      <Criterion name = string >
      <AdditionalProperties />0..1

      Potential:
      The tag Potential is used to define potentials, as follows:

      <Potential type="" name="">

      <Variables >
      <Variable name="string"/>
      My project involved parsing the above type of XML for the reader module.

      For writer class my project involved given an instance of Bayesian Model, create a probmodelxml file of that given Bayesian Model.

      by Vivek Jain ( at June 30, 2015 08:45 AM

      Michael Mueller

      Week 5

      In our last meeting, my mentors and I talked about breaking up this summer's work into multiple major pull requests, as opposed to last year's enormous pull request which was merged toward the very end of the summer. It'll be nice to do this in pieces just to make sure everything gets into the master branch of Astropy as intended, so we're planning on getting a PR in very soon (we discussed a deadline of 1-2 weeks past last Wednesday's meeting). The idea is to have working code that handles changes to Table indices when the Table itself is modified, and after this PR we can focus more on speeding up the index system and adding more functionality.

      With that in mind, I mostly spent this week working on previous #TODO's, fixing bugs, and generally getting ready for a PR. Having previously ignored some of the subtleties of Table and Column copying, I found it pretty irritating to ensure that indices are preserved/copied/deep copied as appropriate when doing things like constructing one Table from another, slicing a Table by rows, etc. -- mostly because there are some intricacies involved in subclassing `numpy.ndarray` that I wasn't aware of before running across them. Also, while I managed to get this working correctly, there might end up being relevant time bottlenecks we need to take into consideration.
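One of those `numpy.ndarray` subclassing intricacies is that numpy creates views and slices without ever calling `__new__`, so any extra attribute has to be re-attached in `__array_finalize__`. A minimal sketch (the `IndexedColumn` class and its `index` payload are made up for illustration, not Astropy's actual classes):

```python
import numpy as np

class IndexedColumn(np.ndarray):
    # Toy Column-like subclass that must carry an 'index' attribute
    # through numpy's views and slices.
    def __new__(cls, data, index=None):
        obj = np.asarray(data).view(cls)
        obj.index = index
        return obj

    def __array_finalize__(self, obj):
        # numpy calls this for explicit construction, view casting and
        # slicing alike; without it, a slice would silently lose .index.
        if obj is None:
            return
        self.index = getattr(obj, "index", None)

col = IndexedColumn([3, 1, 2], index={"sorted_rows": [1, 2, 0]})
sliced = col[1:]  # slicing returns a view that keeps the attribute
print(sliced.index is col.index)  # True
```

In a real indexed table a row slice would also need its index remapped (or deep-copied) to the new row numbers rather than shared, which is exactly the kind of copy-vs-view bookkeeping described above.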

      I also moved the relevant tests for Table indices to a new file `` (adding some new tests), and fixed a couple issues including a bug with `Table.remove_rows` when the argument passed is a slice object. For the actual indexing engine, I found a library called bintrees which provides C-based binary tree and red-black tree classes, so for now I'm using this as the default engine (with the optional bintrees dependency, and falling back on my pure-Python classes if the dependency isn't present). I'm looking forward to figuring out the plan for a PR at this Wednesday's meeting, and from there moving on to optimization and increasing functionality.
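The optional-dependency arrangement described here is the usual try/except import pattern; the sketch below is my own illustration with a minimal sorted-mapping stand-in, not the project's actual pure-Python tree classes:

```python
import bisect

try:
    from bintrees import FastRBTree as TreeEngine  # C-backed red-black tree
except ImportError:
    class TreeEngine:
        # Pure-Python fallback keeping keys sorted with bisect; only the
        # handful of methods this sketch uses, not bintrees' full API.
        def __init__(self):
            self._keys, self._vals = [], []

        def insert(self, key, value):
            i = bisect.bisect_left(self._keys, key)
            if i < len(self._keys) and self._keys[i] == key:
                self._vals[i] = value          # overwrite existing key
            else:
                self._keys.insert(i, key)
                self._vals.insert(i, value)

        def get(self, key, default=None):
            i = bisect.bisect_left(self._keys, key)
            if i < len(self._keys) and self._keys[i] == key:
                return self._vals[i]
            return default

engine = TreeEngine()  # same interface whichever branch was taken
engine.insert(5, "row-a")
engine.insert(2, "row-b")
print(engine.get(2))  # row-b
```

The caller never needs to know which engine it got, which is what makes the dependency genuinely optional.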

      by Michael Mueller ( at June 30, 2015 03:55 AM

      Julio Ernesto Villalon Reina

      Hi all,

      I mentioned before that I was at a conference (Organization for Human Brain Mapping, 2015), where I had the great chance to meet with my mentors. Now it's time to update on what was done during those days and during the week after (last week).
      As stated in my proposal, the project consists of classifying a brain T1 MRI into “tissue classes” and estimating the partial volume at the boundary between those tissues. Consequently, this is a brain segmentation problem. We decided to use a segmentation method based on Markov Random Field modeling, specifically the Maximum a Posteriori MRF approach (MAP-MRF). The implementation of a MAP-MRF estimation for brain tissue segmentation is based on the Expectation Maximization (EM) algorithm, as described in Zhang et al. 2001 ("Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm," Medical Imaging, IEEE Transactions on, vol.20, no.1, pp.45,57, Jan 2001). The maximization step is performed using the Iterative Conditional Modes (ICM) algorithm. Thus, together with my mentors, we decided to first work on the ICM algorithm. I started working on it during the Hackathon at OHBM and finished it up last week. It is working now and I already shared it publicly to the rest of the DIPY team. I submitted my first pull request called:

      WIP: Tissue classification using MAP-MRF

      There was a lot of feedback from all the team, especially regarding how to make it faster. The plan for this week is to include the EM on top of the ICM and provide the first Partial Volume Estimates. Will do some testing and validation of the method to see how it performs compared to other publicly available methods such as FAST from FSL (

      by Julio Villalon ( at June 30, 2015 12:28 AM

      June 29, 2015

      Wei Xue

      GSoC Week 5

      Week 5 began with a discussion about whether we should deprecate params. I fixed some bugs in the checking functions, the random number generator and one of the covariance update methods. In the following days, I completed the main functions of GaussianMixture and all test cases, except the AIC, BIC and sampling functions. The tests were somewhat challenging, since the current implementation in the master branch contains very old test cases imported from Weiss's implementation which never got improved. I simplified the test cases and wrote more tests for things not covered by the current implementation, such as covariance estimation, ground-truth parameter prediction, and other user-friendly warnings and errors.

      Next week, I will begin to code BayesianGaussianMixture.

      June 29, 2015 04:03 PM


      Zubin Mithra

      MIPS and MIPSel now in, doctests added

      I was travelling for the most part of last week, and that's why this post is coming out a bit late. Right now we have doctests for ARM, MIPS and MIPSel added in, and the code has been changed to use an <offset: reg> representation internally. I've made a pull request with squashed commits at for those of you who wish to see the diff involved.

      by Zubin Mithra<br />(pwntools) ( at June 29, 2015 02:19 PM

      Mark Wronkiewicz

      Bug Hunt

      C-Day plus 34

      For over a week now, the name of the game has been bug hunting. I've had a finished first draft since the last blog post, so I've been trying to get the output of my custom SSS filter to match the proprietary version with sample data. One issue that took a couple days to track down was a simple but erroneous switch of two angles in a complex spherical-coordinate-gradient to Cartesian-coordinate-gradient transformation matrix. I can’t say that this is a new class of obstacles – spherical coordinates have thrown wrench after wrench into my code since different mathematicians regularly define these coordinates in different ways. (Is it just me, or is having seven separately accepted conventions for the spherical coordinate system a bit absurd?) My project crosses a couple domains of mathematics, so wrestling with these different conventions has helped me deeply appreciate the other mathematical concepts that do have a single accepted formulation.

      Regardless, weeding out the spherical coordinate issue and a menagerie of other bugs has left me with a filter that produces filtered data that is similar to (but not exactly matching) the proprietary code (see some example output below). Luckily, I do have several checkpoints in the filter’s processing chain and I know the problem is between the last checkpoint and the final output. My mentors have been fantastic so far, and we have a potential bead on the last issue; the weak magnetic signals produced by the brain are measured with two major classes of MEG pickup coils: magnetometers and gradiometers. In a very simple sense, one measures the magnetic field while the other measures the spatial derivative of the magnetic field, and (because of this difference) they provide readings on very different scales that I have yet to normalize. Given some luck, this last patch could fix the issue and yield a working solution to the first half of my GSoC project! (Knock on wood.)
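To make the scale mismatch concrete, here is an entirely synthetic toy example of per-channel-type normalization; the channel counts, magnitudes and approach are illustrative only, not the actual SSS/MNE code:

```python
import numpy as np

# Toy data: magnetometer rows live at tesla-like scale, gradiometer rows
# at T/m-like scale, so their raw magnitudes differ by orders of magnitude.
rng = np.random.default_rng(0)
mags = 1e-13 * rng.standard_normal((2, 100))
grads = 1e-11 * rng.standard_normal((3, 100))

# Scale each channel type by its own overall norm before combining, so a
# joint decomposition is not dominated by the larger-scale channel type.
data = np.vstack([mags / np.linalg.norm(mags),
                  grads / np.linalg.norm(grads)])
print(np.allclose(np.linalg.norm(data[:2]), 1.0))  # True
```

After this kind of rescaling both channel families contribute at comparable magnitude, which is the gist of the normalization step described above.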

      Exemplar data showing raw unfiltered MEG signal and the same data after the benchmark SSS filter and my own custom filtering (top). Difference between benchmark and my custom implementation (bottom). The filter in progress is close, but not quite the same as the benchmark, implying there remain some bugs to fix.

      by Mark Wronkiewicz ( at June 29, 2015 02:47 AM

      June 28, 2015

      Stefan Richthofer

      Midterm evaluation

      The midterm-evaluation milestone is as follows:
      Have JyNI detect and break reference-cycles in native objects backed by Java-GC. This must be done by Java-GC in order to deal with interfering non-native PyObjects. Further this functionality must be monitorable, so that it can transparently be observed and confirmed.

      Sketch of some issues

      The issues to overcome for this milestone were manifold:
      • The ordinary reference-counting for scenarios that actually should work without GC contained a lot of bugs in JyNI C-code. This had to be fixed. When I wrote this code initially, the GC-concept was still an early draft and in many scenarios it was unclear whether and how reference-counting should be applied. Now all this needed to be fixed (and there are probably still remaining issues of this type)
      • JNI defines a clear policy how to deal with provided jobject-pointers. Some of them must be freed explicitly. On the other hand some might be freed implicitly by the JVM - without your intention, if you don't get it right. Also on this front vast clean-up in JyNI-code was needed, also to avoid immortal trash.
      • JyNI used to keep alive Java-side-PyObjects that were needed by native objects indefinitely.
        Now these must be kept alive by the Java-copy of the native reference-graph instead. It was hard to make this mechanism sufficiently robust. Several bugs caused reference-loss and had to be found to make the entire construct work. On the other hand some bugs also caused hard references to persist, which kept Java-GC from collecting the right objects and triggering JyNI's GC-mechanism.
      • Issues with converting self-containing PyObjects between native side and Java-side had to be solved. These were actually bugs unrelated to GC, but still had to be solved to achieve the milestone.
      • A mechanism to monitor native references from Java-side, especially their malloc/free actions had to be established.
        Macros to report these actions to Java/JyNI were inserted into JyNI's native code directly before the actual calls to malloc or free. What made this edgy is the fact that some objects are not freed by native code (which was vastly inherited from CPython 2.7), but cached for future use (e.g. one-letter strings, small numbers, short tuples, short lists). Acquiring/returning an object from/to such a cache is now also reported as malloc/free, but specially flagged. For all these actions JyNI records timestamps and maintains a native object-log where one can transparently see the lifetime-cycle of each native object.
      • The original plan to explore a native object's connectivity in the GC_Track-method is not feasible because for tuples and lists this method is usually called before the object is populated.
        JyNI will have a mechanism to make it robust against invalid exploration-attempts, but this mechanism should not be used for normal basic operation (e.g. tuple-allocation happens for every method-call) but only for edgy cases, e.g. if an extension defines its own types, registers instances of them in JyNI-GC and then does odd stuff with them.
        So now GC_track saves objects in a todo-list regarding exploration and actual exploration is performed at some critical JyNI-operations like on object sync-on-init or just before releasing the GIL. It is likely that this strategy will have to be fine-tuned later.

      Proof of the milestone

      To prove the achievement of the explained milestone I wrote a script that creates a reference-cycle of a tuple and a list such that naive reference-counting would not be sufficient to break it. CPython would have to make use of its garbage collector to free the corresponding references.
      1. I pass the self-containing tuple/list to a native method-call to let JyNI create native counterparts of the objects.
      2. I demonstrate that JyNI's reference monitor can display the corresponding native objects ("leaks" in some sense).
      3. The script runs Java-GC and confirms that it collects the Jython-side objects (using a weak reference).
      4. JyNI's GC-mechanism reports native references to clear. It found them, because the corresponding JyNI GC-heads were collected by Java-GC.
      5. Using JyNI's reference monitor again, I confirm that all native objects were freed. Also those in the cycle.

      The GC demonstration-script

      import time
      from JyNI import JyNI
      from JyNI import JyReferenceMonitor as monitor
      from JyNI.gc import JyWeakReferenceGC
      from java.lang import System
      from java.lang.ref import WeakReference
      import DemoExtension

      # For now we attempt to verify JyNI's GC-functionality independently from
      # Jython concepts like Jython weak references or Jython GC-module.
      # So we use java.lang.ref.WeakReference and java.lang.System.gc
      # to monitor and control Java-GC.

      JyWeakReferenceGC.monitorNativeCollection = True

      l = (123, [0, "test"])
      l[1][0] = l
      #We create weak reference to l to monitor collection by Java-GC:
      wkl = WeakReference(l)
      print "weak(l): "+str(wkl.get())

      # We pass down l to some native method. We don't care for the method itself,
      # but conversion to native side causes creation of native PyObjects that
      # correspond to l and its elements. We will then track the life-cycle of these.
      print "make l native..."

      print "Delete l... (but GC not yet ran)"
      del l
      print "weak(l) after del: "+str(wkl.get())
      print ""
      # monitor.list-methods display the following format:
      # [native pointer]{'' | '_GC_J' | '_J'} ([type]) #[native ref-count]: [repr] *[creation time]
      # _GC_J means that JyNI tracks the object
      # _J means that a JyNI-GC-head exists, but the object is not actually treated by GC
      # This can serve monitoring purposes or soft-keep-alive (c.f. java.lang.ref.SoftReference)
      # for caching.
      print "Leaks before GC:"
      print ""

      # By inserting this line you can confirm that native
      # leaks would persist if JyNI-GC is not working:
      #JyWeakReferenceGC.nativecollectionEnabled = False

      print "calling Java-GC..."
      print "weak(l) after GC: "+str(wkl.get())
      print ""
      print ""
      print "leaks after GC:"

      print ""
      print "===="
      print "exit"
      print "===="

      It is contained in JyNI in the file JyNI-Demo/src/

      Instructions to reproduce this evaluation

      1. You can get the JyNI-sources by calling
        git clone
        Switch to JyNI-folder:
        cd JyNI
      2. (On Linux with gcc) edit the makefile (for OSX with llvm/clang makefile.osx) to contain the right paths for JAVA_HOME etc. You can place a symlink to jython.jar (2.7.0 or newer!) in the JyNI-folder or adjust the Jython-path in makefile.
      3. Run make (Linux with gcc)
        (for OSX with clang use make -f makefile.osx)
      4. To build the DemoExtension enter its folder:
        cd DemoExtension
        and run
        python build
        cd ..
      5. Confirm that JyNI works by running the demo script:
         ./

      Discussion of the output


      JyNI: memDebug enabled!
      weak(l): (123, [(123, [...]), 'test'])
      make l native...
      Delete l... (but GC not yet ran)
      weak(l) after del: (123, [(123, [...]), 'test'])

      Leaks before GC:
      Current native leaks:
      139971370108712_GC_J (list) #2: "[(123, [...]), 'test']" *28
      139971370123336_J (str) #2: "test" *28
      139971370119272_GC_J (tuple) #1: "((123, [(123, [...]), 'test']),)" *28
      139971370108616_GC_J (tuple) #3: "(123, [(123, [...]), 'test'])" *28

      calling Java-GC...
      weak(l) after GC: None

      Native delete-attempts:
      139971370108712_GC_J (list) #0: -jfreed- *28
      139971370123336_J (str) #0: -jfreed- *28
      139971370119272_GC_J (tuple) #0: -jfreed- *28
      139971370108616_GC_J (tuple) #0: -jfreed- *28

      leaks after GC:
      no leaks recorded

      Let's briefly discuss this output. We created a self-containing tuple called l. To allow it to self-contain we must put a list in between. Using a Java-WeakReference, we confirm that Java-GC collects our tuple. Before that we let JyNI's reference monitor print a list of native objects that are currently allocated. We refer to them as "leaks", because all native calls are over and there is no obvious need for natively allocated objects now. #x names their current native ref-count. It explains as follows (observe that it contains a cycle):
      139971370108712_GC_J (list) #2: "[(123, [...]), 'test']"

      This is l[1]. One reference is from JyNI to keep it alive, the second one is from l.

      139971370123336_J (str) #2: "test"

      This is l[1][1]. One reference is from JyNI to keep it alive, the second one is from l[1].

      139971370119272_GC_J (tuple) #1: "((123, [(123, [...]), 'test']),)"

      This is the argument tuple that was used to pass l to the native method. The reference is from JyNI to keep it alive.

      139971370108616_GC_J (tuple) #3: "(123, [(123, [...]), 'test'])"

      This is l. One reference is from JyNI to keep it alive, the second one is from the argument tuple (139971370119272) and the third one is from l[1]. Thus it builds a reference cycle with l[1].

      After running Java-GC (and giving it some time to finish) we confirm that our weak reference to l was cleared. And indeed, JyNI's GC mechanism reported some references to clear, including all the reported leaks. Finally, another call to JyNI's reference monitor does not list any leaks any more.

      Check that this behavior is not self-evident

      In JyNI-Demo/src/ go to the section:

      # By inserting this line you can confirm that native
      # leaks would persist if JyNI-GC is not working:
      #JyWeakReferenceGC.nativecollectionEnabled = False

      Change it to
      # By inserting this line you can confirm that native
      # leaks would persist if JyNI-GC is not working:

      JyWeakReferenceGC.nativecollectionEnabled = False

      Run again. You will notice that the native leaks persist.

      Next steps

      The mechanism currently does not cover all native types. While many should already work, I expect that some bugfixing and clean-up will be required to make this actually work. With the demonstrated reference-monitor mechanism, the necessary tools to make this debugging straightforward are now available.

      After fixing the remaining types and providing some tests for this, I will implement an improvement to the GC mechanism that makes it robust against silent modification of native PyObjects (e.g. via macros), and provide tests for this.

      Finally I will add support for the PyWeakReference builtin type. As far as time allows after that I'll try to get ctypes working.

      by Stefan Richthofer ( at June 28, 2015 09:22 PM

      June 27, 2015

      Yask Srivastava

      New UserSettings and various tweaks

      My last commit for xstatic was finally merged. The less file compiled successfully for both themes and there were no issues even with the Basic theme.

      Instead of making a todo list in Etherpad, I have started creating issues on Bitbucket, since the theme has started coming out with basic functionality. Other people who notice bugs may also create issues there. Issues Page :

      RogerHaase pointed out another bug: the weird overlay of forms and menu when the hamburger button was clicked to collapse the navbar in the menu bar. This issue was fixed in cumulative patch #2 of the CR.

      New User Setting

      I finally implemented a new user settings page which uses bootstrap forms. This wasn't as easy as it sounds. We use flatland for forms. The way we rendered the form was through pre-defined macros. But the pre-defined macros also rendered unwanted stuff such as label, table, td, etc.

      So the way forms work in MoinMoin is like this. There are html macros defined in forms.html. There is a file which contains Flatland form related constants. So let's say we wish to render a form for the css input field. Code snippet :


      We have the form's class defined in a file. In this case it looks like:

      class UserSettingsUIForm(Form):
          name = 'usersettings_ui'
          theme_name = Select.using(label=L_('Theme name')).out_of(
              ((unicode(t.identifier), t.name) for t in get_themes_list()), sort_by=1)
          css_url = URL.using(label=L_('User CSS URL'), optional=True).with_properties(
              placeholder=L_("Give the URL of your custom CSS (optional)"))
          edit_rows = Natural.using(label=L_('Editor size')).with_properties(
              placeholder=L_("Editor textarea height (0=auto)"))
          results_per_page = Natural.using(label=L_('History results per page')).with_properties(
              placeholder=L_("Number of results per page (0=no paging)"))
          submit_label = L_('Save')

      This class provides the basic skeleton for forms. The file detects the kind of html tag required for the form field (for example: input text, checkbox, submit, etc.) and renders the macros present in the forms.html file.

      For convenience we have macros defined which contain some unwanted stuff, such as labels with a table-based form design (td, dd, dt).

      Editing this file would have changed the behavior in other non-bootstrap themes which depend on this design. So I had to make an exclusive forms.html template file for the modernized theme.

      I also changed the setting tabs design to match the current design of the theme.

      Another issue I encountered was with common.css. It contains global css style rules that are supposed to be used by all themes. But Bootstrap contains its own style rules. I was inheriting style rules from both files, which resulted in a weird layout. The only hack was to override these styles. If only there was something like this:


      So I ended up opening the developer tools, and under the style tab it showed me the properties which were being inherited; I manually overrode those styles in my modernized theme.less file. This hack fixes the weird table layout in the global history template page. Code Review patch (pending) :

      ChangeLogs for the patch

      • Uses latest xstatic, bootstrap version
      • Fix for footer jump. Now the footer won't jump on any page (even when the content is null).
      • User setting forms rewritten in bootstrap form design fashion (without <td>, <tr> tags) to suit the current design of the theme.
      • Macros in forms.html changed for compatibility reasons (as there is no requirement for labels and input boxes to be in <td>.. tags).
      • Written css rules in theme.less to override the default styling of tables written in common.css.
      • Minor changes in footer.
      • Various style improvements
      • Fixed overlay issues after reducing table width to ~900 pixels and clicking on the hamburger button
      • Darker text

      Anyway, this is how it looks now.



      June 27, 2015 09:15 PM

      Isuru Fernando

      GSoC Week 5

      This week, I looked into installing SymEngine in Sage and the wrappers. One issue was that we have a header named `complex.h`, and some libraries look inside `$SAGE_LOCAL/include` for the C header `complex`, which leads to errors. It was decided to install symengine headers inside a folder called `symengine`, so that a header is accessed in the `symengine/basic.h` form, which avoids clashes.

      I looked at other libraries to see how this is done. Some libraries keep the installed headers in a separate folder like `include/project_name`, but the `.cpp` files under `src`. Other libraries have headers and source files in the same folder, included in a `project_name` folder. Since SymEngine has headers and sources in the same folder, we decided to rename `src` to `symengine`. This led to another problem: the Python wrappers folder was also named `symengine`. So I moved the wrappers to `symengine/python/symengine` to enable using the symengine python wrappers inside the build directory (although you have to change directory into `symengine/python` to use them).

      Some other work involved making sure `make install` installed all the python tests, and installing dependencies in Travis CI without using sudo.

      That's all the updates for this week. Mid-term evaluations are coming up, so I hope to get a SymEngine spkg done this week.

      by Isuru Fernando ( at June 27, 2015 02:40 AM

      June 26, 2015

      Aman Jhunjhunwala

      GSOC ’15 Post 3 : Mid Term Evaluation

      Mid Term Report – AstroPython (Astropy)

      Week 3 – Week 6

      Report Date : 26th June, 2015

      The mid-term evaluations of Google Summer of Code 2015 are here! It has been an incredible 6 weeks of coding. Before I get into all the boring geeky stuff, a big Hello from the new AstroPython web app!

      The Main Index Page for the AstroPython website!

      Now that you've met AstroPython, here's a summary of the efforts that have gone into it in the past 3 weeks (the last report was on 5th June, 2015):

      The Creation Wizard which I had put up earlier was riddled with flaws and was unnecessarily complex code-wise. So I revamped the entire section, right from creation to displaying each article.

      The Creation form is now a single-step "Save Draft" or "Publish" kind of form. Users who save a draft can come back later and complete their post to publish it on the website. Articles are not moderated unless they are published. Once an article is "published" by a user, it awaits admin approval before showing up. An email is sent to all moderators stating that a post has entered the moderation queue. When an article is approved, the user gets an email stating so. Bypassing moderation for non-published articles was another speed bump for the project, and after failing at IRCs, StackExchange and other forums, I was happy to come up with an unconventional solution to the problem and resolve it soon!
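As a rough sketch of the workflow just described (the state names and the notify callback here are hypothetical illustrations, not AstroPython's actual models):

```python
# Hypothetical post states; the real app uses Django models, not dicts.
DRAFT, AWAITING_MODERATION, APPROVED = 'draft', 'awaiting_moderation', 'approved'

def publish(post, moderators, notify):
    """User hits 'Publish': the post enters the moderation queue and
    every moderator is emailed. Unpublished drafts bypass moderation."""
    post['state'] = AWAITING_MODERATION
    for moderator in moderators:
        notify(moderator, 'A post has entered the moderation queue')

def approve(post, notify):
    """Admin approves: the post becomes visible and the author is notified."""
    post['state'] = APPROVED
    notify(post['author'], 'Your post has been approved')
```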

      A unique feature of the Creation Form is the availability of 2 types of editors: a WYSIWYG CKEditor for rich text functionality and an advanced TeX-supported GFM Markdown editor. This was one of the most difficult parts of the project: integrating 2 editors to be dynamically replaceable in a form. Markdown has just become popular, but the lack of any fully functional "plug and play" Javascript editor meant that I had to fork one according to my needs. After trying out Epic, MarkItUp, BMarkdown, etc., I successfully forked CKEditor and the Pandao editor to my needs. Additional functionality for adding code snippets was added, to completely remove the need for a separate ACE code editor for the code snippet section.

      This was followed by developing Creation Forms for other sections. Here, I used the relatively little-known Django Dynamic Forms to allow for maximum re-usability of existing infrastructure. This created the creation forms for all sections in less than 10 lines of code.

      The next challenging portion was displaying rendered Markdown text on the website. I tried a lot of Markdown parsers, but there were features each one was lacking. So in the end, I used the "Preview Mode" of the current Markdown editor to feed it raw markdown and generate HTML content to display on our web application. This was extended by displaying forms from each section in a centralized framework.

      Moderator-approved user editing of posts from the front end was implemented successfully next. Edit forms are displayed in modals on selection, dynamically. Disqus comments and social sharing plugins (share on FB, Twitter, Email, etc.) were integrated next, finished by a custom "Upvote - Downvote - Unvote" function for each post, which works quite well for anonymous users too (it generates a unique key based on the IP address and the user's request metadata). Anonymous users can also successfully create or edit articles on the web app.
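The anonymous-vote key might be derived roughly like this (a sketch; the helper name and the exact request fields hashed here, IP address plus User-Agent, are assumptions, not the project's code):

```python
import hashlib

def anonymous_vote_key(ip_address, user_agent):
    # Hypothetical: hash stable request metadata into one identifier so
    # an anonymous visitor's vote can be recognised on repeat requests.
    raw = '{}|{}'.format(ip_address, user_agent)
    return hashlib.sha1(raw.encode('utf-8')).hexdigest()

key = anonymous_vote_key('203.0.113.7', 'Mozilla/5.0')  # 40-char hex digest
```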

      After this, we had our first round of "code cleaning", during which we decided to "de-compartmentalize" the old app and unify all the sections to use a common main framework. After this, most of our sections (Announcements, Blogs, Educational Resources, Events, News, Packages, Snippets, Tutorials, Wiki) lay complete. This was definitely one of the high points of the project and greatly benefited the timeline.

      "Search" was the next feature to be integrated. I initially started off with a Whoosh-powered Haystack modular search, but later shifted to the Watson engine. One can now search and display results from each section, or all sections, in a sorted manner. Filtering and sorting section rolls was next: sorting by popularity, hits and date created, and filtering by tags, which were discussed in the proposal, were successfully implemented.

      Then a "My Posts" section was created to store a list of all complete and incomplete posts from all the sections written by the user. This allows users to resume editing of draft articles easily. The sidebar was populated next with recent and popular posts, and our basic site was finally up!

      On the front end, a lot of work has gone into the styling and layout of these pages through LESS, CSS and Bootstrap, after taking into account the feedback from all my mentors. A Question and Answer forum has just been integrated; testing and customization for it remain. This completes the work summary of these 3 weeks!

      The test server is running at and the Github repository is accessible at

      I filled in the Mid Term Evaluation forms on the Melange website and had an extensively detailed "Mid Term Review" Google Hangout today with my mentors. I was glad that I far exceeded my mentors' expectations and passed the Mid Term Review with "flying colors". Hope I can carry on this work and make the GSoC project a huge hit! The next blog will be up in about 15 days from now, when we open the app to a limited audience for preview! Till then, Happy Coding!

      by amanjjw at June 26, 2015 07:50 PM

      Prakhar Joshi

      Getting Control panel for the add-on

      Last time I was able to finish things up with registering and deregistering the add-on in the Plone site, so that whenever I activate my add-on from the ZMI on a Plone instance, it registers the default profile of my add-on and also registers the add-on's browser layer. So these things went well, and some issues related to versions have also been solved lately.

      What's Next ?

      After the registration of the add-on, we need to create a view on the Plone site, so that whenever we click on the add-on we get a page where we can configure it. There is a default configuration for the transform script already there, and we can customize it. For that we have to create a control panel for our add-on, so that users get a platform to customize the configuration.

      There were 2 ways to create a control panel :-

      1) Either to overwrite the old control panel of the PortalTransforms safe_html.
      2) Or to create a separate control panel for our new safe_html add-on.

      I chose the second way and created a separate control panel for the add-on.

      How to create a control panel in plone add-on ?

      For creating a control panel in a Plone add-on we have to:
      1) Create a schema for the control panel.
      2) Register the control panel schema.
      3) Create permissions for registering the control panel.

      Let's start with the first step.

      Create a Schema for control panel

      We will create the schema for the control panel in a file, where we will define a FilterTagSchema which will contain space for nasty tags, stripped tags and custom tags. Similarly, we will create IFilterAttributeSchema and IFilterEditorSchema, and finally in IFilterSchema we will include all the above-mentioned classes. After that we will create a FilterControlPanelForm which will expose the above-defined schema on the Plone site.

      Here is the snippet for FilterControlPanelForm:

      class FilterControlPanelForm(controlpanel.RegistryEditForm):

          id = "FilterControlPanel"
          label = _("SAFE HTML Filter settings")
          description = _("Plone filters HTML tags that are considered security "
                          "risks. Be aware of the implications before making "
                          "changes below. By default only tags defined in XHTML "
                          "are permitted. In particular, to allow 'embed' as a tag "
                          "you must both remove it from 'Nasty tags' and add it to "
                          "'Custom tags'. Although the form will update "
                          "immediately to show any changes you make, your changes "
                          "are not saved until you press the 'Save' button.")
          form_name = _("HTML Filter settings")
          schema = IFilterSchema
          schema_prefix = "plone"

          def updateFields(self):
              super(FilterControlPanelForm, self).updateFields()

      Observe that we used IFilterSchema as the schema for the filter control form, which includes all the classes mentioned above.

      Now finally we will wrap the control panel form, and this will help us get our control panel on the Plone site.

      Register the Control panel 

      This was just the first step; after defining the control panel we have to register it in configuration.zcml in the generic way.

      Here is the snippet of code done to register the control panel :-

      <include file="permissions.zcml" />
      <!-- Filter Control Panel -->

      Here we have registered the browser page with the name safe_html_transfrom-settings, for the IPloneSiteRoot of CMFPlone, using our own add-on browser layer and importing our control panel class.

      Adding permissions for control panel

      Notice that we have added the permissions at the end of the setup; for that we will create a separate file named permissions.zcml and include that file in configuration.zcml.

      The permissions.zcml file looks like this :-
          <permission id="experimental.safe_html.controlpanel.Filtering"
                  title="Plone Site Setup: Filtering">
              <role name="Manager"/>
              <role name="Site Administrator"/>
          </permission>

      After adding these permissions to the generic setup and configuring the control panel, we will be able to see the control panel on the Plone site.

      Finally, after that, the control panel is working perfectly.

      What is Next ?

      After that, the main thing left before the mid-term evaluation is to register the safe_html transform in the add-on; by the way, the safe_html transform is almost ready. I will explain that in the next blog.

      Hope you like it!!

      Happy Coding.                                                                                                                                                                  


      by prakhar joshi ( at June 26, 2015 12:22 PM

      Andres Vargas Gonzalez

      Kivy backend using Line and Mesh graphics instructions

      This is the first prototype of the backend: points are extracted from the path and transformed into polygons or lines. Line and Mesh are used on the Kivy side to render these objects in a widget canvas. Labels are used for displaying text. Some examples can be found below. The lines are not well defined, and the next step is to optimize this drawing as well as the text. Some attributes should be added to the labels, and positioning is another problem.
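To illustrate the vertex hand-off (an assumed sketch of the plumbing, not the backend's actual code): Kivy's Line and Mesh instructions expect flat coordinate lists, so (x, y) vertices extracted from a matplotlib path have to be flattened first:

```python
def flatten_vertices(vertices):
    """Turn [(x0, y0), (x1, y1), ...] into [x0, y0, x1, y1, ...],
    the format Kivy's Line(points=...) expects."""
    flat = []
    for x, y in vertices:
        flat.extend((x, y))
    return flat

# e.g. a triangle's vertices extracted from a path
triangle = flatten_vertices([(0, 0), (100, 0), (50, 80)])
```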

      (screenshots: matplotlib examples rendered with the Kivy backend)

      by andnovar at June 26, 2015 07:44 AM

      Daniil Pakhomov

      Google Summer Of Code: Optimizing existing code. Creating an object detection module.

      The post describes the steps that were made in order to speed-up Face Detection.

      Rewriting of MB-LBP function into Cython

      The MB-LBP function is called many times during face detection. For example, in a region of an image containing a face of size (42, 35), the function was called 3491 times using the sliding-window approach. These numbers will be much greater for bigger images. This is why the function was rewritten in Cython. In order to make it fast, all the Python calls were eliminated and the function now uses nogil mode.
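For reference, MB-LBP over an integral image can be sketched in pure Python as follows (the Cython version is what actually ships; the block ordering and the >= comparison convention here are assumptions for illustration):

```python
def integral_image(img):
    """Summed-area table with ii[r][c] = sum of img[0..r-1][0..c-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def block_sum(ii, r, c, hb, wb):
    """Sum of the hb-by-wb block with top-left corner (r, c), in O(1)."""
    return (ii[r + hb][c + wb] - ii[r][c + wb]
            - ii[r + hb][c] + ii[r][c])

def mb_lbp(ii, r, c, hb, wb):
    """8-bit MB-LBP code for a 3x3 grid of hb-by-wb blocks at (r, c):
    each of the 8 surrounding blocks is compared against the center."""
    center = block_sum(ii, r + hb, c + wb, hb, wb)
    # surrounding blocks, clockwise from the top-left (assumed ordering)
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(offsets):
        if block_sum(ii, r + i * hb, c + j * wb, hb, wb) >= center:
            code |= 1 << (7 - bit)
    return code
```

On a uniform image every surrounding block equals the center, so every bit is set and the code is 255; real images produce the varied codes the classifier is trained on.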

      Implementing the Cascade function and rewriting it in Cython

      In the approach that we use for face detection, a cascade of classifiers is used to detect the face. Only faces pass all stages and are detected; all non-faces are rejected at some stage of the cascade. The cascade function is also called many times. This is why the class that holds all the data was written in Cython. As opposed to native Python classes, cdef classes are implemented using a C struct; Python classes use a dict for attribute and method lookup, which is slow.

      Other additional entities that are needed for the cascade to work were implemented using pure C structs.
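The rejection logic of such a cascade can be sketched like this (a minimal illustration of the idea, not the actual implementation, which stores its stage data in C structs as described):

```python
def evaluate_cascade(stages, window):
    """Run a window through a rejection cascade. Each stage sums its
    weak-classifier responses and rejects the window if the sum falls
    below the stage threshold; only windows that pass every stage are
    reported as detections."""
    for weak_classifiers, threshold in stages:
        score = sum(clf(window) for clf in weak_classifiers)
        if score < threshold:
            return False   # rejected early; most non-faces exit here
    return True            # passed all stages

# toy cascade: one stage whose single "classifier" returns the window value
stages = [([lambda w: w], 1)]
```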

      New module

      For the current project I decided to put all my work in skimage.future.objdetect. I did this because the functions may change a lot in the future. The name objdetect was chosen because the approach I use will make it possible to detect not only faces but also other objects on which the classifier can be trained.

      June 26, 2015 12:00 AM


      GSoC Progress - Week 5

      Hello, this post contains the fourth report of my GSoC progress. We hit Piranha's speed, the highlight of this week.


      We were able to reach Piranha's speed. At an average of around 14 ms on the benchmark, we are happy enough (it can still be improved) to start wrapping this low-level implementation in a Polynomial class. Last week I had reported a speed of 23 ms, and this week we are better.

      We had missed out a compiler flag, -DNDEBUG, which indicates Release mode of Piranha, leading to a slow-down (#482).
      Adding this compiler flag means we should not be using assert statements, which SymEngine does in SYMENGINE_ASSERT and in the test files too. These had to be sorted out if Piranha were to be a hard dependency of SymEngine's polynomial module.

      Hence, the issue of moving the test suite from asserts to a well-developed test framework came up again (#282). We explored a couple, but Catch still seemed to be the best option.
      Catch was implemented, which is a benefit to SymEngine in the long run too.
      As for SYMENGINE_ASSERT, we decided to change our macro to raise an exception or just abort the program.
      Catch is a very good tool. We thank Phil Nash and all the contributors for making it.

      Next up, wrapping into Polynomial.

      • We need some functionality to convert a SymEngine expression (Basic) into one of the hashset representations directly. Right now I convert Basic to poly and then to hashset, as just getting the speed right was the issue.

      • Domains of coefficients need to be thought of. SymPy and Sage will need to be looked into and their APIs studied. We need ZZ, QQ and EX; the work for EX has been done by Francesco Biscani, and this will be patched for the latest master and committed in his name. There could also be an automatic mode, which figures out the fastest representation for the given expression, at the price of a slightly slower conversion, as it needs to traverse the expression to figure out what representation fits.

      • Tuple-to-packed conversion when exponents don't fit. Also, encode supports signed ints, which is a boon to us, as we don't have to worry about negative exponents. For rational exponents we use a tuple.
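The packed exponent idea can be sketched in Python as a toy illustration of Kronecker-style packing (Piranha's actual encode routine differs in detail; the field width and bias here are arbitrary choices):

```python
def pack_exponents(exps, bits=16):
    """Store each (possibly negative) exponent of a monomial in a
    fixed-width field of a single integer, using a bias so that
    signed exponents fit."""
    bias = 1 << (bits - 1)
    code = 0
    for e in exps:
        assert -bias <= e < bias, "exponent does not fit; fall back to tuple"
        code = (code << bits) | (e + bias)
    return code

def unpack_exponents(code, n, bits=16):
    """Invert pack_exponents for a monomial with n variables."""
    mask = (1 << bits) - 1
    bias = 1 << (bits - 1)
    exps = []
    for _ in range(n):
        exps.append((code & mask) - bias)
        code >>= bits
    return exps[::-1]
```

When an exponent exceeds the field width, the packing asserts, mirroring the fallback-to-tuple behaviour described above.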

      I still haven't figured out the reason for the slow-down of expand2 and expand2b in my packint branch. I have been suggested to use git bisect; will do next week.


      expand2d results:

      Results of 10 executions:

      Maximum: 15ms
      Minimum: 14ms
      Average: 14.3ms

      Here, evaluate_sparsity() gave the following result for the hash_set:


      Piranha has the following results

      Average: 13.421ms
      Maximum: 13.875ms
      Minimum: 12.964ms

      A more detailed report of benchmarks and comparisons can be found here

      A minor PR in which MPFR was added as a Piranha dependency (472) was merged.

      Another PR, in which the tests were moved to Catch, is good to play with and merge, with minor build nits remaining (484).

      Targets for Week 5

      • Figure out the reason for the slow-down in benchmarks, and fix it.
      • Change the SYMENGINE_ASSERT macro to raise an exception.
      • Add the -DNDEBUG flag for builds with Piranha, as SymEngine no longer uses assert; close issue #482.
      • Port @bluescarni's work on EX to SymEngine.
      • Wrap the lower level into Polynomial for signed integer exponents in the ZZ domain, with functionality at least that of UnivariatePolynomial.

      That's all this week.

      June 26, 2015 12:00 AM

      Yue Liu

      GSOC2015 Students coding Week 05

      week sync 09

      Last week:

      • issue #36 fixed: support call/jmp/bx/blx anywhere.
      • issue #37 (set ESP/RSP) fixed, but the migrate() method needs a re-write.
      • issue #38 partially fixed.
      • issue #39 partially fixed.
      • Update some doctests.

      Next week:

      • Optimizing and fix potential bugs.
      • Add some doctests and pass the example doctests.

      June 26, 2015 12:00 AM

      Aron Barreira Bordin

      Kivy Designer Development


      I had some tests at my university last week, so I've made less progress in my development.

      Events/Properties viewer UI

      I did some modifications to Events and Properties UI, and fixed some related bugs.

      • Add custom event is working
      • Added a canvas line between properties and event names
      • Displaying Info Bubble in the correct position (information about the current event being modified)

      Designer Code Input

      • Added line number to KvLangArea
      • Implemented ListSetting radio

      Designer Tabbed Panel

      Implemented a closable and auto sizable TabHeader to Designer Code Input. Now it's possible to close open tabs :)

      Implemented a "smart" tab design. The tab style changes to inform you if there is something wrong with the source code, if the content has been modified, and to show the git status. (Only the design; not yet working.)

      Bug fixes

      I fixed a small bug in the Python Console. Edit Content View was not working with DesignerCodeInput; it's working now.

      That's it, thanks for reading :)

      Aron Bordin.

      June 26, 2015 12:00 AM

      June 25, 2015

      Yask Srivastava

      GSoC Updates | Hackathon | Teaching Django

      Informal Intro

      Ah! This week was a bit hectic, but I was able to do a considerable amount of work.


      I got all my pending code reviewed and committed the changes to my repo after resolving issues. The patch had to go through a number of iterations to resolve the issues in prior patches. The last patch fixed all the major bugs.

      As I mentioned in my previous post, I ported the modernized stylus theme to bootstrap by making changes in the global templates. But Roger Haase suggested making exclusive templates for Bootstrap themes, as making changes to the global templates would force all the other theme developers to use Bootstrap's components such as row, col-md-x, nav, panel, etc. I also made changes in the global templates to make sure they don't conflict with any bootstrap theme that works on top of them.

      Show me the code!!

      ChangeLogs from CR #2:

      ChangeLog from patch #2:
      • Fixed the alignment of sub menu tabs and item views tabs
      • Added active visual effect to the current tab view
      • Fixed horizontal scroll bug
      • Fixed padding inside sub menu
      • Increased font size for wiki contents

      Automation + Global History

      Changelog #3, new CR patch #1:
      • Added automation: run $ ./m css to automatically compile all the less files.
      • Modernized theme now runs with the current version of xstatic bootstrap.
      • Rewrote the global history template
      • Changed font sizes at various places

      Changelog #4:
      • Made changes as suggested in the last CR

      ChangeLog #5:
      • Created a special directory exclusively for the modernized theme's templates.
      • Added footer and breadcrumb.
      • Made changes as suggested by mentors in the previous patch

      Some of the bugs in previous CRs were:

      • HTML validation error due to the use of a form inside ul and unclosed div tags. This is fixed in my last commit.

      • Design break issues in mobile views. Fixed in commit:

      • Design break issue when the breadcrumb's path is too long. Fixed in this commit.

      ChangeLogs from CR #3:

      Changes in User Settings and common.js to support highlighting of current links in the menu:
      • Added common.css
      • Current opened tab now highlights in the menu
      • Various css rules written to work on top of/with common.css
      • Fixed the issue of the footer jumping while changing tabs in user settings
      • Fixed issue with breadcrumbs when the location address gets too long.
      • Fixed all the HTML validation errors

      The issues with last CR’s were discussed with mentors and fixed. Quick summary of my commits:

      I actually made a new branch in my fork of the repo called improvethemes. Since I am doing things step by step and some things get broken in intermediate stages, it wouldn't have been right to commit changes to the main branch. This can be easily merged when the feature is working 100% without any bugs.

      Now back to the summary: I have made 3 commits as yet; a 4th one, with improvements in the user settings page, is expected soon :). Anyway:

      1. Commit #1 : Created a new branch improvethemes

      2. Commit #2 : Wrote a new modernized theme based on bootstrap and also made it’s template files (layout.html, global_history.html) The template contains all the basic components such as navbar,sub menu, item menu, breadcrumb, footer.. etc.

      3. Commit #3 : Further improvements in the modernized theme and a few style fixes in the basic theme. Improvements in the modernized theme: added common.css; the currently opened tab now highlights in the menu; various css rules written to work on top of/with common.css; fixed the issue of the footer jumping while changing tabs in user settings; fixed an issue with breadcrumbs when the location address gets too long; fixed footer jump while changing tabs in user settings in the basic theme; fixed a design break issue in the basic theme's subscription box; fixed a design break at small resolutions and removed the form from under 'ul'.

      4. Commit #4: Fixed HTML validation error due to unclosed div tag

      I also updated xstatic bootstrap; here is the commit. This updates Bootstrap to version 3.3.5.

      Show me the screenshots!!

      Other Updates ?


      Yea! I participated in a continuous 34-hour AngelHack hackathon this week. It was a great experience, and we made an open-source chat summarizer tool. I am really proud of this app. We worked together all night and all day! Well done Vinayak Mehta, Ketan Bhutt and Pranu!

      About this app: Summarize It is a chat summarizer plugin for instant messaging applications. It summarizes large chat logs, enabling users to quickly understand the current context of the conversation. Currently Summarize It works on top of Slack as a plugin.

      App Link

      One last thing…!

      Teaching Django

      I have started teaching Django web development to college students as a part of their summer training. The first class was on Tuesday, which was an introductory class. All of the students are enthusiastic! I really like Django, and this is going to be a great experience.

      June 25, 2015 12:54 PM

      Sartaj Singh

      GSoC: Update Week-4

      This week was mostly spent on completing the rational algorithm for computing Formal Power Series of a function. It took some time, mainly due to testing.

      The rational algorithm is now mostly complete and I have opened PR-#9572 bringing in the changes. It is still a work in progress; there is still lots to do and test. So, I am going to spend the next few weeks implementing the rest of the algorithm.

      So far, the results are good. It is in general faster than the series function already implemented in SymPy.

      Tasks Week-5:

      • Get PR-#9523 polished and merged.

      • Improve the FormalPowerSeries class.

      • Start implementing SimpleDE and DEtoRE functions.

      I guess, that's it. See you later.

      June 25, 2015 09:39 AM

      Saket Choudhary

      Week 5 Update

      This week was a bummer. I tried profiling MixedLM, and made every mistake possible.
      My foolish attempts are discussed here:

      by Saket Choudhary ( at June 25, 2015 04:22 AM

      June 24, 2015

      Chau Dang Nguyen
      (Core Python)

      Week 4

      In the previous week, I have been testing the code and preparing the documentation.

      For testing, I was interested in PyRestTest at the beginning because of its simplicity. However, it doesn't support dynamic variable assignment between tests, so I decided to switch to JavaScript. This also brings convenience for the next step, as I will need to write a demo client in JavaScript. For documentation, is a great tool. It is user-friendly, allows quick testing, and supports authentication & API keys.

      I still feel like the code needs more polishing and it is not good enough for demonstration, so I decided to hold back the demo version for a while. In the meantime I will give more details about the project in blog posts.

      The REST handler will be separated from the rest of the tracker. Any request URI beginning with '/rest' will be forwarded to the REST handler. At this point, requests are divided into 3 groups: 'class', 'object' and 'attribute', with 5 possible actions: 'GET', 'POST', 'PUT', 'DELETE' and 'PATCH'.

      • 'issue' is a class: a 'GET' request to /rest/issue will return the whole collection of the 'issue' class.
      • 'issue12' is an object: a 'DELETE' request to /rest/issue12 will delete issue12 from the database.
      • 'title' is an attribute of 'issue12': a 'PUT' request to /rest/issue12/title with form data "data=new title" will make the title of issue12 become "new title".
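
      As a rough illustration of this grouping (a hypothetical sketch, not Roundup's actual implementation; the function name and the exact splitting rules are my own assumptions):

```python
import re

def classify_rest_uri(uri):
    """Classify a /rest URI into the 'class', 'object' or 'attribute' group.

    Illustrative sketch only -- not the tracker's real routing code.
    """
    parts = uri.strip("/").split("/")
    assert parts[0] == "rest"
    parts = parts[1:]
    if len(parts) == 1 and parts[0].isalpha():
        # e.g. /rest/issue -> the whole 'issue' collection
        return ("class", parts[0])
    if len(parts) == 1:
        # e.g. /rest/issue12 -> class name + numeric id
        m = re.match(r"([a-z]+)(\d+)$", parts[0])
        if m:
            return ("object", m.group(1), m.group(2))
    if len(parts) == 2:
        # e.g. /rest/issue12/title -> one attribute of one object
        return ("attribute", parts[0], parts[1])
    raise ValueError("unrecognized REST uri: %r" % uri)
```

      For example, `classify_rest_uri("/rest/issue12/title")` falls into the 'attribute' group, matching the third bullet above.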

      The REST handler also accepts the HTTP header 'X-HTTP-Method-Override' to override GET and POST with PUT, DELETE or PATCH, in case the client cannot perform those methods. For PUT and POST, the object data is sent in form-data format.

      Error status will appear in both the HTTP status code and the response body, so a client that cannot access the response headers can still retrieve the status from the response object.

      Detailed information about the standard will be published in the documentation.

      by Kinggreedy ( at June 24, 2015 05:01 PM

      Keerthan Jaic

      MyHDL GSoC Update

      MyHDL has two major components – core and conversion. The core allows users to model and simulate concurrent, event-driven systems (such as digital hardware) in Python. A subset of MyHDL models can be converted to Verilog or VHDL.

      Over the last couple of weeks, I’ve been working on improving MyHDL’s test suite. The core tests were written using unittest, and we used py.test as the test runner. However, MyHDL’s core relies on a few hacks such as global variables. This did not play well with pytest and prevented us from automating boring test activities with tox. Additionally, one core test relied on the behaviour of the garbage collector and could not be run with PyPy. I’ve converted all the core tests to pytest, and PyPy can run our entire test suite again. Now, we can also use tox and pytest-xdist to rapidly verify that tests pass on all the platforms we support.

      The conversion tests are a little trickier. MyHDL uses external simulators such as iVerilog, GHDL and Modelsim to verify the correctness of converted designs. The test suite currently uses global variables to pick the simulator, and the suite must be repeated for each simulator. This is cumbersome and inefficient because MyHDL’s conversion and simulation modules are re-run for every simulator. I’m currently working on using a combination of auto-detection and custom pytest CLI options to simplify the process of testing against multiple simulators. Furthermore, the test suite generates a number of converted Verilog/VHDL files and intermediate simulation results which are used for validation. These files are clobbered every time the tests are run, which makes it harder to compare the conversion results of different branches or commits. I’ve implemented a proof of concept using pytest’s tmpdir fixture to isolate the results of each run. Along the same lines, I’ve uploaded a small utility which uses tox to help analyze the conversion results of different versions of MyHDL and Python. I’ve also made a few minor improvements to the conversion test suite: a bug fix for Modelsim 10.4b, and support for the nvc VHDL simulator.

      Finally, I’ve been exploring ways to reduce the redundancy in MyHDL’s core decorators and conversion machinery. After I finish improving the conversion tests, I will send a PR upstream and begin working on improving the robustness of Verilog/VHDL conversion.

      June 24, 2015 12:00 AM

      June 23, 2015

      Gregory Hunt

      Working Log Likelihood

      Got a working log likelihood function, it would seem. Still need to do some more testing. Opened an issue on GitHub which fairly accurately describes the situation. We're going to start thinking about implementing the score function now. We'll have to think about how to numerically integrate these integrals, which may not be as nice as those in the log likelihood function. There were a couple of issues in implementing the log likelihood function. The last main issue was implementing enough algebra in the code so that numpy didn't have to handle potentially problematic large numbers.
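
      A standard example of this kind of algebra (an illustration of the general idea, not necessarily the exact rearrangement used in this project) is the log-sum-exp trick, which keeps the arguments of exp() small:

```python
import math

def logsumexp(xs):
    """Compute log(sum(exp(x) for x in xs)) without overflowing.

    Factoring out the maximum keeps every exp() argument <= 0,
    so the intermediate values stay in a safe floating-point range.
    """
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# exp(1000) overflows a float, but the log-space version is fine:
safe = logsumexp([1000.0, 1000.0])  # log(2 * e^1000) = 1000 + log(2)
```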

      by Gregory Hunt ( at June 23, 2015 07:36 PM

      AMiT Kumar

      GSoC : This week in SymPy #4 & #5

      Hi there! We're five weeks into GSoC. This week, I worked on polishing my previous PRs to improve coverage and fix some bugs.

        Progress of Week 4 & 5

      During the last couple of weeks my ComplexPlane class PR #9438 finally got merged; thanks to Harsh for thoroughly reviewing it and suggesting constructive changes.

      For this I managed to improve its coverage to a perfect 100%, which is indeed satisfying, as it means all the new code being pushed is completely tested.

      This week I also improved the exception handling and coverage in my linsolve PR; it also has 100% coverage.

      Coverage Report

      • [1] gauss_jordan_solve: 100%
      • [2] linsolve: 100%


      It's good to be merged now.

      Blocking issue: Intersection of FiniteSet with symbolic elements

      During week 5, while working on the transcendental equation solver, I discovered a blocking issue in FiniteSets: the intersection of a FiniteSet containing symbolic elements, for example:

      In [2]: a = Symbol('a', real=True)
      In [3]: FiniteSet(log(a)).intersect(S.Reals)
      Out[3]: EmptySet()

      Currently, either FiniteSet is able to evaluate the intersection, or it returns an EmptySet(). (See issues 9536 & 8217.)

      To fix this, I have opened PR #9540. It currently fixes both issues (9536 & 8217), but some tests that relied on the old behaviour of FiniteSet now fail.

      For example:

      In [16]: x, y, z = symbols('x y z', integer=True)
      In [19]: f1 = FiniteSet(x, y)
      In [20]: f2 = FiniteSet(x, z)
      • In Master:
      In [23]: f1.intersect(f2)
      Out[23]: {x}
      • It should rather be:
      In [5]: f1.intersect(f2)
      Out[5]: {x} U Intersection({y}, {x, z})

      The current behavior of FiniteSet in master is unacceptable: in the above example x, y, z are integer symbols, so they can be any integer, but master assumes they are distinct, which is wrong. The failing tests have been updated in aktech@e8e6a0b to incorporate the right behaviour.

      As of now there are a couple of failing tests, which need to pass before we can merge #9540.

      TODO Failing Tests:

      from __future__ import plan: Week #6

      This week I plan to fix the intersection of FiniteSets with symbolic elements & start working on the LambertW solver in solveset.

      $ git log

        PR #9540 : Intersection's of FiniteSet with symbolic elements

        PR #9438 : Linsolve

        PR #9463 : ComplexPlane

        PR #9527 : Printing of ProductSets

        PR #9524 : Fix solveset returned solution making denom zero

      That's all for now, looking forward to week #6. :grinning:

      June 23, 2015 12:00 AM

      June 22, 2015

      Vipul Sharma

      GSoC 2015: Coding Period (7th June - 22nd June)

      In the 3rd week of the coding period, I worked on the file upload feature for the ticket create and modify views. Now one can upload any patch file, media file or screenshot.

      CR (for file upload feature) :

      We had a meeting over our IRC channel where I discussed my work and cleared a few doubts with my mentors.

      I also worked on improving the UI of ticket create and modify views and made it look more consistent in both basic and modernized themes.

       Basic Theme (Before)

      Basic Theme (After)

       Modernized Theme (Before)

       Modernized Theme (After)

      360x640 view

      by Vipul Sharma ( at June 22, 2015 08:52 PM

      Lucas van Dijk

      GSoC 2015: Vispy progress report

      Another two weeks have passed! At some point everything fell into place and the arrow shader code became clear to me, which has resulted in all Glumpy arrows being ported to Vispy! A few selected examples can be seen in the image below.

      Several arrow heads

      It's not perfect yet: the OpenGL shader tries to automatically calculate the orientation of the arrow head, but it is often slightly off. Note that I've also added the new "inhibitor" arrow head. We've decided to document the principles behind this, and I used a few days to write a tutorial about it, which can be found here.

      However, there's a big update coming to Vispy changing quite a lot about the visual and scene system, so it requires a few changes to the code before it can be merged.

      The coming weeks I'll start thinking about the design of the network API!

      June 22, 2015 05:32 PM

      Wei Xue

      GSoC Week 5

      Week 5 began with a discussion about whether we should deprecate params. Beyond that, I just fixed some bugs in the checking functions and the PRNG.

      June 22, 2015 04:03 PM

      Ziye Fan

      [GSoC 2015 Week 4]

      This week I'm working on an optimization of inplace_elemwise_optimizer. The idea is described here. In the current version, when inplace_elemwise_optimizer tries to replace the outputs of a node, the graph can become invalid, so validate() is called frequently to make sure the graph is unbroken. But validate() is very time consuming, and the goal of this optimization is to make the optimizer more efficient by applying a new validation strategy.

      However, this optimization did not work as expected. The total optimization time became 10 times larger.

      New version:
      370.541s for fgraph.validate()
      1497.519540s - ('inplace_elemwise_optimizer', 'FromFunctionOptimizer', 33) - 315.055s

      The original version:
      72.644s for fgraph.validate()
      143.351832s - ('inplace_elemwise_optimizer', 'FromFunctionOptimizer', 34) - 30.064s

      After several small optimizations pointed out by my mentor, the time became 1178s.

      Why is it slower? I think it is because we are trying to apply the optimizer successfully on all nodes. It is a trade-off between the time taken by validate() and the number of nodes optimized. In the past, all failed nodes were ignored directly, so it was fast. Now we try to apply the optimization to them again and again, so validate() is called many more times than before.
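
      One generic way to balance this trade-off (an illustration of the strategy only, not Theano's actual code; the helper name and callbacks are hypothetical) is to apply candidate changes in batches, call the expensive validate() once per batch, and fall back to per-change validation only when a batch fails:

```python
def apply_with_batched_validation(candidates, apply, undo, validate, batch=8):
    """Apply candidate changes, validating once per batch instead of per change.

    apply/undo perform and revert one change; validate() raises on an
    invalid graph.  On failure the whole batch is undone and retried
    one by one, so valid changes inside it are not lost.
    """
    applied = []
    for i in range(0, len(candidates), batch):
        chunk = candidates[i:i + batch]
        for c in chunk:
            apply(c)
        try:
            validate()
            applied.extend(chunk)
        except Exception:
            for c in reversed(chunk):
                undo(c)
            # fall back to per-change validation for this chunk only
            for c in chunk:
                apply(c)
                try:
                    validate()
                    applied.append(c)
                except Exception:
                    undo(c)
    return applied

# Demo with a simulated "graph": a set of applied change ids; change 3 is invalid.
state = set()
def _validate():
    if 3 in state:
        raise ValueError("graph invalid")
result = apply_with_batched_validation(
    [1, 2, 3, 4, 5],
    apply=state.add, undo=state.discard, validate=_validate, batch=2)
```

      With a batch size of 2, validate() runs far fewer times when most changes are valid, while the one bad change is still rejected individually.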

      Here is a figure I plotted to display the number of nodes to optimize in each iteration.

      From this figure, we can see that although it is slower now, more nodes are optimized. A better balance should be found in the trade-off; maybe stopping the iteration earlier is a good choice? Or maybe validate() itself can be optimized?

      I'm still working on this. Please tell me if you have any idea.

      Thank you.

      by t13m ( at June 22, 2015 12:34 PM

      [GSoC 2015 Week 3]

      Hi, this is my third post of weekly record for GSoC 2015.

      What I did this week was to implement an optimization of the local_dimshuffle_lift optimizer. This optimizer performs the following transformations:

      DimShuffle(Elemwise(x, y)) => Elemwise(DimShuffle(x), DimShuffle(y))
      DimShuffle(DimShuffle(x)) => DimShuffle(x)
      This optimizer is a local optimizer, which means it will be called by global optimizers several times on different nodes. For example, here is a function graph:

      DimShuffle{1,0} [@A] ''   
      |Elemwise{mul,no_inplace} [@B] ''
      |DimShuffle{x,0} [@C] ''
      | |Elemwise{add,no_inplace} [@D] ''
      | |<TensorType(float64, vector)> [@E]
      | |DimShuffle{x} [@F] ''
      | |TensorConstant{42} [@G]
      |Elemwise{add,no_inplace} [@H] ''
      |<TensorType(float64, matrix)> [@I]
      |DimShuffle{x,x} [@J] ''
      |TensorConstant{84} [@K]
      If we apply local_dimshuffle_lift on this graph, it can be applied at least 6 times. Looking carefully at how this optimization works, we find that applying it on node @A results in two new sub-nodes that it can be applied on again.

      So the idea is to apply the optimizer recursively, so that it will be applied fewer times than before. An additional test case was also added.
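
      The recursive lifting can be sketched on a toy expression representation (a hypothetical mini-AST for illustration, not Theano's real graph structures):

```python
# Toy expressions: ("ds", x) stands for DimShuffle(x), ("ew", x, y) for
# Elemwise(x, y), and plain strings are leaf tensors.  Sketch only.

def lift(expr):
    """Recursively apply the two rules from the post:
       DimShuffle(Elemwise(x, y)) -> Elemwise(DimShuffle(x), DimShuffle(y))
       DimShuffle(DimShuffle(x)) -> DimShuffle(x)
    """
    if isinstance(expr, str):
        return expr
    if expr[0] == "ds":
        inner = lift(expr[1])
        if isinstance(inner, tuple) and inner[0] == "ew":
            # push the DimShuffle through, then lift the new sub-trees again
            return ("ew", lift(("ds", inner[1])), lift(("ds", inner[2])))
        if isinstance(inner, tuple) and inner[0] == "ds":
            return inner
        return ("ds", inner)
    # not a DimShuffle: just recurse into the children
    return (expr[0],) + tuple(lift(a) for a in expr[1:])
```

      One call to `lift` fully rewrites the tree, instead of the global optimizer revisiting each newly created sub-node in a later pass.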

      I think perhaps the recursion can be replaced by an iterative approach? I'll do some experiments first to gauge the recursion's efficiency. Also, this optimization reminds me of another optimization on the to-do list, which will also be done recursively. Is there a possibility to extract this pattern out?

      by t13m ( at June 22, 2015 11:24 AM

      Mridul Seth

      GSoC 2015 Python Software Foundation NetworkX Biweekly Report 2

      Hello folks, this blog post is regarding the work done in Week 3 and Week 4 of Google Summer of Code 2015.

      After some discussion in #1546 we decided to make a new branch `iter_refactor` and to split up the changes into multiple pull requests, as it will be easier to review and will help avoid merge conflicts, since these changes touch a lot of files. The work done in weeks 1 and 2 is now merged. The following methods now return an iterator instead of a list, and their *iter counterparts are removed.

      A simple example of these changes using a directed graph with two edges, one from node 1 to node 2 and one from node 2 to node 3.

      In [1]: import networkx as nx
      In [2]: G = nx.DiGraph()
      In [3]: G.add_edge(1, 2)
      In [4]: G.add_edge(2, 3)
      In [5]: G.nodes()
      Out[5]: <dictionary-keyiterator at 0x10dcd9578>
      In [6]: list(G.nodes())
      Out[6]: [1, 2, 3]
      In [7]: G.edges()
      Out[7]: <generator object edges at 0x10dcd5a00>
      In [8]: list(G.edges())
      Out[8]: [(1, 2), (2, 3)]
      In [9]: G.in_edges(2)
      Out[9]: <generator object in_edges at 0x10dcd5aa0>
      In [10]: list(G.in_edges(2))
      Out[10]: [(1, 2)]
      In [11]: G.out_edges(2)
      Out[11]: <generator object edges at 0x10dcd5eb0>
      In [12]: list(G.out_edges(2))
      Out[12]: [(2, 3)]
      In [13]: G.neighbors(2)
      Out[13]: <dictionary-keyiterator at 0x10dcd9c00>
      In [14]: list(G.neighbors(2))
      Out[14]: [3]
      In [15]: G.successors(2)
      Out[15]: <dictionary-keyiterator at 0x10dcd9db8>
      In [16]: list(G.successors(2))
      Out[16]: [3]
      In [17]: G.predecessors(2)
      Out[17]: <dictionary-keyiterator at 0x10dcd9f18>
      In [18]: list(G.predecessors(2))
      Out[18]: [1]

      During the review we also found a bug in the core DiGraph class, which was surprising, as this code has been there since 2010. Five years is a long time for a bug like this in a crucial place. The bug is now fixed. #1607

      We started working on the degree (#1592) and adjacency (#1591) methods. After a detailed conversation we decided to work on a new interface for degree. Now it will return the degree of the node if a single node is passed as an argument, and it will return an iterator for a bunch of nodes or if nothing is passed. The implementation work is in progress at #1617. I plan to complete this this week.

      In [1]: import networkx as nx
      In [2]: G = nx.path_graph(5)
      In [3]: G.degree(0)
      Out[3]: 2
      In [4]: G.degree()
      Out[4]: <generator object d_iter at 0x11004ef00>
      In [5]: list(G.degree())
      Out[5]: [(0, 1), (1, 2), (2, 2), (3, 2), (4, 1)]

      We have also started a wiki regarding various ideas discussed for NX 2.0.

      We also have a release candidate for v1.10; everyone is welcome to try it out and report any issues here. As a side project I also started making NetworkX tutorials based on IPython notebooks. Feel free to correct me and contribute to it: NetworkX Tutorial :)


      PS: A note on my workflow regarding this work.

      by sethmridul at June 22, 2015 11:02 AM

      Rupak Kumar Das

      Mid-Term time

      And I am back for a report!

      The last two weeks were spent mostly reading code for the implementations. Unfortunately, the curved cut implementation for the Cuts plugin seemed complex and time-consuming, so it has been put off till later. Instead, I completed the Save feature by adding save support to the MultiDim plugin. Now it can save the slice as an image and also generate a movie.

      And I finally figured out the Slit plugin! It was only a simple matter of plotting the array as an image but I overthought it. The only thing left to figure out is how to display it using Ginga’s viewer. I am also working on the Line Profile View feature which will plot a pixel’s intensity vs all wavelengths in the data.


      by Rupak at June 22, 2015 04:44 AM

      June 21, 2015

      Patricia Carroll

      Modeling Galaxies

      In 1962, J.L. Sersic empirically derived a functional form for the way light is spread out across a galaxy. This is called the Sersic surface brightness profile. The Sersic index n determines how steeply the light intensity drops off away from a galaxy's center, and different values of n describe different galaxy populations. An index of n=4, for example (a.k.a. the de Vaucouleurs profile), describes giant elliptical galaxies well, whereas smaller star-forming spiral galaxies like the Milky Way are best described by an exponential profile, n=1.
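
      For reference, the Sersic profile mentioned above has the standard form (the standard literature formula, not quoted from this post):

      $$ I(r) = I_e \exp\left\{ -b_n \left[ \left( \frac{r}{r_e} \right)^{1/n} - 1 \right] \right\} $$

      where $r_e$ is the effective (half-light) radius, $I_e$ is the intensity at $r_e$, and $b_n$ is a constant chosen so that $r_e$ encloses half the total light ($b_n \approx 2n - 1/3$).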

      Coming soon to Astropy are the Sersic1D and Sersic2D model classes. This is my first substantial code contribution to the project and I hope it proves useful to the astronomy community. This was also a great stepping stone to developing more complex functionality as I move forward with implementing bounding boxes and fast image rasterization.

      In [1]:
      from IPython.display import Image

      In [2]:
      import os

      In [3]:
      %matplotlib inline
      import numpy as np
      import matplotlib.pyplot as plt
      from astropy.modeling.models import Sersic1D, Sersic2D
      from astropy.visualization import LogStretch
      from astropy.visualization.mpl_normalize import ImageNormalize
      import seaborn as sns

      In [4]:
      # Sersic1D profiles for a range of indices n
      r = np.logspace(-1, 2, 200)  # radius grid (reconstructed; lost from the original cell)
      s1 = Sersic1D(amplitude=1, r_eff=5)
      for n in range(1, 10):
          s1.n = n
          plt.loglog(r, s1(r))  # plotting call reconstructed
      plt.ylabel('log Surface Brightness', fontsize=25)
      plt.xlabel('log Radius', fontsize=25)
      t = plt.text(.25, 1.5, 'n=1', fontsize=30)
      t = plt.text(.25, 300, 'n=10', fontsize=30)
      plt.title('Sersic1D model', fontsize=30)

      # Sersic2D model image with a log stretch
      x, y = np.meshgrid(np.arange(1000), np.arange(1000))
      mod = Sersic2D(amplitude=1, r_eff=250, n=4,
                     x_0=500, y_0=500, ellip=.5, theta=-1)
      img = mod(x, y)
      norm = ImageNormalize(vmin=1e-2, vmax=50, stretch=LogStretch())
      plt.imshow(img, norm=norm, origin='lower')  # display call reconstructed
      cbar = plt.colorbar()
      cbar.set_label('Surface Brightness', rotation=270, labelpad=40, fontsize=30)
      plt.title('Sersic2D model, $n=4$', fontsize=30)


      by Patti Carroll at June 21, 2015 11:33 PM

      Jaakko Leppäkanga


      Okay, the epoch viewer got merged and the butterfly plotter is coming along nicely. See the pictures below.

      Now I'm starting to move the focus to other visualization issues. I already did small tweaks to the raw plotter as well. Now it has the same awesome scaling features that the epoch plotter has.

      We also decided to make a todo-list for the GSOC. It already has quite a few items, so I think I have the next couple of weeks planned out for me. Here it is:

      by Jaakko ( at June 21, 2015 05:11 PM

      Chad Fulton

      Estimating a Real Business Cycle DSGE Model by Maximum Likelihood in Python

      This post demonstrates how to set up, solve, and estimate a simple real business cycle model in Python. The model is very standard; the setup and notation here are a hybrid of Ruge-Murcia (2007) and DeJong and Dave (2011). Since we will be proceeding step-by-step, the code will match that progression by generating a series of child classes, so that we can add the functionality step-by-step. Of course, in practice a single class incorporating all the functionality would be all you would need.

      by Chad Fulton at June 21, 2015 04:37 PM

      Lucas van Dijk

      Drawing arbitrary shapes with OpenGL points

      Part of my Google Summer of Code project involves porting several arrow heads from Glumpy to Vispy. I also want to make a slight change to them: the arrow heads in Glumpy include an arrow body; I want to remove that so you can put an arrow head on any type of line you want.

      Making a change like that requires that you understand how those shapes are drawn, and for someone without a background in computer graphics this took some thorough investigation of the code and the techniques used. This article is aimed at people like me: good enough programming skills and linear algebra knowledge, but almost no prior experience with OpenGL or computer graphics in general.

      June 21, 2015 02:41 PM

      Pratyaksh Sharma

      Mingling Markov Chains

      We're still at the same problem: we wish to generate samples from a probability distribution $P$ that is intractable to sample from directly. In our case, such a problem arises when we wish to sample from, say, a Bayesian network (given some evidence), or even a Markov network.

      A Markov chain is a commonly used construct to tackle this problem.

      What are Markov chains?

      Figure 1  Example of a Markov chain
      To put it simply, a Markov chain is a weighted directed graph $\mathbb{G} = (\textbf{V}, \textbf{E})$, where the out-edges from a node $\textbf{x}$ define a transition probability $\mathcal{T}(\textbf{x}\rightarrow\textbf{x'})$ of moving to another node $\textbf{x'}$.

      Pick a node $x^{(0)}$ as the start state of the Markov chain. We define a run as the sequence of nodes (states) $(x^{(0)}, x^{(1)}, ..., x^{(n)})$, where $x^{(i)}$ is sampled from $P(\textbf{x}) = \mathcal{T}(x^{(i-1)} \rightarrow \textbf{x})$.

      At the $t+1$-th step of a run, we can define the distribution over the states as:
      $$P^{(t+1)}(\textbf{X}^{(t+1)} = \textbf{x'}) = \sum_{\textbf{x}\in Val(\textbf{X})} P^{(t)}(\textbf{X}^{(t)} = \textbf{x}) \mathcal{T}(\textbf{x}\rightarrow \textbf{x'})$$

      The above process is said to converge when $P^{(t+1)}$ is close to $P^{(t)}$. At convergence, we call $P = \pi$ the stationary distribution of the Markov chain.

      $$\pi(\textbf{X}^{(t+1)} = \textbf{x'}) = \sum_{\textbf{x}\in Val(\textbf{X})} \pi(\textbf{X}^{(t)} = \textbf{x}) \mathcal{T}(\textbf{x}\rightarrow \textbf{x'})$$

      The useful property here is that as we take more and more steps of the Markov chain, the distribution over its current state gets closer and closer to the stationary distribution.
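
      This convergence is easy to see numerically. Below is a minimal sketch with a toy two-state chain (an illustration only, not the code from the pull request):

```python
def step_distribution(p, T):
    """One application of P^(t+1)(x') = sum_x P^(t)(x) * T(x -> x')."""
    n = len(T)
    return [sum(p[x] * T[x][y] for x in range(n)) for y in range(n)]

# Toy 2-state transition matrix: T[x][y] = T(x -> y)
T = [[0.9, 0.1],
     [0.5, 0.5]]

p = [1.0, 0.0]  # start deterministically in state 0
for _ in range(100):
    p = step_distribution(p, T)

# p is now very close to the stationary distribution pi = (5/6, 1/6),
# which solves pi = pi T.
```

      Sampling-based methods like Gibbs sampling exploit exactly this: after enough steps, states drawn from the chain behave like samples from $\pi$.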


      Check out the pull request!

      by Pratyaksh Sharma ( at June 21, 2015 12:05 PM

      June 20, 2015

      Shridhar Mishra
      (ERAS Project)

      Update! @20/06/2015

      Things done:

      • The basic code structure of battery.nddl has been set up.
      • PlannerConfig.xml is in place.
      • PyEUROPA is working on the Docker image.

      Things to do:
      • Test the current code with pyEUROPA.
      • Document the workings and other functions of pyEUROPA (priority).
      • Remove the Arrow server code from the existing model.
      • Remove the Pygame simulation and set up the model for real-life testing with the Husky rover.
      • Plan and integrate more devices for planning.

      by Shridhar Mishra ( at June 20, 2015 08:24 PM

      Chienli Ma

      The Second Two-week

      Almost forgot to write an update post.

      In these two weeks, I finished the first feature, "allow user to regenerate a function from a compiled one", and according to the review it "can be merged", but there's another PR that needs to be rebased first. So, it's done.

      Also, I have a draft of the code that allows the user to swap SharedVariables. By 'draft' I mean that I've finished the code as well as the test cases, and they work. I'll make a PR for review at the beginning of next week. I also have some new ideas to discuss with Fred.

      I hope I can finish all 3 features in the first 6 weeks: copy, swap_sharedvariable and delete_update, so that I can focus on OpFromGraph in the second half. It seems that someone has started working on it now. I hope he does not 'rob' my job. :)

      June 20, 2015 04:09 PM

      Tarashish Mishra

      I'm porting stuff to Python 3. And I'm loving it.

      GSoC update time! In case you didn't read my previous post, I'm participating in GSoC and porting Splash to Python 3.

      Quick update on what has been done so far. The pull request to add support for Qt5 and PyQt5 has been merged into the qt5 branch. The plan is to merge it into master after the Python 3 porting and some other cleanup (fixing the docs, Vagrantfile, etc.) is done.

      So now on to Python 3 porting.

      The main roadblock in porting Splash to Python 3 is that some dependencies don't (fully) support Python 3 yet. The major one is Twisted. But the good thing is that the most-used parts of Twisted already support Python 3, and the developers behind Twisted are actively working on porting more and more modules. Twisted also has a fairly well laid out guide for Python 3 porting, and the community is really responsive with feedback and reviews. Thanks to that, I have already ported one module and am currently working on porting twisted.web.proxy.

      Among other dependencies, my fork of qt5reactor is Python 3 compatible. And pyre2, a faster drop-in replacement for the re module from the standard library, is now Python 3 compatible after my PR was merged.

      For now, I'm porting the Splash code base one test at a time. Splash has a good test coverage and lots of tests. So that's working in my favor. That and pdb.

      That's all I have to share for now. Thanks for reading.

      by sunu at June 20, 2015 02:09 PM

      Himanshu Mishra

      GSoC '15 Progress: Second Report

      The past couple of weeks have been fun! I learnt many new and interesting things about Python.

      The modified source code of METIS has got in, followed by its Cython wrappers. Thanks again to Yingchong Situ for all his legacy work. Nevertheless, things were not smooth and there were lots of hiccups and things to learn.

      One of the modules in the package was named types, and it was being imported with an absolute import. Unaware of the fact that types is also a built-in module of Python, the situation was a mystery to me. Thanks to IPython, which told me this:
      In [2]: types                                                           
      Out[2]: <module 'types' from '/usr/lib/python2.7/types.pyc'>

      This alerted me to the differences, pros and cons of absolute and relative imports. Now one may ask (does anyone read these blog posts?) why I didn't go with the following in the first place.

      In [3]: from . import types

      Actually networkx-metis is supposed to be installed as a namespace package in networkx, and the presence of __init__.py is prohibited in a namespace package. Hence from . import types would raise a 'Relative import from a non-package' error.

      We are now following Google's style guide for Python.

      Being licensed under Apache License Version 2, we also had to add a file named NOTICE clearly stating the modifications we made to the library that networkx-metis is a derivative work of.

      Next important items in my TODO list are

      • Finalizing everything for namespace packaging

      • Setting up Travis CI

      • Hosting docs over

      That's all for now.

      Happy Coding!


      by Himanshu Mishra ( at June 20, 2015 01:28 PM

      Isuru Fernando

      GSoC 2015 - Week 4

      This week I got access to OS X, so I decided to do all the work related to OS X while I had it. First of all, I worked on getting CMake to build in Sage on OS X 10.10. CMake is supposed to be built with clang on OS X and does not support gcc. Since Sage uses gcc for building packages, I tried building CMake inside Sage. (Thanks to +Rajith for giving me the chance to work on his Mac.)

      The main problem with CMake on OS X 10.10 was that it uses the Apple header <CoreFoundation/CoreFoundation.h>, which is a collection of headers including <CoreFoundation/CFStream.h>, which in turn includes a faulty Apple header '/usr/local/include/dispatch/dispatch.h'. After going through the CMake code, it seemed that although 'CoreFoundation.h' was included, 'CFStream.h' was not actually needed. So I included only the specific headers needed (<CoreFoundation/CFBundle.h> etc.) and CMake was successfully built on Sage. Testing the CMake installation resulted in 6 out of 387 tests failing.

      More good news: we got access to test SymEngine on Travis CI with OS X. We are testing both clang and gcc to make sure symengine builds with both. Building with clang was successful, but with gcc there were a couple of problems, and they were hard to check on Travis CI as there were huge queuing times for OS X builds.

      One issue is that on OS X, the gmp library we link against is installed by Homebrew. g++ was using a different C++ standard library than the one gmp was compiled with, and hence linking errors occurred. A CMake check was added to try to compile a simple program with gmpxx and, if it fails, give an error message at configuration time.

      Another issue was that `uint` was used in some places instead of `unsigned int`. On Linux and with OS X clang, `uint` is typedef'd to `unsigned int`, so no problem was detected in the automated tests on Travis CI. Since `uint` is not a standard C++ type, it was changed to `unsigned int`.

      Next week, I will try to figure out why 6 tests in the CMake test suite fail, try to fix those, and get CMake into the optional packages. I will also work on the Sage wrappers for SymEngine.

      by Isuru Fernando ( at June 20, 2015 07:56 AM

      Manuel Paz Arribas

      Progress report

      The last two weeks have been a bit tough. After finishing the observation table generator mentioned in the previous post, I started working on the background module of Gammapy.

      I am currently working on a container class for 3D background models (a.k.a. cube background models). The three dimensions of the model are detector coordinates (X and Y) and energy. These kinds of background models are widely used by Fermi. The development of tools for creating such background models is an important milestone in my project, so implementing a class to handle them is crucial. For now, the class can read cube models from FITS files and slice the 3D models to produce 2D plots of the background rate.

      Two kinds of plots are produced:
      1. Color maps of the background rate for each energy slice defined in the model.
      2. 1D curves of the spectrum of the background rate for each detector bin (X, Y) defined in the model.
      The attached figures were produced using a sample background model FITS file from the Gammalib repository.

      More functionality should come into this class soon, for instance methods to create the background models from event lists, and smoothing of the models to attenuate the effects of the statistical nature of event detection. As I mentioned, I had some trouble developing a more complex class in Python, and this task is taking more time than expected. I am working hard to keep on track.
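      The two kinds of slices described above can be sketched with plain numpy. This is a minimal illustration only, not the actual Gammapy class: the axis order (energy, detector Y, detector X), the shapes and all names here are my assumptions.

```python
import numpy as np

# Hypothetical cube background model with axes (energy, det Y, det X).
# Axis order, shape and bin edges are illustrative assumptions only.
n_energy, n_y, n_x = 20, 60, 60
rng = np.random.default_rng(42)
bg_cube = rng.random((n_energy, n_y, n_x))       # background rate per bin
energy_edges = np.logspace(-1, 2, n_energy + 1)  # bin edges, e.g. in TeV

# 1. Color-map slice: the 2D detector image of the background rate for
#    one energy bin (what would be fed to plt.imshow per energy slice).
image = bg_cube[5]
assert image.shape == (n_y, n_x)

# 2. Spectrum slice: the 1D background-rate curve as a function of energy
#    for one detector bin (X, Y).
spectrum = bg_cube[:, 30, 30]
assert spectrum.shape == (n_energy,)
```

      In the real class the cube would of course be read from a FITS file (e.g. via astropy.io.fits) rather than generated randomly.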

      by mapaz ( at June 20, 2015 06:55 AM

      June 19, 2015

      Nikolay Mayorov

      Algorithm Benchmarks

      This post was updated to improve its clarity and to incorporate new information about “leastsqbound”.

      Before I present the results I want to make a few notes.

      1. Initially I wanted to find a very accurate reference optimal value for each problem and measure the accuracy of an optimization process by comparison with it. I abandoned this idea for several reasons. a) In local optimization there isn't a single correct minimum; all local minima are equally good. So ideally we should find all local minima, which can be hard, and the comparison logic with several minima becomes awkward. b) Sources with problem descriptions often provide inaccurate (or plainly incorrect) reference values, provide them only in single precision, or don't provide them at all. Finding optimal values with MATLAB (for example) is cumbersome, and we still can't guarantee the required accuracy.
      2. It is desirable to compare algorithms under identical termination conditions. But this requirement is never satisfied in practice, as we work with already-implemented algorithms. Also, there is no single correct way to specify a termination condition. So the termination conditions differ somewhat between algorithms, and there is nothing we can do about that.

      The methods benchmarked were dogbox, Trust Region Reflective (trf), leastsqbound (also used in lmfit) and l-bfgs-b (this method doesn't take the structure of the problem into account and works with f = \lVert r \rVert^2 and g = 2 J^T r).
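      As a concrete illustration of that last point, here is a toy setup of my own (using the Rosenbrock problem, which also appears in the table): a least-squares solver consumes the residual vector and Jacobian directly, while l-bfgs-b only ever sees the scalar f = \lVert r \rVert^2 and its gradient g = 2 J^T r. This is a sketch, not the benchmark code itself.

```python
import numpy as np
from scipy.optimize import leastsq, minimize

def residuals(x):
    # Rosenbrock in least-squares form: r = [10*(x1 - x0**2), 1 - x0]
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jacobian(x):
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])

def f(x):
    # Scalar objective f = ||r||^2 -- the only thing l-bfgs-b sees.
    r = residuals(x)
    return r @ r

def grad(x):
    # Its gradient g = 2 J^T r.
    return 2.0 * jacobian(x).T @ residuals(x)

x0 = np.array([-1.2, 1.0])
x_lsq, ier = leastsq(residuals, x0, Dfun=jacobian)  # structure-aware
res = minimize(f, x0, jac=grad, method="L-BFGS-B")  # structure-blind
```

      Both runs end up near the minimum (1, 1), but only the least-squares solver can exploit the structure of the Jacobian, which is part of what the "nfev" column compares.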

      The columns have the following meaning:

      • n – the number of independent variables.
      • m – the number of residuals.
      • solver – the algorithm used. The suffix “-s” means additional scaling of the variables to equalize their influence on the objective function; this has nothing to do with the scaling applied in Trust Region Reflective. An equivalent point of view is the use of an elliptical trust region. In the constrained case this scaling usually degrades performance, so I don’t show the results for it.
      • nfev – the number of function evaluations done by the algorithm.
      • g norm – first-order (gradient) optimality. In dogbox it is the infinity norm of the gradient with respect to the variables which aren’t on the boundary (optimality of the active variables is assured by the algorithm). For the other algorithms it is the infinity norm of the gradient scaled by the Coleman-Li matrix; read my earlier post about it.
      • value – the value of the objective function we are minimizing, the final sum of squares. It can serve as a rough measure of an algorithm’s adequacy (by comparison with “value” for the other algorithms).
      • active – the number of active constraints at the solution. Absolutely accurate for “dogbox”, somewhat arbitrary for the other algorithms (determined with a tolerance threshold).

      The most important columns are “nfev” and “g norm”.

      For all runs I used the tolerance parameters ftol = xtol = gtol = EPS**0.5, where EPS is the machine epsilon for double-precision floating-point numbers. As I said above, termination conditions vary from method to method, so it would be a tedious job to explain each parameter for each method.
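      Concretely, that shared tolerance is the square root of the double-precision machine epsilon. Here is a small sketch of the value and of a typical first-order termination test of the kind summarized in the “g norm” column (the variable names are mine, not any solver’s internals):

```python
import numpy as np

EPS = np.finfo(float).eps  # machine epsilon for float64, about 2.22e-16
gtol = EPS ** 0.5          # about 1.49e-8; also used for ftol and xtol

# Illustrative first-order optimality test: stop when the infinity norm
# of the gradient g = 2 J^T r drops below gtol.
r = np.array([1e-9, -2e-9])             # residuals near a minimum
J = np.array([[1.0, 0.0], [0.0, 1.0]])  # Jacobian at that point
g = 2.0 * J.T @ r
converged = np.linalg.norm(g, ord=np.inf) < gtol
assert converged
```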

      The benchmark problems were taken from “The MINPACK-2 test problem collection” and “Moré, J.J., Garbow, B.S. and Hillstrom, K.E., Testing Unconstrained Optimization Software”; constraints for the latter collection were added according to “Gay, D.M., A trust-region approach to linearly constrained optimization”. Here is the very helpful page I used. All problems were run with an analytical (user-supplied) Jacobian routine.

      The discussion of the results is below the table.

                                               Unbounded problems                                        
      problem                   n     m     solver          nfev  g norm     value      active   status  
      Beale                     3     2     dogbox          8     4.80e-11   1.02e-22   0        1       
                                            dogbox-s        9     7.69e-12   2.58e-24   0        1       
                                            trf             7     3.21e-11   4.50e-23   0        1       
                                            trf-s           9     4.07e-11   7.26e-23   0        1       
                                            leastsq         10    3.66e-15   5.92e-31   0        2       
                                            leastsq-s       9     6.66e-16   4.93e-32   0        2       
                                            l-bfgs-b        16    1.77e-07   1.95e-15   0        0       
      Biggs                     6     13    dogbox          140   1.84e-02   3.03e-01   0        2       
                                            dogbox-s        600   3.92e-03   8.80e-03   0        0       
                                            trf             65    2.18e-16   2.56e-31   0        1       
                                            trf-s           43    1.18e-14   3.60e-29   0        1       
                                            leastsq         74    7.24e-16   4.56e-31   0        2       
                                            leastsq-s       40    1.65e-15   1.53e-30   0        2       
                                            l-bfgs-b        42    1.23e-06   5.66e-03   0        0       
      Box3D                     3     10    dogbox          6     3.92e-10   1.14e-19   0        1       
                                            dogbox-s        6     3.92e-10   1.14e-19   0        1       
                                            trf             6     3.92e-10   1.14e-19   0        1       
                                            trf-s           6     3.92e-10   1.14e-19   0        1       
                                            leastsq         7     2.80e-16   4.62e-32   0        2       
                                            leastsq-s       7     2.80e-16   4.62e-32   0        2       
                                            l-bfgs-b        37    1.41e-07   3.42e-13   0        0       
      BrownAndDennis            4     20    dogbox          144   4.89e+00   8.58e+04   0        2       
                                            dogbox-s        153   3.46e+00   8.58e+04   0        2       
                                            trf             26    1.38e+00   8.58e+04   0        2       
                                            trf-s           275   4.28e+00   8.58e+04   0        2       
                                            leastsq         26    1.15e+00   8.58e+04   0        1       
                                            leastsq-s       254   3.63e+00   8.58e+04   0        1       
                                            l-bfgs-b        17    9.66e-01   8.58e+04   0        0       
      BrownBadlyScaled          2     3     dogbox          26    0.00e+00   0.00e+00   0        1       
                                            dogbox-s        29    0.00e+00   0.00e+00   0        1       
                                            trf             23    0.00e+00   0.00e+00   0        1       
                                            trf-s           23    0.00e+00   0.00e+00   0        1       
                                            leastsq         17    0.00e+00   0.00e+00   0        2       
                                            leastsq-s       16    0.00e+00   0.00e+00   0        2       
                                            l-bfgs-b        25    2.14e-02   2.06e-15   0        0       
      ChebyshevQuadrature10     10    10    dogbox          83    3.53e-05   6.50e-03   0        2       
                                            dogbox-s        72    1.70e-05   6.50e-03   0        2       
                                            trf             21    6.50e-07   6.50e-03   0        2       
                                            trf-s           21    5.93e-06   6.50e-03   0        2       
                                            leastsq         18    2.78e-06   6.50e-03   0        1       
                                            leastsq-s       25    3.22e-06   6.50e-03   0        1       
                                            l-bfgs-b        29    1.48e-05   6.50e-03   0        0       
      ChebyshevQuadrature11     11    11    dogbox          154   2.75e-05   2.80e-03   0        2       
                                            dogbox-s        196   5.01e-05   2.80e-03   0        2       
                                            trf             37    7.25e-06   2.80e-03   0        2       
                                            trf-s           44    7.85e-06   2.80e-03   0        2       
                                            leastsq         45    4.99e-06   2.80e-03   0        1       
                                            leastsq-s       47    7.70e-06   2.80e-03   0        1       
                                            l-bfgs-b        32    4.79e-04   2.80e-03   0        0       
      ChebyshevQuadrature7      7     7     dogbox          8     1.17e-12   4.82e-25   0        1       
                                            dogbox-s        10    7.22e-15   1.62e-29   0        1       
                                            trf             8     3.10e-15   2.60e-30   0        1       
                                            trf-s           9     1.04e-08   3.76e-17   0        1       
                                            leastsq         9     8.43e-16   7.65e-32   0        2       
                                            leastsq-s       9     1.37e-15   1.96e-31   0        2       
                                            l-bfgs-b        18    1.16e-05   7.14e-11   0        0       
      ChebyshevQuadrature8      8     8     dogbox          20    4.12e-06   3.52e-03   0        2       
                                            dogbox-s        56    5.73e-06   3.52e-03   0        2       
                                            trf             33    5.40e-06   3.52e-03   0        2       
                                            trf-s           39    9.26e-06   3.52e-03   0        2       
                                            leastsq         32    2.71e-06   3.52e-03   0        1       
                                            leastsq-s       39    7.99e-06   3.52e-03   0        1       
                                            l-bfgs-b        27    5.13e-06   3.52e-03   0        0       
      ChebyshevQuadrature9      9     9     dogbox          14    3.55e-15   9.00e-30   0        1       
                                            dogbox-s        11    6.02e-13   2.72e-25   0        1       
                                            trf             12    6.46e-13   3.15e-25   0        1       
                                            trf-s           9     2.22e-10   6.38e-20   0        1       
                                            leastsq         13    8.47e-16   7.64e-32   0        2       
                                            leastsq-s       12    5.95e-16   5.85e-32   0        2       
                                            l-bfgs-b        27    2.07e-05   2.53e-10   0        0       
      CoatingThickness          134   252   dogbox          7     2.33e-05   5.05e-01   0        2       
                                            dogbox-s        7     2.33e-05   5.05e-01   0        2       
                                            trf             7     2.33e-05   5.05e-01   0        2       
                                            trf-s           7     2.33e-05   5.05e-01   0        2       
                                            leastsq         7     2.33e-05   5.05e-01   0        1       
                                            leastsq-s       7     2.33e-05   5.05e-01   0        1       
                                            l-bfgs-b        281   5.54e-04   5.11e-01   0        0       
      EnzymeReaction            4     11    dogbox          23    1.32e-07   3.08e-04   0        2       
                                            dogbox-s        21    1.36e-07   3.08e-04   0        2       
                                            trf             20    1.30e-07   3.08e-04   0        2       
                                            trf-s           24    1.17e-07   3.08e-04   0        2       
                                            leastsq         23    7.13e-08   3.08e-04   0        1       
                                            leastsq-s       18    7.96e-08   3.08e-04   0        1       
                                            l-bfgs-b        30    2.53e-06   3.08e-04   0        0       
      ExponentialFitting        5     33    dogbox          10    1.56e-10   5.46e-05   0        1       
                                            dogbox-s        10    1.56e-10   5.46e-05   0        1       
                                            trf             19    1.29e-08   5.46e-05   0        1       
                                            trf-s           20    3.28e-08   5.46e-05   0        2       
                                            leastsq         20    3.23e-08   5.46e-05   0        1       
                                            leastsq-s       18    1.94e-08   5.46e-05   0        1       
                                            l-bfgs-b        44    9.98e-05   7.68e-05   0        0       
      ExtendedPowellSingular    4     4     dogbox          13    2.33e-09   5.72e-13   0        1       
                                            dogbox-s        13    2.33e-09   5.72e-13   0        1       
                                            trf             13    2.33e-09   5.72e-13   0        1       
                                            trf-s           13    2.33e-09   5.72e-13   0        1       
                                            leastsq         37    4.93e-31   7.22e-42   0        4       
                                            leastsq-s       37    4.93e-31   7.22e-42   0        4       
                                            l-bfgs-b        27    1.77e-04   3.70e-08   0        0       
      FreudensteinAndRoth       2     2     dogbox          6     0.00e+00   0.00e+00   0        1       
                                            dogbox-s        9     1.84e-11   1.95e-25   0        1       
                                            trf             6     1.57e-10   1.41e-23   0        1       
                                            trf-s           9     8.41e-11   4.07e-24   0        1       
                                            leastsq         8     1.78e-14   3.16e-30   0        2       
                                            leastsq-s       10    0.00e+00   0.00e+00   0        2       
                                            l-bfgs-b        15    5.29e-06   1.54e-13   0        0       
      GaussianFittingI          11    65    dogbox          14    1.27e-07   4.01e-02   0        2       
                                            dogbox-s        15    3.25e-07   4.01e-02   0        2       
                                            trf             13    1.75e-07   4.01e-02   0        2       
                                            trf-s           16    1.93e-07   4.01e-02   0        2       
                                            leastsq         13    5.89e-07   4.01e-02   0        1       
                                            leastsq-s       16    1.77e-06   4.01e-02   0        1       
                                            l-bfgs-b        69    8.67e-05   4.01e-02   0        0       
      GaussianFittingII         3     15    dogbox          3     5.93e-13   1.13e-08   0        1       
                                            dogbox-s        3     5.93e-13   1.13e-08   0        1       
                                            trf             3     5.93e-13   1.13e-08   0        1       
                                            trf-s           3     5.93e-13   1.13e-08   0        1       
                                            leastsq         4     1.25e-16   1.13e-08   0        2       
                                            leastsq-s       4     1.25e-16   1.13e-08   0        2       
                                            l-bfgs-b        4     5.81e-06   1.18e-08   0        0       
      GulfRnD                   3     100   dogbox          20    9.12e-09   1.83e-18   0        1       
                                            dogbox-s        22    1.00e-15   5.87e-31   0        1       
                                            trf             16    1.00e-15   5.87e-31   0        1       
                                            trf-s           25    1.26e-08   3.51e-18   0        1       
                                            leastsq         16    1.61e-15   7.12e-31   0        2       
                                            leastsq-s       23    1.61e-15   7.12e-31   0        2       
                                            l-bfgs-b        60    2.51e-06   1.25e-12   0        0       
      HelicalValley             3     3     dogbox          9     8.37e-12   3.13e-25   0        1       
                                            dogbox-s        19    5.81e-09   3.94e-19   0        1       
                                            trf             13    1.68e-13   1.16e-28   0        1       
                                            trf-s           13    1.78e-11   1.26e-24   0        1       
                                            leastsq         16    2.50e-29   2.46e-60   0        2       
                                            leastsq-s       11    1.58e-15   9.87e-33   0        2       
                                            l-bfgs-b        32    6.79e-07   3.28e-15   0        0       
      JenrichAndSampson10       2     10    dogbox          22    7.17e-02   1.24e+02   0        2       
                                            dogbox-s        21    5.51e-02   1.24e+02   0        2       
                                            trf             20    6.21e-03   1.24e+02   0        2       
                                            trf-s           20    5.86e-02   1.24e+02   0        2       
                                            leastsq         20    2.76e-04   1.24e+02   0        1       
                                            leastsq-s       21    2.90e-02   1.24e+02   0        1       
                                            l-bfgs-b        63    1.86e+03   nan        0        2       
      PenaltyI                  10    11    dogbox          35    4.90e-09   7.09e-05   0        1       
                                            dogbox-s        25    2.89e-09   7.09e-05   0        1       
                                            trf             38    3.98e-08   7.09e-05   0        2       
                                            trf-s           69    2.93e-08   7.09e-05   0        2       
                                            leastsq         26    7.54e-08   7.09e-05   0        1       
                                            leastsq-s       79    1.67e-08   7.09e-05   0        1       
                                            l-bfgs-b        20    8.58e-06   7.45e-05   0        0       
      PenaltyII10               10    20    dogbox          71    8.64e-07   2.91e-04   0        2       
                                            dogbox-s        33    4.15e-06   2.91e-04   0        2       
                                            trf             50    2.01e-06   2.91e-04   0        2       
                                            trf-s           32    4.98e-07   2.91e-04   0        2       
                                            leastsq         47    2.77e-07   2.91e-04   0        1       
                                            leastsq-s       58    6.64e-08   2.91e-04   0        1       
                                            l-bfgs-b        14    2.31e-06   2.91e-04   0        0       
      PenaltyII4                4     8     dogbox          27    1.64e-07   9.31e-06   0        2       
                                            dogbox-s        29    2.96e-07   9.31e-06   0        2       
                                            trf             24    3.42e-07   9.31e-06   0        2       
                                            trf-s           85    8.46e-08   9.31e-06   0        2       
                                            leastsq         70    7.35e-08   9.31e-06   0        1       
                                            leastsq-s       111   2.74e-08   9.31e-06   0        1       
                                            l-bfgs-b        19    1.16e-06   9.61e-06   0        0       
      PowellBadlyScaled         2     2     dogbox          43    4.89e-09   2.89e-27   0        1       
                                            dogbox-s        67    2.02e-11   9.86e-32   0        1       
                                            trf             43    4.90e-09   2.90e-27   0        1       
                                            trf-s           19    0.00e+00   0.00e+00   0        1       
                                            leastsq         72    1.01e-11   6.16e-32   0        2       
                                            leastsq-s       19    1.01e-11   1.23e-32   0        2       
                                            l-bfgs-b        4     1.35e-01   1.35e-01   0        0       
      Rosenbrock                2     2     dogbox          20    0.00e+00   0.00e+00   0        1       
                                            dogbox-s        18    0.00e+00   0.00e+00   0        1       
                                            trf             18    0.00e+00   0.00e+00   0        1       
                                            trf-s           20    0.00e+00   0.00e+00   0        1       
                                            leastsq         15    0.00e+00   0.00e+00   0        4       
                                            leastsq-s       14    0.00e+00   0.00e+00   0        4       
                                            l-bfgs-b        47    1.89e-06   1.31e-14   0        0       
      ThermistorResistance      3     16    dogbox          300   5.91e+06   1.61e+02   0        0       
                                            dogbox-s        291   1.95e+00   8.79e+01   0        2       
                                            trf             262   5.32e-04   8.79e+01   0        2       
                                            trf-s           202   3.10e-04   8.79e+01   0        3       
                                            leastsq         279   7.49e-04   8.79e+01   0        2       
                                            leastsq-s       216   1.68e+01   8.79e+01   0        3       
                                            l-bfgs-b        633   3.16e+00   3.17e+04   0        0       
      Trigonometric             10    10    dogbox          10    1.54e-11   9.90e-22   0        1       
                                            dogbox-s        65    3.66e-07   2.80e-05   0        2       
                                            trf             26    1.42e-07   2.80e-05   0        2       
                                            trf-s           31    2.08e-08   2.80e-05   0        2       
                                            leastsq         25    3.84e-08   2.80e-05   0        1       
                                            leastsq-s       28    5.67e-08   2.80e-05   0        1       
                                            l-bfgs-b        28    1.63e-06   2.80e-05   0        0       
      Watson12                  12    31    dogbox          7     5.77e-10   4.72e-10   0        1       
                                            dogbox-s        12    1.50e-13   4.72e-10   0        1       
                                            trf             6     1.56e-10   5.98e-10   0        1       
                                            trf-s           8     2.19e-10   2.16e-09   0        1       
                                            leastsq         9     8.94e-14   4.72e-10   0        2       
                                            leastsq-s       9     3.63e-11   4.72e-10   0        3       
                                            l-bfgs-b        52    4.20e-05   1.35e-05   0        0       
      Watson20                  20    31    dogbox          11    4.90e-12   2.48e-20   0        1       
                                            dogbox-s        19    6.35e-10   2.60e-20   0        1       
                                            trf             7     1.36e-12   1.63e-19   0        1       
                                            trf-s           8     1.32e-08   7.10e-18   0        1       
                                            leastsq         17    8.65e-13   2.48e-20   0        2       
                                            leastsq-s       20    1.08e-11   2.49e-20   0        2       
                                            l-bfgs-b        69    2.66e-05   7.28e-06   0        0       
      Watson6                   6     31    dogbox          8     5.16e-08   2.29e-03   0        2       
                                            dogbox-s        10    5.62e-08   2.29e-03   0        2       
                                            trf             8     5.16e-08   2.29e-03   0        2       
                                            trf-s           11    1.11e-07   2.29e-03   0        2       
                                            leastsq         8     5.16e-08   2.29e-03   0        1       
                                            leastsq-s       8     5.16e-08   2.29e-03   0        1       
                                            l-bfgs-b        44    5.28e-06   2.29e-03   0        0       
      Watson9                   9     31    dogbox          7     3.21e-13   1.40e-06   0        1       
                                            dogbox-s        9     8.26e-12   1.40e-06   0        1       
                                            trf             6     2.91e-11   1.40e-06   0        1       
                                            trf-s           10    4.29e-12   1.40e-06   0        1       
                                            leastsq         7     1.47e-11   1.40e-06   0        1       
                                            leastsq-s       7     1.70e-11   1.40e-06   0        4       
                                            l-bfgs-b        40    1.27e-04   6.51e-05   0        0       
      Wood                      4     6     dogbox          73    0.00e+00   0.00e+00   0        1       
                                            dogbox-s        66    0.00e+00   0.00e+00   0        1       
                                            trf             74    0.00e+00   0.00e+00   0        1       
                                            trf-s           67    5.53e-12   9.26e-26   0        1       
                                            leastsq         69    0.00e+00   0.00e+00   0        2       
                                            leastsq-s       70    0.00e+00   0.00e+00   0        2       
                                            l-bfgs-b        20    6.39e-04   7.88e+00   0        0       
                                                Bounded problems                                         
      problem                   n     m     solver          nfev  g norm     value      active   status  
      Beale_B                   3     2     dogbox          4     0.00e+00   0.00e+00   0        1       
                                            trf             19    8.74e-09   2.11e-10   0        1       
                                            leastsqbound    12    1.17e-09   1.99e-20   1        2       
                                            l-bfgs-b        5     5.83e-15   4.44e-31   1        0       
      Biggs_B                   6     13    dogbox          32    1.61e-08   5.32e-04   2        2       
                                            trf             24    3.96e-10   5.32e-04   2        1       
                                            leastsqbound    63    5.95e-04   5.90e-04   2        1       
                                            l-bfgs-b        70    1.52e-03   5.79e-04   2        0       
      Box3D_B                   3     10    dogbox          8     1.45e-10   1.14e-04   1        1       
                                            trf             13    6.55e-09   1.14e-04   0        1       
                                            leastsqbound    18    1.75e-08   1.14e-04   0        1       
                                            l-bfgs-b        16    1.08e-03   1.18e-04   0        0       
      BrownAndDennis_B          4     20    dogbox          78    9.28e+00   8.89e+04   2        2       
                                            trf             41    4.98e+01   8.89e+04   0        2       
                                            leastsqbound    271   8.63e-01   8.89e+04   1        1       
                                            l-bfgs-b        18    5.66e-01   8.89e+04   2        0       
      BrownBadlyScaled_B        2     3     dogbox          33    1.11e-10   7.84e+02   1        1       
                                            trf             39    8.25e-05   7.84e+02   1        3       
                                            leastsqbound    300   3.14e+00   7.87e+02   0        5       
                                            l-bfgs-b        7     1.44e-11   7.84e+02   1        0       
      ChebyshevQuadrature10_B   10    10    dogbox          147   1.64e-05   6.50e-03   0        2       
                                            trf             40    1.03e-06   4.77e-03   0        2       
                                            leastsqbound    55    1.14e-06   4.77e-03   0        1       
                                            l-bfgs-b        50    1.78e-06   4.77e-03   0        0       
      ChebyshevQuadrature7_B    7     7     dogbox          15    2.75e-07   6.03e-04   2        2       
                                            trf             15    3.95e-08   6.03e-04   2        2       
                                            leastsqbound    33    9.61e-08   6.03e-04   0        1       
                                            l-bfgs-b        29    2.18e-05   6.03e-04   2        0       
      ChebyshevQuadrature8_B    8     8     dogbox          81    5.34e-06   3.59e-03   1        2       
                                            trf             127   1.12e-06   3.59e-03   0        2       
                                            leastsqbound    900   1.33e-06   3.59e-03   0        5       
                                            l-bfgs-b        46    2.92e-06   3.59e-03   1        0       
      ExtendedPowellSingular_B  4     4     dogbox          20    2.42e-07   1.88e-04   1        2       
                                            trf             16    7.36e-09   1.88e-04   1        1       
                                            leastsqbound    23    5.59e-08   1.88e-04   1        1       
                                            l-bfgs-b        29    6.06e-05   1.88e-04   1        0       
      GaussianFittingII_B       3     15    dogbox          3     5.93e-13   1.13e-08   0        1       
                                            trf             5     2.64e-10   1.13e-08   0        1       
                                            leastsqbound    12    1.43e-15   1.13e-08   0        1       
                                            l-bfgs-b        5     7.93e-09   1.84e-08   0        0       
      GulfRnD_B                 3     100   dogbox          10    7.29e-05   5.29e+00   2        2       
                                            trf             9     9.03e-07   5.29e+00   1        2       
                                            leastsqbound    22    4.71e-05   5.29e+00   0        1       
                                            l-bfgs-b        29    4.11e-01   6.49e+00   0        0       
      HelicalValley_B           3     3     dogbox          9     2.69e-05   9.90e-01   1        2       
                                            trf             14    6.10e-05   9.90e-01   1        2       
                                            leastsqbound    125   4.24e-03   9.90e-01   1        1       
                                            l-bfgs-b        17    2.63e-05   9.90e-01   1        0       
      PenaltyI_B                10    11    dogbox          16    1.00e-05   7.56e+00   3        2       
                                            trf             17    8.72e-04   7.56e+00   3        2       
                                            leastsqbound    328   2.56e-06   7.56e+00   3        1       
                                            l-bfgs-b        5     8.56e-04   7.56e+00   3        0       
      PenaltyII10_B             10    20    dogbox          30    9.20e-07   2.91e-04   2        2       
                                            trf             304   5.20e-06   2.91e-04   1        2       
                                            leastsqbound    297   2.18e-07   2.91e-04   2        1       
                                            l-bfgs-b        23    2.59e-04   2.92e-04   0        0       
      PenaltyII4_B              4     8     dogbox          29    1.40e-12   9.35e-06   2        1       
                                            trf             193   2.91e-09   9.35e-06   0        1       
                                            leastsqbound    78    1.39e-08   9.35e-06   1        1       
                                            l-bfgs-b        14    3.07e-05   9.50e-06   0        0       
      PowellBadlyScaled_B       2     2     dogbox          38    4.07e-12   1.51e-10   1        1       
                                            trf             100   3.02e-11   2.07e-10   0        1       
                                            leastsqbound    220   1.82e-06   1.51e-10   1        1       
                                            l-bfgs-b        5     1.08e+00   1.35e-01   0        0       
      Rosenbrock_B_0            2     2     dogbox          17    0.00e+00   0.00e+00   0        1       
                                            trf             23    0.00e+00   0.00e+00   1        1       
                                            leastsqbound    6     1.11e-13   1.97e-29   1        2       
                                            l-bfgs-b        47    1.89e-06   1.31e-14   1        0       
      Rosenbrock_B_1            2     2     dogbox          6     4.97e-09   5.04e-02   1        1       
                                            trf             9     1.59e-07   5.04e-02   1        2       
                                            leastsqbound    21    2.66e-07   5.04e-02   1        1       
                                            l-bfgs-b        23    1.68e-06   5.04e-02   1        0       
      Rosenbrock_B_2            2     2     dogbox          6     2.27e-06   4.94e+00   1        2       
                                            trf             9     4.36e-07   4.94e+00   1        2       
                                            leastsqbound    18    3.44e-05   4.94e+00   1        1       
                                            l-bfgs-b        19    1.04e-05   4.94e+00   1        0       
      Rosenbrock_B_3            2     2     dogbox          3     0.00e+00   2.50e+01   2        1       
                                            trf             8     3.27e-09   2.50e+01   2        1       
                                            leastsqbound    19    4.38e-09   2.50e+01   2        1       
                                            l-bfgs-b        3     0.00e+00   2.50e+01   2        0       
      Rosenbrock_B_4            2     2     dogbox          6     4.97e-09   5.04e-02   1        1       
                                            trf             14    1.06e-08   5.04e-02   1        1       
                                            leastsqbound    20    7.03e-06   5.04e-02   0        1       
                                            l-bfgs-b        21    2.73e-10   5.04e-02   1        0       
      Rosenbrock_B_5            2     2     dogbox          12    0.00e+00   2.50e-01   1        1       
                                            trf             20    5.05e-06   2.50e-01   1        2       
                                            leastsqbound    24    8.47e-08   2.50e-01   1        1       
                                            l-bfgs-b        27    0.00e+00   2.50e-01   1        0       
      Trigonometric_B           10    10    dogbox          117   3.31e-07   2.80e-05   0        2       
                                            trf             37    7.67e-07   2.80e-05   0        2       
                                            leastsqbound    64    3.99e-08   2.80e-05   0        1       
                                            l-bfgs-b        34    2.61e-04   4.22e-05   0        0       
      Watson12_B                12    31    dogbox          1200  1.30e-03   7.17e-02   5        0       
                                            trf             171   4.50e-05   7.16e-02   6        2       
                                            leastsqbound    13    1.02e+02   1.71e+01   12       1       
                                            l-bfgs-b        101   9.37e-02   7.28e-02   6        0       
      Watson9_B                 9     31    dogbox          5     1.79e+01   4.91e+00   3        2       
                                            trf             26    1.87e-09   3.74e-02   5        1       
                                            leastsqbound    462   2.89e-03   3.91e-02   2        1       
                                            l-bfgs-b        285   5.72e-05   3.74e-02   5        0       
      Wood_B                    4     6     dogbox          63    5.05e-07   1.56e+00   1        2       
                                            trf             29    2.12e-08   1.56e+00   1        2       
                                            leastsqbound    43    4.17e-05   1.56e+00   1        1       
                                            l-bfgs-b        20    8.38e-03   1.56e+00   1        0       

      For unbounded problems “leastsq” and “trf” are generally comparable, with “leastsq” being modestly better. This is easily explained, as the algorithms are almost equivalent, but “leastsq” uses a smarter strategy for decreasing the trust-region radius; this is perhaps worth investigating. My second algorithm, “dogbox”, is less robust and fails on some problems (most of them have a rank-deficient Jacobian). The general-purpose “l-bfgs-b” is generally not as good as the lsq algorithms, but can still be used with satisfactory results.

      In bounded problems “trf”, “dogbox” and “l-bfgs-b” all do reasonably well, with performance varying across problems. I see one big failure of “dogbox” in “Watson9_B”; all other problems were solved relatively successfully by all 3 methods. I suspect that the performance of “l-bfgs-b” might degrade in high-dimensional problems, but for small constrained problems this method proved to be very solid, so use it! (At least until I add new lsq methods to scipy.) And I just fixed leastsqbound, so now it works OK!
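As a concrete illustration, here is a small sketch of running a bounded Rosenbrock problem through scipy.optimize.least_squares (available from scipy 0.17, where the “trf” and “dogbox” methods landed); the starting point and box are my own choices, not the benchmark's exact setup:

```python
import numpy as np
from scipy.optimize import least_squares

# Rosenbrock residuals: cost = 0.5 * ||fun(x)||^2, minimum at x = (1, 1)
def fun(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

x0 = [-1.2, 1.0]
bounds = ([-2.0, -2.0], [2.0, 2.0])  # a box that still contains the minimum

res_trf = least_squares(fun, x0, bounds=bounds, method='trf')
res_dogbox = least_squares(fun, x0, bounds=bounds, method='dogbox')

print(res_trf.x, res_trf.cost)  # both methods should converge near (1, 1)
```

When the bound is active (as in most of the “_B” problems above), the two methods can return different active-constraint counts, which is what the table's last columns report.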

      by nickmayorov at June 19, 2015 11:29 PM

      Rafael Neto Henriques

      [RNH Post #5] Progress Report (DKI simulations merged and DKI real data fitted)

      I have made great progress in the last two weeks of coding!!! In particular, two major achievements were accomplished:

      1 - By solving the couple of problems mentioned in my previous post, the DKI simulations were finally merged into Dipy's master repository.

      2 - The first part of the reconstruction modules to process DKI in real brain data was finalized.

      The details of these two achievements and the project's next steps are described in the sections below.

      1) DKI simulations on Dipy's master repository 

      Just to give an idea of the work done, I am posting an example of how to use the DKI simulations that I developed. More details on the mathematical basis of these simulations can be found here.

      1.1) Import python modules and defining MRI parameters 

      First of all, we have to import the relevant modules. The main DKI simulation function, multi_tensor_dki, can be imported from Dipy's simulations sub-module dipy.sims.voxel.

      To perform the simulations, some parameters of the MRI acquisition have to be considered. For instance, the intensity of the MRI's diffusion-weighted signal depends on the diffusion weighting used on the MRI scanner (measured as the b-value) and on the directions in which the diffusion measurements are performed (given by the b-vectors). This information can be obtained, for example, from Dipy's real dataset samples.

      Dipy's dataset 'small_64D' was acquired with only one diffusion-weighting intensity. Since DKI requires data from more than one non-zero b-value, a second b-value is artificially added.

      To convert the artificially produced b-values and b-vectors to the format assumed by Dipy's functions, the function gradient_table has to be called.
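The two-shell construction can be sketched in plain NumPy (a hypothetical single-shell setup with made-up directions; with Dipy installed, the resulting arrays would then be passed to dipy.core.gradients.gradient_table):

```python
import numpy as np

# Hypothetical single-shell acquisition: 64 unit direction vectors at b = 1000
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(64, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(64, 1000.0)

# DKI needs at least two non-zero b-values: repeat the directions at b = 2000
bvals_2shell = np.concatenate([bvals, 2.0 * bvals])
bvecs_2shell = np.vstack([bvecs, bvecs])

# With Dipy installed, the arrays would then be wrapped for its functions:
# from dipy.core.gradients import gradient_table
# gtab = gradient_table(bvals_2shell, bvecs_2shell)
```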

      1.2) Defining biological parameters

      Having all the scanner parameters set, the biophysical parameters of the simulation have to be defined.

      Simulations are based on multi-compartmental models, which allow us to take into account brain's white matter heterogeneity. For example, to simulate two crossing fibers with two different media (representing intra and extra-cellular media), a total of four heterogeneous components are taken into account. The diffusion parameters of each compartment are defined below (the first two compartments correspond to the intra and extra cellular media for the first fiber population while the others correspond to the media of the second fiber population).

      The orientation of each fiber is saved in polar coordinates. To simulate crossing fibers at 70 degrees, the compartments of the first fiber are aligned with the x-axis, while the compartments of the second fiber lie in the x-z plane at an angular deviation of 70 degrees from the first.

      Finally, the volume fractions of the compartment are defined.

      1.3) Using DKI simulation main function

      Having defined the parameters for all tissue compartments, the elements of the diffusion tensor (dt), the elements of the kurtosis tensor (kt) and the DW signals simulated from the DKI model (signal_dki) can be obtained using the function multi_tensor_dki.

      As I mentioned in my previous post, these simulations are useful for testing the performance of DKI reconstruction codes that I am currently working on. In particular, when we apply the reconstruction modules to the signal_dki, the estimated diffusion and kurtosis tensors have to match the ground truth kt and dt produced here. 

      2) Progresses on the development of the DKI reconstruction module

      Finalizing the DKI reconstruction module is the milestone that I proposed to achieve before the mid-term evaluation. Basically, the work on this is on schedule!

      Since DKI is an extension of DTI, the classes of the DKI module were defined by inheriting from the classes defined in Dipy's DTI module (a nice post on class inheritance can be found here). Having established this inheritance, the DKI modules are compatible with all standard diffusion statistical measures previously defined in Dipy.

      I carried on with the development of the DKI module by implementing the estimation of the diffusion and kurtosis tensors from the DKI model. Two strategies were implemented: DKI's ordinary linear least squares (OLS) solution, which is a simple and computationally cheap approach, and DKI's weighted linear least squares (WLS) solution, which is considered one of the most robust estimation approaches in the recent DKI literature.
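As a loose illustration of the difference between the two estimators (a generic linear model in NumPy, not Dipy's actual DKI code — the design matrix and weights are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))             # made-up design matrix
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ beta_true                        # noiseless "measurements"

# OLS: minimize ||A b - y||^2
beta_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# WLS: minimize sum_i w_i * (A_i b - y_i)^2 via the weighted normal equations
w = rng.uniform(0.5, 2.0, size=30)       # made-up per-measurement weights
beta_wls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
```

With noiseless data both recover the same coefficients; with noisy data the weights let the more reliable measurements dominate the fit, which is why WLS is preferred in the DKI literature.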

      Currently, I am validating the DKI implementation using the nose testing modules. Both the OLS and WLS solutions seem to recover the ground truth diffusion and kurtosis tensors when applied to the diffusion signal produced by my DKI simulation modules. In addition, the DKI modules also produce the expected standard diffusion parameter images when applied to real data (see Figure 1).
      Figure 1. Comparison between real brain parameter maps of the diffusion fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) obtained from the DKI modules (upper panels) and the DTI module (lower panels).

      From the figure, we can see that the standard diffusion measures obtained from DKI are noisier than the DTI measurements. This is a well-known pitfall of DKI: since it involves fitting a larger number of parameters, DKI is more sensitive to noise than DTI. Nevertheless, diffusion measures from DKI were shown to have better accuracy (i.e., to be less sensitive to bias). Moreover, as I have mentioned in my previous posts, DKI allows the estimation of the standard kurtosis measures.

      3) Next Steps

      Before the mid-term evaluation, a first version of the DKI reconstruction module will be completed with the implementation of the standard kurtosis measures, such as the mean, axial and radial kurtosis computed from the already estimated kurtosis tensors. Details on the usage of the DKI reconstruction modules and the meaning of the standard kurtosis measures will be summarized in my next post.

      by Rafael Henriques ( at June 19, 2015 09:08 PM

      Ambar Mehrotra
      (ERAS Project)

      GSoC 2015: 3rd Biweekly Report

      I worked on several things during the past two weeks.

      • OpenMCT: Mission Control Technologies (MCT) brings information from many sources to the user through one consistent, intuitive interface. It is software developed by NASA which helps the user compose the information he/she needs. MCT has a collection of user objects that correspond to the things users are interested in, along with the capability of displaying the same thing in different ways for different purposes. Data can be added from multiple sources, updated, modified, and represented in multiple composable views. This project is very similar to the framework I have to develop for my project, although MCT has been developed in Java while PSF requires us to write code in Python.
      • Decision to use Jython: In order to utilize this project directly, we decided to try Jython (Python running on the JVM), which can combine both Java and Python. I was able to import MCT as a dependency in a Java project but ran into trouble while using Jython. After spending a lot of time setting things up on Jython, I decided it would be better to develop this completely in-house using PyQt.
      Working with PyQt:

      I spent the latter part of the past two weeks designing the interface in PyQt. These are the features that I implemented:
      • Add a new device source: A user can add a new device by entering its Tango server address
      • Tree View implementation: Implemented a tree view to categorize various data sources, collector devices and custom groups or branches. Some work is left in this section and I'll be focusing on it for the upcoming two weeks.
      • Real-time graph of data sources: A user can click on a data source to view its real-time graph.
      • Creating custom branches: A user can create custom branches. He will be presented with the list of available data sources from where he can select the data sources he wants to add to that specific branch.

      In the upcoming weeks I'll mainly be working on making the tree view more concrete and presenting more data inside it. Also, a major point of focus will be data aggregation and summary creation from the children of a branch.

      by Ambar Mehrotra ( at June 19, 2015 05:51 PM

      Stefan Richthofer

      GSoC status-update for 2015-06-19

      I finally completed the core GC routine that explores the native PyObject reference-connectivity graph and reproduces it on the Java side. Why mirror it on the Java side? Let me summarize the reasoning. Java performs mark-and-sweep GC on its Java objects, but there is no way to extend this to native objects. On the other hand, using CPython's reference-counting approach for native objects is not always feasible, because there are cases where a native object must keep its Java counterpart alive (JNI provides a mechanism for this), allowing it to participate in an untraceable reference cycle. So we go the other way round here and let the Java GC track a reproduction of the native reference-connectivity graph. Whenever we observe that it deletes a node, we can discard the underlying native object. Keeping the graph up to date is still a tricky task, which we will deal with in the second half of GSoC.

      The native reference graph is explored using the CPython-style traverseproc mechanism, which is also implemented by extensions that participate in GC. To mirror the graph on the Java side I distinguish 8 regular cases, displayed in the following sketch. These cases deal with representing the connection between the native side and the managed (JVM) side.

      In the sketch you can see that native objects have a so-called Java-GC-head assigned that keeps alive the native object (non-dashed arrow), but is only weakly reachable from it (dashed arrow). The two left-most cases deal with objects that only exist natively. The non-GC-case usually needs no Java-GC-head as it cannot cause reference cycles. Only in GIL-free mode we would still track it as a replacement for reference-counting. However GIL-free mode is currently a vague consideration and out of scope for this GSoC-project. Case 3 and 4 from left deal with objects where Jython has no corresponding type and JyNI uses a generic PyCPeer-object - a PyObject-subclass forwarding the magic methods to native side. PyCPeer in both variants serves also as a Java-GC-head. CStub-cases refer to situations where the native object needs a Java-object as backend. In these cases the Java-GC-head must not only keep alive other GC-heads, but also the Java-backend. Finally in mirror-mode both native and managed representations can be discarded independently from each other at any time, but for performance reasons we try to softly keep alive the counterparts for a while. On Java-side we can use a soft reference for this.
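The keep-alive pattern (solid vs. dashed arrows) can be loosely illustrated with Python's weakref module — an analogy only, with stand-in classes, not JyNI's actual C/Java implementation:

```python
import gc
import weakref

class NativeObject:
    """Stand-in for a native PyObject (illustrative only)."""
    def __init__(self):
        self.gc_head_ref = None          # dashed arrow: weak back-reference

class GCHead:
    """Stand-in for the Java-side GC-head."""
    def __init__(self, native):
        self.native = native             # solid arrow: keeps the native alive
        native.gc_head_ref = weakref.ref(self)

obj = NativeObject()
head = GCHead(obj)
assert obj.gc_head_ref() is head         # head reachable while the graph holds it

del head                                 # the tracked graph drops the head...
gc.collect()
assert obj.gc_head_ref() is None         # ...and the weak back-reference clears
```

In JyNI's terms, clearing of the weak back-reference corresponds to the moment the Java GC deletes a node and the underlying native object may be discarded.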

      PyList is a special case in several ways. It is a mutable type that can be modified by C macros at any time. Usually we move the java.util.List backend to the native side. For this it is replaced by JyList - a List implementation that is backed by native memory, thus allowing C macros to work on it. The following sketch illustrates how we deal with this case.

      It works roughly like mirror mode, but with the difference that the Jython PyList must keep alive its Java-GC-head. For the most compact solution we build the GC-head functionality into JyList.

      Today I finished the implementation of the regular cases, but testing and debugging still needs to be done. I can hopefully round this up for midterm evaluation and also include the PyList-case.

      by Stefan Richthofer ( at June 19, 2015 04:41 PM

      Jakob de Maeyer

      Towards an Add-on Framework

      Last time, we learned that most Scrapy extension hooks are controlled via dictionary-like settings variables. We allowed updating these settings from different places without having to worry about order by extending Scrapy’s priority-based settings system to dictionaries. The corresponding pull request is ready for final review by now and includes complete tests and documentation. Now that this is (almost) out of the way, how can we “[improve] both user and developer experience by implementing a simplified interface to managing Scrapy extensions”, as I promised in my initial blog post?

      The Concept of Add-ons

      Often, extension developers will provide their users with small manuals that show which settings they need to modify in which way. The idea behind add-ons is to provide developers with mechanisms allowing them to apply these basic settings themselves. The user, on the other hand, no longer needs to understand Scrapy’s internal structure. Instead, she only needs to “plug in” the add-on at a unified single entry point, possibly through a single line. If necessary, she can also configure the add-on at this entry point, e.g. to supply database credentials.

      Let us assume that we have a simple pipeline that saves items into a MySQL database. Currently, the user has to configure her settings file similar to this:

      # In settings.py
      ITEM_PIPELINES = {
          # Possible further pipelines here
          'myproject.pipelines.mysql_pipe': 0,
      }
      MYSQL_DB = 'some.server'
      MYSQL_USER = 'some_user'
      MYSQL_PASSWORD = 'some!password'

      This has several shortcomings:

      • the user is required to either edit settings blindly (Why ITEM_PIPELINES? What does the 0 mean?), or learn about Scrapy internals
      • all settings are exposed into the global settings namespace, creating potential for name clashes
      • the add-on developer has no option to check for dependencies and proper configuration

      With the add-on system, the user experience would be closer to this:

      # In scrapy.cfg
      database = some.server
      user = some_user
      password = some!password

      Note that:

      • Scrapy’s internals (ITEM_PIPELINES, 0) are hidden
      • Specifying a complete Python path (myproject.pipelines.mysql_pipe) is no longer necessary
      • The database credentials are no longer independent settings, but local to the add-on section

      Add-ons from a Developer’s Point of View

      With the add-on system, developers gain greater control over Scrapy’s configuration. All they have to do is write a (any!) Python object that implements Scrapy’s add-on interface. The interface could be provided in a Python module, a separate class, or alongside the extension class they wrote. The interface consists of two attributes and two callbacks:

      • NAME: String with human-readable add-on name
      • VERSION: tuple containing major/minor/patchlevel version of the add-on
      • update_settings()
      • check_configuration()

      While the two attributes can be used for dependency management (e.g. “My add-on needs add-on X > 1.1.0”), the two callbacks are where developers gain control over Scrapy’s settings, freeing them from relying on their users to properly follow their configuration manuals. In update_settings(), the add-on receives its (local) configuration from scrapy.cfg and the Scrapy Settings object. It can then internally configure the extensions and expose settings into the global namespace as it sees fit. The second callback, check_configuration(), is called after Scrapy’s crawler is fully initialised, and should be used for dependency checks and post-init tests.
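A hypothetical add-on object implementing the described four-member interface might look like the following sketch (plain dicts stand in for the scrapy.cfg section and the Settings object; none of these names are Scrapy's actual API yet):

```python
# Illustrative add-on for the MySQL pipeline example above.
class MySQLAddon:
    NAME = "MySQL pipeline"
    VERSION = (0, 1, 0)

    def update_settings(self, config, settings):
        # Wire the pipeline into Scrapy and expose the local credentials
        pipelines = settings.setdefault('ITEM_PIPELINES', {})
        pipelines['myproject.pipelines.mysql_pipe'] = 0
        settings['MYSQL_DB'] = config['database']
        settings['MYSQL_USER'] = config['user']
        settings['MYSQL_PASSWORD'] = config['password']

    def check_configuration(self, settings):
        # Post-init sanity check (dependency checks would also go here)
        if not settings.get('MYSQL_DB'):
            raise ValueError("MySQL add-on: no database configured")

settings = {}
addon = MySQLAddon()
addon.update_settings({'database': 'some.server',
                       'user': 'some_user',
                       'password': 'some!password'}, settings)
addon.check_configuration(settings)   # raises if misconfigured
```

The user never touches ITEM_PIPELINES; the add-on performs the wiring that the manual previously asked the user to do by hand.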

      Current State

      So far, I have redrafted an existing Scrapy Extension Proposal (SEP) with an outline of the add-on implementation. Code-wise, I have already written loaders that read add-on configuration from Scrapy’s config files, then search and initialise the add-on objects.

      Where exactly the add-on objects should live is still up for debate. Currently, I plan on writing a small helper class that holds the add-on objects and provides helpers to access their attributes. This ‘holder’ would then live on the crawler, which is Scrapy’s central entry point object for all extensions and which manages the crawling process.

      You can follow my progress in my Add-ons pull request.

      June 19, 2015 03:36 PM

      Vito Gentile
      (ERAS Project)

      Enhancement of Kinect integration in V-ERAS: Second report

      This is my second report about what I have done for my GSoC project. If you don’t know what it is about and want to find more information, please refer to this page and this blog post.

      The first problem I had to solve was to implement a valid height estimation algorithm. You can find the code of my implementation at this link, while for the algorithm itself, I have also recently discussed it in an answer on StackOverflow.

      After the height estimation (which we decided to implement as a Tango command), the next step was to update the Tango classes that I had added in the previous commits, in order to use the new Tango API. You can find more about this topic in this very useful documentation page on the “High level server API”. This update allowed me to reduce the number of lines of code, and it is now also much simpler to implement commands or events in the Tango server. However, I had some issues with data types and with starting the server (which had me stuck for a bit). Thankfully, I finally fixed them in my last commit, yesterday evening.

      I have also worked on the documentation. In particular, I have updated the Software Architecture Document (SAD) for the Body Tracker application, by adding the CLI section and updating the GUI part with some new features introduced together with the Python-based interface.

      GUI for managing multiple Kinects

      I have also removed a redundant document, named “Execution of body tracker”, that was about how to execute the old tracker (which was written in C# and is still available in the repository, but basically to be deprecated).

      For more information about my project and the other ones supported by the Italian Mars Society and the Python Software Foundation, refer to the GSoC2015 page of the ERAS website.

      by Vito Gentile at June 19, 2015 11:19 AM

      Julio Ernesto Villalon Reina

      OHBM 2015 Hackathon

      Hi all, 

      It has been a busy week. The Organization of Human Brain Mapping (OHBM) conference in Hawaii just finished today. I had the chance to meet with my mentors in person and to get help from them directly. We participated in the Hackathon that took place two days before the conference. We had the chance to work on the code and set up goals for the midterm. I also had the opportunity to talk about my GSoC project with other Hackathon participants and conference attendees. They all shared ideas with me and gave me good advice. I will be flying back home this weekend and will write another post with a detailed description of what we worked on this week plus some preliminary results. This is a photo with my mentors and other contributors to the DIPY project (Diffusion Imaging in Python). Mahalo!

      by Julio Villalon ( at June 19, 2015 07:21 AM

      Abraham de Jesus Escalante Avalos

      Scipy and the first few GSoC weeks

      Hi all,

      We're about three (and a half) weeks into the GSoC and it's been one crazy ride so far. Being my first experience working in OpenSource projects and not being much of an expert in statistics was challenging at first, but I think I might be getting the hang of it now.

      First off, for those of you still wondering what I'm actually doing, here is an abridged version of the abstract from my proposal to the GSoC (or you can click here for the full proposal):

      "scipy.stats is one of the largest and most heavily used modules in Scipy. [...] it must be ensured that the quality of this module is up to par and [..] there are still some milestones to be reached. [...] Milestones include a number of enhancements and [...] maintenance issues; most of the scope is already outlined and described by the community in the form of open issues or proposed enhancements."

      So basically, the bulk of my project consists of working on open issues for the StatisticsCleanup milestone within the statistics module of SciPy (a Python-based OpenSource library for scientific computing). I suppose this is an unusual approach for a GSoC project since it focuses on maintaining and streamlining an already stable module (in preparation for the release of SciPy 1.0), rather than adding a new module or a specific function within.

      The unusual approach allows me to make several small contributions and it gives me a wide (although not as deep) scope, rather than a narrow one. This is precisely the reason why I chose it. I feel like I can benefit (and contribute) a lot more this way, while I get acquainted with the OpenSource way and it also helps me to find new personal interests (win-win).

      However, there are also some nuances that may be uncommon. During the first few weeks I have discovered that my proposal did not account for the normal life-cycle of issues and PRs in scipy; my estimations were too hopeful.

      One of OpenSource's greatest strengths is the community getting involved in peer reviews; this allows a developer to "in the face of ambiguity, refuse the temptation to guess". If you didn't get that [spoiler alert] it was a reference to the zen of python (and if you're still reading this and your name is Hélène, I love you).

      The problem with this is that even smooth PRs can take much longer than one week to be merged because of the back and forth with feedback from the community and code updates (if it's a controversial topic, discussions can take months). Originally, I had planned to work on four or five open issues a week, have the PRs merged and then continue with the next four or five issues the following week, but this was too naive, so I have had to make some changes.

      I spent the last week compiling a list of next steps for pretty much all of the open issues and I am now trying to work on as many as I can at a time, thus minimising the impact of waiting periods between feedback cycles for each PR. I can already feel the snowball effect it is having on the project and on my motivation. I am learning a lot more (and in less time) than before which was the whole idea behind doing the Summer of Code.

      I will get back in touch soon. I feel like I have rambled on for too long, so I will stop and let you continue to be awesome and get on with your day.


      by Abraham Escalante ( at June 19, 2015 12:19 AM

      Aron Barreira Bordin

      Progress Report 1

      Hi!

      This week I developed some extra features for Kivy Designer that were not listed in my proposal. In the first weeks I made good progress on my proposal, so now I have some time to add some important features to the project.

      Check the video with some of these features working:

      Better Python Code Input - Jedi

      Something completely essential to any IDE is autocompletion. In this extra feature, I added some improvements to the Python Code Input. Now it's possible to change the theme, it shows the line numbers on the left, and, most importantly: Jedi integration.

      Jedi - an awesome autocompletion/static analysis library for Python

      Jedi is a static analysis tool for Python that can be used in IDEs/editors. Its historic focus is autocompletion, but it does static analysis now as well. Jedi is fast and very well tested. It understands Python on a deeper level than all other static analysis frameworks for Python.

      Next week

      Kivy Console

      The current version of Kivy Console (a terminal emulator in Kivy Designer) has some bugs: it's not compatible with Python 3 and it gets slower with long processes. I'm now evaluating what is best: fix some parts, or rewrite this widget.

      PRs

      I have 3 PRs waiting for review and some branches waiting for r+ from these PRs.

      That's it, thanks for reading :)

      Aron Bordin.

      June 19, 2015 12:00 AM

      June 18, 2015

      Richard Plangger

      It is ... alive!!!

      I have been quite busy the last weeks improving my solution. Most of the time I have dedicated to accumulation of values. But first I have to tell you about the ...


      I have measured speedup on my sample interpreter already, but not in the NumPy library. I have tested and hardened the edge cases and it is now possible to measure speedup using the NumPy library.

      Micro benchmark

      a = np.arange(1000.0)
      b = np.arange(1000.0)
      for i in range(10000):
          a = a + b

Invoking this program, one can measure a speedup of ~1.33x in program execution.

      Well, that is not quite the theoretical maximum of 2.00 (SSE4)

      I have then spent time to analyze the behavior using several profiling utilities. The included Python profiler did not do the job, because it is unaware of the underlying JIT. Thus I used the brand new vmprof and gprof.

      Sidenote: I used gprof only to verify, but if a statistical profiler is enough for your python program, go for vmprof! The overhead is minimal and it is possible to get live profiling feedback of your application! In combination with the jitviewer you can find out where your time is spent.

      It helped me a lot and the above loop spends about half of the time copying memory. So if the loop body is exchanged with ufunc.add(a, b, out=a) speedup increases up to 1.70-1.80.
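The copy overhead can be seen with plain NumPy as well; the snippet below is an illustrative sketch (unrelated to the PyPy JIT internals) contrasting the allocating form `a = a + b` with the in-place ufunc call:

```python
import numpy as np

a = np.arange(1000.0)
b = np.arange(1000.0)

# a = a + b allocates a fresh result array every iteration and copies
# the sum into it; np.add(a, b, out=a) writes straight into a's buffer.
for _ in range(10):
    np.add(a, b, out=a)

# Ten in-place additions of b leave a at arange * 11.
assert np.allclose(a, np.arange(1000.0) * 11)
```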

      That is better, but where is the rest of the time spent? Sadly the profiling says in the loop around the NumPy call. One of my mentors has suggested that there might be possibilities to improve the register allocation. And I'm currently evaluating a way to exchange and add some heuristics to improve the allocator.

      The loop itself is a magnitude faster than the scalar loop. So I'm quite happy that my idea really worked out.


      That is another big thing that I have been working on. I did not suggest this improvement in my GSoC proposal. Still I want to include it.
      Frequently used functions in scientific computing are sum, prod, any, all, max, min, ...

      Some of them consider the whole array, some of them bail out if an element has been found. There is potential to use SIMD instructions for these operations.

      Let's consider sum(...). The addition is commutative.

x + y = y + x    for all x, y ∈ ℝ

      Thus I have added a temporary vector register for summation, the accumulator. Instead of resolving the dependency using a horizontal add (supported by x86 SSE4) the loop partially sums the array. At every guard exit the accumulator is then horizontally added. Again the theoretical speedup is a factor 2 when using float64 on SSE4.
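The idea of keeping partial sums in an accumulator and collapsing them with one final horizontal add can be sketched in plain NumPy (an illustration of the scheme, not the actual JIT code):

```python
import numpy as np

def simd_style_sum(arr, lanes=2):
    """Sum `arr` the way a vectorized loop with an accumulator would."""
    n = len(arr)
    pad = (-n) % lanes                      # zero-pad to a multiple of `lanes`
    padded = np.concatenate([arr, np.zeros(pad)])
    # Each column plays the role of one lane of the vector register:
    # the loop body adds `lanes` elements at a time into the accumulator.
    accum = padded.reshape(-1, lanes).sum(axis=0)
    # Only at the loop exit is the accumulator collapsed by a horizontal add.
    return accum.sum()

assert simd_style_sum(np.arange(1000.0)) == np.arange(1000.0).sum()
```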

      I have not yet managed to compile a version that fully works on sum, but I'm quite close to it. Other functions like all or any are more complex. It is not so easy to recognize the reduction pattern if more than one operation is involved. I will add a pattern matcher for those instructions. Let's have a look at the following example (for all):

      d = int_and(a,b)
      e = int_and(c,d)

      And output the following vector statements (excluding guard compensation code)

      v = vec_int_and([a,c], accum)

      I did not expect...

      I have evaluated the possibility to vectorize arbitrary PyPy traces using the array module. This does not work for PyPy traces. It works in my test toy language (located here). Let's take a look at the following user program:

while i < 100:
    a[i] = b[i] + c[i] * 3.14
    i += 1

a, b and c are array objects of the Python array module. Their elements are homogeneous and adjacent in memory. The resulting trace could be transformed into a vectorized form.

Two current limitations make it impossible to vectorize the user program: 1) Python checks array boundaries (also negative) for each load/store operation, which adds an additional guard to the trace.
2) The boxing of integer values: the index variable is recreated at the end of the loop and incremented. This adds several non-pure operations to the trace, including the memory allocation of an integer box every iteration.

      I do not yet know how I will come around these problems, but for the second limitation I'm quite sure that the creation of the integer box can be moved to the guard exit.

        by Richard Plangger ( at June 18, 2015 05:04 PM

        Raghav R V

        GSoC 2015 - PSF / scikit-learn - Nested Cross Validation

        Nested cross validation is simply cross validation done for hyper-parameter tuning as well as for evaluation of the tuned model(s).

This is necessary to have an unbiased measure of the tuned estimators' performance score. To elaborate a bit, we will start with model selection.

        The basic process in tuning a model is trying out different models (parameter combinations) and choosing one which has the highest cross validated score. The cross validation makes the scores unbiased by avoiding optimistic evaluation of the scores which happens when the model is tested with the training data itself.

        Now the best model's cross validated score found using the entire dataset cannot be considered as an unbiased estimate of this tuned model's performance on unseen data.

        This is because the information about the dataset could have leaked into the model by the selection of the best hyper parameters. (i.e, the hyper parameters could have been optimized for this input dataset which was also used to obtain the cross validated score.)

        To avoid this, we could simply partition the initial dataset into a tuning set and a testing set, tune the model using this tuning set, and finally evaluate it on the testing set.

This would give us a fairly unbiased estimate of the tuned model as long as the tuning and testing sets are similar in their distribution. Moreover, partitioning the dataset and not utilizing the testing set for model building is a bit uneconomical, especially when the number of samples is small, considering that we get only a single evaluation of only one best model.

        Using cross validation to do this will be economical and efficient as it produces one best model and its unbiased score for each iteration. This makes it possible to check if there is any variance amongst the different models or their scores.

        In a nested CV approach to model selection, there are three main parts.

        The outer CV loop

        • The outer CV loop has n iterations (the number of iterations depends on the selected CV strategy)
        • For each iteration
          • The data is split into a tuning set and a testing set.
          • This tuning set is then passed on to the search algorithm which returns the best hyper parameter setting for that tuning set.
  • This model is then evaluated using the testing set to obtain a score which will be an unbiased estimate of the estimator's performance on unseen data.
        • The variance in each model's hyper param setting and its score is studied to get a better picture of the best models.

          The parameter search

          • The parameter search module is given the estimator, the range of hyper parameters and a tuning set.
          • The various possible combinations of the hyper parameters are generated.
          • For each combination of the hyper parameter - 
            • The estimator is constructed using this combination.
            • This estimator and the tuning set are passed to the inner CV loop to fit and evaluate the model for that particular combination.
    • If the inner CV loop has m iterations, there will be m such performance scores*.
            • The mean of these m performance scores give an average measure of the estimator for that particular combination of hyper params.
          • The combination which has the best performance measure amongst the various combinations is chosen as the best model.

          The Inner CV loop

          • The inner CV loop gets the unfitted estimator with the particular combination of the hyperparameters and the tuning set.
          • Similar to the outer CV loop there are multiple iterations in the inner CV loop, say m (number of ways in which the data is partitioned).
          • In each iteration -
            • The tuning set is split into a training set and a testing set.
            • This training set is used to fit the estimator.
            • The testing set is used to evaluate the estimator's score.
            • Since the testing set is not used while training it eliminates the possibility of a bias in the computed score owing to overfitting of the model to the training data.
  • m such scores are then returned to the search module which averages them to get an unbiased measure of the model's performance.
          So now to perform nested CV we require 2 cv iterators, say, the outer_cv and inner_cv. Since the cross validation iterators (as seen in the previous blog post) are data dependent, both these objects need to be constructed by passing in the characteristics of data such as the number of samples or the labels.

While it is easier to set the outer_cv object, since the entire data is passed to it, constructing the inner_cv object ranges from difficult to impossible depending on the CV strategy. This is because the characteristics of the data (tuning set) generated by the outer CV loop for each iteration are not easily known, which makes it impossible to construct the inner_cv object which requires this information.

          By making CV iterators data independent, we will no longer have this limitation.

          To illustrate nested CV let's consider a small example which despite the data dependency of iterators is possible owing to the selected CV strategy.

Let's again work with the iris dataset and SVC.
              import numpy as np
          from sklearn import datasets
from sklearn.cross_validation import StratifiedShuffleSplit, cross_val_score
          from sklearn.svm import SVC

          iris = datasets.load_iris()

n_samples = iris.data.shape[0]
X, y = iris.data, iris.target

          We will be using GridSearchCV for the parameter searching process. For the inner CV loop let us choose the default [Stratified*]KFolds CV strategy with the number of folds, k = 4.

          NOTE: KFolds has been deliberately chosen to illustrate nested CV without getting bitten by data dependency. The reason why this works is because the CV iterator is constructed implicitly by the cross_val_score function which is placed inside GridSearchCV. Thus it has knowledge of the tuning set and hence is able to supply the data dependent parameters required for the construction of the inner CV loop.

          This eliminates the need for us to explicitly construct the CV object by initializing it with the data characteristics.
from sklearn.grid_search import GridSearchCV

p_grid = {'C': [1, 10, 100, 1000],
          'gamma': [1e-1, 1e-2, 1e-3, 1e-4],
          'degree': [1, 2, 3]}
grid_search = GridSearchCV(SVC(random_state=0), param_grid=p_grid, cv=4)

          The outer CV loop can now be freely chosen since we have the dataset X, y

cv_outer = StratifiedShuffleSplit(y, n_iter=5,
                                  test_size=0.3, random_state=0)

Now let's nest this cross validated parameter search inside the outer CV loop and get the best parameters and the scores for 5 iterations.
>>> for training_set_indices_i, testing_set_indices_i in cv_outer:
...     training_set_i = X[training_set_indices_i], y[training_set_indices_i]
...     testing_set_i = X[testing_set_indices_i], y[testing_set_indices_i]
...*training_set_i)
...     print grid_search.best_params_, '\t\t', grid_search.score(*testing_set_i)
          {'C': 10, 'gamma': 0.1, 'degree': 1} 1.0
          {'C': 10, 'gamma': 0.1, 'degree': 1} 0.977777777778
          {'C': 100, 'gamma': 0.01, 'degree': 1} 0.977777777778
          {'C': 1000, 'gamma': 0.001, 'degree': 1} 0.977777777778
          {'C': 10, 'gamma': 0.1, 'degree': 1} 0.977777777778

          The scores alone can be obtained using the cross_val_score
              >>> cross_val_score(grid_search, X, y, cv=cv_outer)
          array([ 0.97777778, 0.95555556, 1. , 0.95555556, 1. ])

          To provide us more flexibility in choosing the inner CV strategy, we require the CV iterators to be data independent so they can be constructed without prior knowledge of the data.

          *Stratification is used in classification tasks to make the subsets (folds/strata) homogenous (i.e the percentage of samples per class remains same).
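The footnote can be illustrated with a toy stratified splitter in plain Python (illustration only; scikit-learn's StratifiedKFold is the real implementation, and these names are made up):

```python
from collections import defaultdict

def stratified_folds(labels, k):
    # Group sample indices by class, then deal each class's indices
    # round-robin over the k folds, so every fold keeps roughly the
    # same per-class proportions as the full dataset.
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = ['a'] * 40 + ['b'] * 20     # a 2:1 class ratio
folds = stratified_folds(labels, 4)
for fold in folds:                   # each fold preserves the 2:1 ratio
    classes = [labels[i] for i in fold]
    assert classes.count('a') == 10 and classes.count('b') == 5
```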

          My Progress

The month of May went really slowly, with me doing very little work. :/

          I've been working hard the past few weeks to catch up with all the pending issues/PRs both GSoC related or otherwise.

With respect to my GSoC work, so far I've finished the model_selection refactor (Goal 1) and have done a quick draft of the data independent CV iterator which I will refine, add tests and document before pushing.

          I hope to finish Goal 1 and Goal 2 (excluding reviews/revisions) before this Sunday and publish my next blog post by Monday/Tuesday, which will be on how the new data independent CV iterator makes nested CV easier with a few examples.


          by Raghav R V ( at June 18, 2015 12:31 AM

          June 17, 2015

          Palash Ahuja

          Inference in Dynamic Bayesian Networks

          Today, I will be talking about how the inference works in Dynamic Bayesian Networks.
          We could have applied the following methods,
1) Naive method: we could unroll the Bayesian network as much as we'd like and then apply the inference methods used for standard Bayesian networks. However, this method leads to exponentially large graphs, thereby increasing the time that inference takes.
(It was a surprise to me that we could do so, but it leads to a lot of problems.)
2) Forward and backward algorithm: we could apply this algorithm, but it applies exclusively to hidden Markov models (HMMs). It involves converting each of the nodes into state spaces, basically increasing the sizes, again leading to huge complexity.
          So as to reduce the complexity of inference, the methods are as follows:-
           We could compute a prior belief state by using some recursive estimation.
          Let's assume that

          Then we could propagate the state forward by the following algorithm.

These terms are multiplied recursively to carry out the filtering algorithm.
          Other Algorithms are the Frontier Algorithms and the Interface Algorithms which are more popular for the inference on a large scale.
          Apart from that, there could be certain expectation maximization algorithms which may help in computing the most probable path.
          This is what I am planning to implement in a couple of weeks.


          by palash ahuja ( at June 17, 2015 05:57 PM

          Yask Srivastava

          Gsoc Update

          Informal Intro

Phew! So my end-term exams just ended on 15th June.

          But I had long gaps in between my exams so I was contributing to MoinMoin side by side in my free time. Now since my exams have finally ended, I can devote myself 100% to this. \o/


As I mentioned in my introductory blog post, we decided to use Less. I started with the modernized theme, which used Stylus for CSS preprocessing. Since our new basic theme works on top of Bootstrap's Less files, we decided to redesign and port the modernized theme to work on top of it as well. I finished writing the new base theme for modernized and also rewrote the base template layout.html to use this theme.

          Show me the code!!

Code review in progress


This is a rewrite of the layout.html template to work on top of Bootstrap.
It also replaces the old Stylus theme with the new modernized Bootstrap theme. The theme is written in the theme.less and modernized.less files. The rest of the .less files are Bootstrap's default Less files.
To test it on your machine, compile MoinMoin/themes/modernized/static/css/theme.less with lessc: $ lessc theme.less > ../css/
So the files to look for are:
The rest of the Less files are from the Bootstrap source.
The rest of the CSS files are compiled from the Less files by lessc.
ChangeLog from patch #2

Fixed the alignment of the sub menu tabs and item view tabs
Added an active visual effect to the current tab view
Fixed a horizontal scroll bug
Fixed the padding inside the sub menu
Increased the font size of the wiki contents

Now the files to look for are:
This is how our previous modernized theme looks:


This is how I styled its menu and sub menu tabs:

New Roboto fonts for the wiki contents

          Complete new look:


The code review is still in progress, so no commits have been made yet. We also had a weekly meeting yesterday on the IRC channel #moin-dev, where we discussed our progress and future plans with all our mentors.

My development configuration

I like to see changes as I make them, so compiling the Less file to CSS after every minor change was a big no-no.

          I have added the complete project to Codekit and it automatically compiles and refreshes the page as soon as it detects any changes in the source code. :)

We use Mercurial as our version control system and this project is hosted on Bitbucket. I like to use Sublime Text as it's super light.

          Future Plans

          1. Make changes in file to automate the less file’s compilation for modernized theme.
          2. Write CSS rules for all the elements
          3. Design footer, user setting page,… etc.
4. Implement the changes mentioned by my mentors in the previous CR.

          June 17, 2015 03:15 PM

          Saket Choudhary

          Week 3 Update

This week was mostly spent debugging the current MixedLM code. It looks like convergence depends a lot on the optimiser being used.

          For example here are two notebooks:



The first one uses 'Nelder-Mead' as its optimiser while the latter relies on 'BFGS'.

`lme4` by default uses BOBYQA. However, switching the default optimiser to Nelder-Mead in MixedLM results in a lot of difference from the expected results (following lme4 results).

          There is an existing issue tracking this[1] and an existing PR to Kerby's branch[2]

          This Week, I plan to profile and probably finish off the optimisation work. My goals have been well summarised by Kerby in [1]



          by Saket Choudhary ( at June 17, 2015 07:29 AM

          June 16, 2015

          Chau Dang Nguyen
          (Core Python)

          Week 3

So far, I have my Roundup REST interface live and working. During next week, I will have the documentation published, so people can start giving feedback on it.

          This week:
          Roundup can perform GET, POST, PUT, DELETE and return the data
          Errors and Exception handling
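The general shape of such a handler, mapping HTTP verbs onto methods of a resource, can be sketched in plain Python (a toy illustration; none of these names come from Roundup's actual code):

```python
# A toy resource class: each HTTP verb maps onto a method of the same
# name, and dispatch() looks the handler up dynamically.
class TicketResource(object):
    def __init__(self):
        self.tickets = {1: {'title': 'first ticket'}}

    def GET(self, ticket_id):
        return self.tickets.get(ticket_id)

    def DELETE(self, ticket_id):
        return self.tickets.pop(ticket_id, None)

    def dispatch(self, method, ticket_id):
        handler = getattr(self, method, None)
        if handler is None:
            # Unknown verb -> the REST equivalent of 405 Method Not Allowed
            raise ValueError('405 Method Not Allowed: %s' % method)
        return handler(ticket_id)

resource = TicketResource()
assert resource.dispatch('GET', 1) == {'title': 'first ticket'}
```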

          Next week:
          Perform PATCH
          Simple client that uses REST from Roundup

          by Kinggreedy ( at June 16, 2015 09:25 PM

          Christof Angermueller

          GSoC week three

Further steps towards a more agile visualization…

          Last week, I revised my implementation to improve the visualization of complex graphs with many nodes. Specifically, I

          • added buttons to rearrange all nodes in a force layout,
          • implemented double-click events to release single nodes from a fixed position,
          • colored edges consistently with pydotprint.

          You can play around with three different examples here!

          The post GSoC week three appeared first on Christof Angermueller.

          by cangermueller at June 16, 2015 06:53 PM

          Brett Morris

          UT1, UTC and astropy

          What's the difference between UTC and UT1?

          Keeping time is a messy business. Depending on your perspective, you may want one of two (or more) time systems:
          1. As humans, we want a time system that ticks in seconds on the surface of the Earth contiguously and forever, both backwards and forwards in time.
          2. As astronomers, we want a time system that will place stationary astronomical objects (like distant quasars) at the same position in the sky with predictable periodicity.
          It turns out that reconciling these distinct systems is a difficult task because the Earth's rotation period is constantly changing due to tidal forces and changes in the Earth's moment of inertia. As a result, the number of seconds in a mean solar day or year changes with time in ways that are (at present) impossible to predict, since the variations depend on plate tectonics, large-scale weather patterns, earthquakes, and other stochastic events.

          The solution is to keep these two time systems independent.

          Coordinated Universal Time (UTC)

          The first time system is kept by atomic clocks which tick with no regard for the Earth's rotation. If that system was left uncorrected over many years, solar noon would no longer occur at noon on the atomic clocks, because 24 hours × 60 minutes × 60 seconds is not precisely the rotation period of the Earth. To make up for this, the atomic clock timekeeping system gets leap seconds added to it every so often to keep the atomic clock time as close as possible (within 0.9 seconds) to mean solar time. We call this Coordinated Universal Time (UTC).

          Universal Time 1 (UT1)

The second time system is kept very precisely by, for example, measuring the positions of distant quasars using Very Long Baseline Interferometry. This time is therefore defined by the rotation of the Earth, and varies with respect to UTC as the Earth's rotation period changes. The orientation of the Earth, which must be measured continuously to keep UT1 accurate, is logged by the International Earth Rotation and Reference Systems Service (IERS). They update a "bulletin" with the most recent measurements of the Earth's orientation, called Bulletin B, referred to within astropy as the IERS B table.

          Calculating UT1-UTC with astropy

The difference between UTC and UT1 is therefore modulated by (1) changes in the Earth's rotation period and (2) leap seconds introduced to try to keep the two conventions as close to each other as possible. Computing the difference between the two is simple with astropy, and reveals the strange history of our dynamic time system.

          The following code and plots are available in an iPython notebook for your forking pleasure.

          Using IERS B for backwards conversion

          from __future__ import print_function
          import numpy as np
          import datetime
          import matplotlib.pyplot as plt

          # Make the plots pretty
          import seaborn as sns

          # Generate a range of times from 1960 (before leap seconds)
          # to near the present day
dt_range = np.array([datetime.datetime(1960, 1, 1) +
                     i*datetime.timedelta(days=3.65) for
                     i in range(5600)])
          # Convert to astropy time object
          from astropy.time import Time
          time_range = Time(dt_range)

          # Calculate the difference between UTC and UT1 at those times,
          # allowing times "outside of the table"
          DUT1, success = time_range.get_delta_ut1_utc(return_status=True)

          # Compare input times to the times available in the table. See
from astropy.utils.iers import (TIME_BEFORE_IERS_RANGE, TIME_BEYOND_IERS_RANGE,
                                FROM_IERS_B)
extrapolated_beyond_table = success == TIME_BEYOND_IERS_RANGE
extrapolated_before_table = success == TIME_BEFORE_IERS_RANGE
in_table = success == FROM_IERS_B

          # Make a plot of the time difference
          fig, ax = plt.subplots(figsize=(10,8))
          ax.axhline(0, color='k', ls='--', lw=2)

ax.plot_date(dt_range[in_table], DUT1[in_table], '-',
             label='In IERS B table')
ax.plot_date(dt_range[extrapolated_beyond_table],
             DUT1[extrapolated_beyond_table], '-',
             label='Extrapolated forwards')
ax.plot_date(dt_range[extrapolated_before_table],
             DUT1[extrapolated_before_table], '-',
             label='Extrapolated backwards')

          ax.set(xlabel='Year', ylabel='UT1-UTC [seconds]')
          ax.legend(loc='lower left')

          There have been 25 leap seconds so far to date (as of summer 2015) since they were introduced in 1972.

          Using IERS A for forwards conversion

          # Download and cache the IERS A and B tables
          from astropy.utils.iers import IERS_A, IERS_A_URL, IERS_B, IERS_B_URL
from import download_file
iers_a_file = download_file(IERS_A_URL, cache=True)
iers_a =
iers_b_file = download_file(IERS_B_URL, cache=True)
iers_b =

# Generate a range of times from 1970
# to near the present day
dt_range = np.array([datetime.datetime(1970, 1, 1) +
                     i*datetime.timedelta(days=36.5) for
                     i in range(525)])
          # Convert to astropy time object
          from astropy.time import Time
          time_range = Time(dt_range)

          # Calculate the difference between UTC and UT1 at those times,
          # allowing times "outside of the table"
          DUT1_a, success_a = time_range.get_delta_ut1_utc(return_status=True, iers_table=iers_a)
          DUT1_b, success_b = time_range.get_delta_ut1_utc(return_status=True, iers_table=iers_b)

          # Compare input times to the times available in the table. See
from astropy.utils.iers import (TIME_BEFORE_IERS_RANGE, TIME_BEYOND_IERS_RANGE,
                                FROM_IERS_B)

in_table_b = success_b == FROM_IERS_B

          # Make a plot of the time difference
          fig, ax = plt.subplots(figsize=(10,8))
          ax.axhline(0, color='k', ls='--', lw=2)

ax.plot_date(dt_range, DUT1_a, '-',
             label='IERS A table')
ax.plot_date(dt_range[in_table_b], DUT1_b[in_table_b], 'r--',
             label='IERS B table')

          ax.set(xlabel='Year', ylabel='UT1-UTC [seconds]')
          ax.legend(loc='upper right')

          The IERS A table will know about near-future leap seconds and provide more accurate forward predictions in time.

          by Brett Morris ( at June 16, 2015 04:03 PM

          June 15, 2015

          Gregory Hunt

          GLMMs, Loglikelihood and Laplacian approximations

          Kerby did some of the heavy lifting helping out with the Laplacian approximation to a Gaussian integral. This forms the basis of our approximation of the log-likelihood. I wrote up some notes on what we're trying to accomplish here. Note 1

          by Gregory Hunt ( at June 15, 2015 05:26 AM

          June 14, 2015

          Mark Wronkiewicz

          Inner workings of the Maxwell filter

          C-Day plus 20

          This week, I finished a first draft of the Maxwell filtering project. Remember my goal here is to implement an open-source version of this filter that uses physics to separate brain signal from environmental garbage picked up by the MEG sensors. Now comes the fun part of this project: trying to add all the small tweaks required to precisely match the proprietary Maxwell filter, which I cannot access. I’m sure this will devolve into a tedious comparison between what I’ve implemented and the black box version, so here’s to hoping the proprietary code follows the original white papers.

          Most of the Maxwell filter work up until this point was focused on enabling the calculation of the multipolar moment space (comprised of the spherical harmonics), which is the foundation of this Maxwell filter. These multipolar moments are the basis set I’ve mentioned earlier that allow the brain signals to be divided into two parts: those coming from within a sphere and those originating from outside a slightly larger sphere (to see this graphically, cf. Fig 6 in Taulu, et al., 2005). In essence, representing the brain signals as a sum of multipolar moments permits the separation of brain signal from external noise sources like power lines, large moving objects, the Earth’s magnetic field, etc. My most recent code actually projects the brain signals onto this multipolar moment space (i.e., representing MEG data as a sum of these moments), and then reconstructs the signal of interest. These calculations are all standard linear algebra. From Taulu and Simola, 2006 (pg 1762): 

          Takeaway: The below equations show how Maxwell filtering is accomplished once the appropriate space has been calculated. We take brain signals recorded using MEG, represent them in a different space (the truncated mutlipolar moment space), and then reconstruct the MEG signal to apply the Maxwell filter and greatly reduce the presence of environmental noise.

ϕ represents the MEG recordings
S represents the multipolar moment space (each column vector is a spherical harmonic)
x represents the ideal weight of each multipolar moment (how much of each basis vector is present)
hat represents an estimate
in, out refer to the internal space (brain signals) and the external space (noise), respectively
pinv denotes the pseudoinverse

          In the ideal case, the signal we recorded can also be represented as a weighted combination of our multipolar moments: 
          ϕ = S * x
          The S matrix contains multipolar moments but only up to a certain complexity (or degree), so it has been truncated. See my first post (end of 3rd paragraph) about why we cut out the very complex signals. 

          Since we can break up the multipolar moments and their weights into an internal and external space (comprised of brain signal and noise), this is equivalent to the last equation:
ϕ = [S_in, S_out] * [x_in, x_out]^T

However, we're not in an ideal world, so we need to estimate these multipolar moment weights. x is the unknown, so isolate it by taking the pseudoinverse of S to solve for an estimate of the multipolar moment weights:

S_pinv * ϕ = S_pinv * S * x
S_pinv * ϕ = x_hat
x_hat = S_pinv * ϕ
or equivalently,
[x_in_hat, x_out_hat]^T = S_pinv * ϕ

          With the multipolar weight estimates in hand, we can finally reconstruct our original MEG recordings, which effectively applies the Maxwell filter. Again, since S_in, and S_out have been truncated, they only recreate signals up to a certain spatial complexity to cut out the noise.

ϕ_in_hat = S_in * x_in_hat
ϕ_out_hat = S_out * x_out_hat

          The above ϕ matrices are a cleaner version of the brain signal we started with and the world is now a much better place to live in.

          by Mark Wronkiewicz ( at June 14, 2015 09:36 PM

          June 12, 2015

          Chau Dang Nguyen
          (Core Python)

          Week 2

In the previous week, I implemented a GET prototype using the same method as the xmlrpc handler. With this implementation, I achieve better manipulation of information in comparison with the previous method.

The current challenge is making my module as clean and easy as possible, in order to upgrade it later. Moreover, taking full advantage of the Roundup design is necessary.

          So that's it for this week. My target is having a working REST by the end of this week.

          by Kinggreedy ( at June 12, 2015 12:44 AM

          June 11, 2015

          Ziye Fan

          [GSoC 2015 Week 2]

          In week 2 I implemented one optimization to the equilibrium optimizer. The PR is here.

          In this optimization, a "final optimization" procedure is added to the equilibrium optimization. Final optimizers are a list of global optimizers that are applied at the end of every equilibrium optimization pass. By making the right optimizers final ones, the number of optimization passes is expected to decrease.
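The pass structure this describes can be sketched like so (a toy model of the idea, not Theano's actual optimizer API; all names here are mine):

```python
def equilibrium_optimize(graph, local_opts, final_opts, max_passes=10):
    """Apply local optimizers, then run 'final' global optimizers once at
    the end of each pass; stop when a full pass changes nothing."""
    for _ in range(max_passes):
        changed = False
        for opt in local_opts:
            changed |= opt(graph)
        for opt in final_opts:          # final optimizers: end of every pass
            changed |= opt(graph)
        if not changed:
            break
    return graph

# Toy usage: the "graph" is a list; one local and one final rewrite.
def dedup(g):                           # local: drop one adjacent duplicate
    for i in range(len(g) - 1):
        if g[i] == g[i + 1]:
            del g[i + 1]
            return True
    return False

def sort_final(g):                      # final: globally reorder
    if g != sorted(g):
        g.sort()
        return True
    return False

g = [3, 3, 1, 2, 2]
equilibrium_optimize(g, [dedup], [sort_final])
```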

          Another change is to delete a node's function graph reference when pruning it from a function graph, so the merge optimizer can easily tell whether a node belongs to a graph. It will be useful in other optimizers too.

          In the next week, the next 2 optimizations in the to-do list are what I'm going to do:

          * Make local_dimshuffle_list lift through many elemwise at once
          * Speed up inplace elemwise optimizer

          Thanks. Any feedback is welcome!

          by t13m ( at June 11, 2015 08:54 AM

          June 10, 2015

          Brett Morris

          Anti-Racism, Pro-Astronomy: Week 2

          For background on what this is all about, check out my first post on Anti-Racism, Pro-Astronomy.

          This week, I've gotten the ball rolling on the Diversity Journal Club (DJC) blog idea, which I'm calling astroDJC. Before committing to a name, I briefly considered renaming Diversity Journal Club to a new name with less contention like "inclusion" or "equity" rather than "diversity". After a brief Twitter discussion about alternatives, I decided to stick with Diversity simply because many institutions have DJCs by that name, and its goals and purpose are widely recognized. If you have strong opinions about the name and what alternatives you'd prefer, I'd love to hear them on Twitter or in the comments.


          The first iteration of the blog is now live(!) with two posts contributed by Nell Byler (with Russell Deitrick) about the Genius Effect and the Matilda Effect. I encourage you to read these posts, first for content and then for format, and give feedback about how these posts work as a template for future submissions.

          Submitting a post to astroDJC

          I created a GitHub repository for suggested posts for astroDJC where anyone can contribute their resources and discussion questions for DJC presentations. For those unfamiliar with GitHub, there is a wiki page (still in development) with a tutorial on how to contribute a post to astroDJC on GitHub in the browser, without any command line nonsense.

          The workflow goes something like this:

          1. A contributor will take a template post file styled with markdown, and fill it with content. 
          2. Once they are happy with their draft post, they can submit it to astroDJC for review via a pull request, where we can collaborate on improvements and make corrections. 
          3. The finalized file will be merged to the repository where it will be stored, converted into HTML with pandoc, and posted to the astroDJC blog.

          Why GitHub?

          Using GitHub for contributed posts ensures a few things that are important to me:

          • The ability to post must be open to everyone. Pull requests can be submitted by anyone, removing the need for a moderator or gatekeeper – which has been a sticking point in some social media circles lately... This way, if an undergraduate or physics graduate student wants to contribute but wouldn't have the credentials to prove that they're an astronomer (though they may be an expert on DJC issues), the content of their post is all that will matter to be considered for a submission.
          • The collaborative dialogue on each post – from the moment it's submitted via pull request to the moment it's merged – is done in public, where those who are interested can contribute and those who aren't can ignore it. GitHub's notifications settings are flexible and easy to use, allowing you to get as much or as little notification about each pending update as you like.
          • Appropriate attribution is natural – you choose how you'd like to be referred to in the final blog post, and the history of your contribution is logged in GitHub for bragging rights/reference.
          • Writing posts in markdown enables contributors to have some control over the formatting of the post without prior knowledge of HTML (though of course, this is in exchange for prior knowledge of markdown, but I think this is a preferable exchange).
          If you would like to submit a post and have any difficulty, reach out to me and I'll help you and work to update the tutorial to make it more complete and intuitive.

          Make a submission and gimme feedback!

          I'd really like to hear what you think about the blog, the post template, and the example posts that are up. The best way to get good feedback would be to have you give it a test drive – if you've given a DJC talk, try putting it into astroDJC format and submit a pull request. Then be sure to make suggestions about how can we make this tool more effective and easy to use.

          by Brett Morris ( at June 10, 2015 06:05 PM

          Patricia Carroll

          Bounding Boxes & Benchmarking

          Last week, I played around with modeling simple sources with both Astropy and Sherpa. Sherpa is a modelling and fitting application developed for analysis of Chandra x-ray data. The recently released Sherpa for Python package offers a very useful comparison to existing Astropy methods.

          Astropy vs. Sherpa

          Here I've generated a mock source with random Poisson noise and fit it with a 2D Gaussian using both Astropy and Sherpa. Both use the Levenberg-Marquardt algorithm and least squares statistic and begin with the same initial guesses.

          In [5]:
          %matplotlib inline
          import numpy as np
          import warnings
          import sherpa.astro.ui as ui
          import matplotlib.pyplot as plt
          import seaborn as sns
          In [2]:
          import numpy as np
          import sherpa.astro.ui as ui
          from astropy.modeling.models import Gaussian2D
          from astropy import table
          from astropy.nddata.utils import add_array
          def gen_source_table(imshape, nsrc, stddev=3., mean_amp=10.):
              """Populate a source table with randomly placed 2D gaussians
              of constant width and variable amplitude.

              imshape : tuple
                  Shape of the image.
              nsrc : int
                  Number of sources.
              stddev : float, optional
                  Standard deviation in pixels.
              mean_amp : float, optional
                  Mean amplitude.
              """
              # Buffer the edge of the image
              buffer = np.ceil(stddev*10.)
              data = {}
              data['x_mean'] = np.around(np.random.rand(nsrc)*(imshape[1]-buffer))+buffer/2.
              data['y_mean'] = np.around(np.random.rand(nsrc)*(imshape[0]-buffer))+buffer/2.
              data['amplitude'] = np.abs(np.random.randn(nsrc)+mean_amp)
              data['x_stddev'] = np.ones(nsrc)*stddev
              data['y_stddev'] = np.ones(nsrc)*stddev
              data['theta'] = np.zeros(nsrc)
              return table.Table(data)
          def make_gaussian_sources(image, source_table):
              """A simplified version of `~photutils.datasets.make_gaussian_sources`.
              Populates an image with 2D gaussian sources.

              image : 2D ndarray
                  Image to populate.
              source_table : astropy.table.table.Table
                  Table of sources with model parameters.
              """
              y, x = np.indices(image.shape)
              for i, source in enumerate(source_table):
                  model = Gaussian2D(amplitude=source['amplitude'], x_mean=source['x_mean'],
                                     y_mean=source['y_mean'], x_stddev=source['x_stddev'],
                                     y_stddev=source['y_stddev'], theta=source['theta'])
                  image += model(x, y)
              return image
          def make_gaussian_sources_sherpa(image, source_table):
              """A simplified version of `~photutils.datasets.make_gaussian_sources`.
              Populates an image with 2D gaussian sources generated with the Sherpa python package.

              image : 2D ndarray
                  Image to populate.
              source_table : astropy.table.table.Table
                  Table of sources with model parameters.
              """
              for i, source in enumerate(source_table):
                  # g2 is the Sherpa gauss2d model instance set up earlier
                  g2.ellip = 0.
                  g2.fwhm = sigma2fwhm(source['x_stddev'])
                  mod = ui.get_model_image().y
                  image += mod
              return image
          def make_gaussian_sources_bb(image, source_table, width_factor=5):
              """A simplified version of `~photutils.datasets.make_gaussian_sources`.
              Populates an image with 2D gaussian sources.
              Uses a bounding box around each source to increase speed.

              image : 2D ndarray
                  Image to populate.
              source_table : astropy.table.table.Table
                  Table of sources with model parameters.
              width_factor : int
                  Multiple of the standard deviation within which to bound the source.
              """
              for i, source in enumerate(source_table):
                  dx, dy = np.ceil(width_factor*source['x_stddev']), np.ceil(width_factor*source['y_stddev'])
                  x, y = np.meshgrid(np.arange(-dx, dx)+source['x_mean'], np.arange(-dy, dy)+source['y_mean'])
                  model = Gaussian2D(amplitude=source['amplitude'], x_mean=source['x_mean'],
                                     y_mean=source['y_mean'], x_stddev=source['x_stddev'],
                                     y_stddev=source['y_stddev'], theta=source['theta'])
                  image = add_array(image, model(x, y), (source['y_mean'], source['x_mean']))
              return image
          sigma2fwhm = lambda x: 2.*np.sqrt(2.*np.log(2.))*x
          fwhm2sigma  = lambda x:x/(2.*np.sqrt(2.*np.log(2.)))
          In [3]:
          from astropy.io import fits
          from astropy.modeling import fitting,models
          import sherpa.astro.ui as ui
          from photutils.datasets import make_noise_image
          from benchmarking import *
          import logging
          logger = logging.getLogger("sherpa")
          x,y = np.meshgrid(range(npix),range(npix))
          data_model = models.Gaussian2D(amplitude=10.,x_mean=npix/2,y_mean=npix/2, \
                                  x_stddev = 5.,y_stddev =10., theta = np.pi/4.)
          data = data_model(x,y)+make_noise_image((npix,npix), type=u'poisson',mean=.5,stddev=.25)
          #this doesn't work so I'm writing to fits for reading by Sherpa
          hdu = fits.PrimaryHDU(data)
          hdulist = fits.HDUList([hdu])
          g2.ellip = .5
          g2.fwhm = sigma2fwhm(10.)
          g2.fwhm.min = 1
          g2.fwhm.max = 50.
          print 'Astropy:'
          fit_g = fitting.LevMarLSQFitter()
          amod = fit_g(data_model,x,y,data)(x,y)
          t1 = %timeit -o -r 3 -n 3 fit_g(data_model,x,y,data)
          print '%i iterations' % fit_g.fit_info['nfev']
          print '%.2f ms per model evaluation' % (['nfev']*1000.)
          print '\n'
          print 'Sherpa:'
          t1 = %timeit -o -r 3 -n 3

          smod = ui.get_model_image().y
          print '%i iterations' % f.nfev
          print '%.2f ms per model evaluation' % (*1000.)
          titles='Data', 'Astropy Fit','Sherpa Fit','Astropy Residual','Sherpa Residual','Astropy - Sherpa'
          for i,im in enumerate([data,amod,smod,data-amod,data-smod,amod-smod]):
              cbar = plt.colorbar()
              plt.setp(plt.getp(, 'yticklabels'), color='w')
              plt.setp(plt.getp(, 'yticklabels'), color='w')
              plt.setp(title, color='w')
          3 loops, best of 3: 30.1 ms per loop
          7 iterations
          4.31 ms per model evaluation
          3 loops, best of 3: 23.2 ms per loop
          8 iterations
          2.90 ms per model evaluation

          While Sherpa performs more iterations (likely due to a lower error tolerance threshold), there's no contest. Sherpa wins. So what makes it faster? I'm not sure yet. It's worth finding out but for now I want to implement a very simple improvement to speed up Astropy.

          Bounding Boxes

          When you have a large image of the sky containing many discrete sources (stars and galaxies) with lots of empty space in between, it makes little sense to evaluate each source model across the entire image. In the case of our 2D gaussian, 99.9999% of the flux is contained within a 5-sigma radius.

          What I've done is to simply evaluate each source only within these limits. Here I compare Sherpa, Astropy, and Astropy with bb's by timing how long it takes to model 10 sources as a function of image size.

          In [7]:
          N_sources = 10
          im_sides = np.arange(50,550,50)
          for i in im_sides:
              image = np.zeros((i,i), dtype=np.float64)
              hdu = fits.PrimaryHDU(image)
              hdulist = fits.HDUList([hdu])
              source_table = gen_source_table((i,i),N_sources,stddev=1)
              t=%timeit -r 30 -n 1 -o -q make_gaussian_sources_bb(image, source_table,\
              t=%timeit -r 30 -n 1 -o -q make_gaussian_sources(image, source_table)
              t=%timeit -r 30 -n 1 -o -q make_gaussian_sources_sherpa(image, source_table)
          In [8]:
          for j,tall in enumerate([t1all,t2all,t3all]):
              for i in range(len(im_sides)):
          t1,t2,t3 = mt*1000.
          In [11]:
              plt.errorbar(im_sides[:-1],t3[:-1], e_t3[:-1],fmt='r.-',label = 'Sherpa',lw=3,alpha=1)
              plt.errorbar(im_sides[:-1],t2[:-1], e_t2[:-1],fmt='y.-',label = 'Astropy',lw=3,alpha=1)
              plt.errorbar(im_sides[:-1],t1[:-1],e_t1[:-1],fmt='c.-',label = 'Astropy-BB',lw=3,alpha=1)
              plt.legend(frameon=False, loc='left')
              plt.xlabel('Image Pixels/Side')
              plt.ylabel('Average timing (ms)')

          Sherpa clearly excels over Astropy for any image size up to about 100,000 pixels. At that point, the bounding boxes really start to shine. This limit will of course differ depending on the model used, pixel scale, and source density; but given the same set of sources and pixel scale, the boxes don't change and so the time is independent of the total image size. The more sparsely populated your image is, the bigger the improvement you will see with bounding boxes.


          by Patti Carroll at June 10, 2015 01:27 AM

          June 09, 2015

          Shridhar Mishra
          (ERAS Project)

          Coding in full swing.

          OK, so all the installations are done. EUROPA-pso was a bit of a hassle to install, but the rest, like PyTango and pyEUROPA, went well.

          Since I faced a lot of problems installing EUROPA on a 64-bit Ubuntu 14.10 machine, I have decided to write a step-by-step installation procedure so that it can be reproduced if required.

          These steps have to be followed in this specific order for a successful installation, or it's almost inevitable that you will get some weird Java errors.


          • JDK -- sudo apt-get install openjdk-7-jdk
          • ANT -- sudo apt-get install ant
          • Python -- sudo apt-get install python
          • subversion -- sudo apt-get install subversion
          • wget -- sudo apt-get install wget
          • SWIG -- sudo apt-get install swig
          • libantlr3c (built from source, see below)
          • unzip -- sudo apt-get install unzip

          Now let us get the necessary packages to install libantlr3c.

          svn co plasma.ThirdParty

          Get Europa.

          cd ~/plasma.ThirdParty

          Install ANTLR-C
          First, unzip libantlr3c-3.1.3.tar.bz2.

          cd plasma.ThirdParty/libantlr3c-3.1.3
          ./configure --enable-64bit
          make
          sudo make install

          The above commands are for 64 bit machines.
          for 32 bit machines remove --enable-64bit flag.

          Installing EUROPA.
          mkdir ~/europa
          cd ~/europa
          unzip ~/tmp/
          export EUROPA_HOME=~/europa
          export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$EUROPA_HOME/lib

          Add the following lines to ~/.bashrc at the end.


          $EUROPA_HOME/bin/makeproject Light ~
          cp $EUROPA_HOME/examples/Light/*.nddl ~/Light
          cp $EUROPA_HOME/examples/Light/*.bsh ~/Light
          To check if the install was successful:

          cd ~/Light

          The GUI should appear for EUROPA.

          If all the steps are correctly followed, it should work.

          ANTLR-C installation
          Europa Installation.
          Quick start

          Apart from this, I have been able to successfully run the Rover example from EUROPA, which is to be modified according to the further needs of the Italian Mars Society.

            by Shridhar Mishra ( at June 09, 2015 08:51 PM

            Andrzej Grymkowski

            More about Plyer

            What is it?

            It provides an API through which, on many platforms, features like desktop notifications, dialogs or taking a picture can be used in the same way through one class. The implementation covers platforms like OS X, Windows, Linux and even mobile: Android and iOS.

            When do you need Plyer?

            When building a multi-platform app. For example, the code below:
            >>> from plyer import battery
            >>> battery.status
            {"isCharging": True, "percentage": 0.5}
            works on all of the platforms listed above and gives you the same result.


            For Linux just type in console
            pip install plyer
            If you don't have pip installed, please look at: how to install pip.

            For Android I recommend buildozer. It's a wrapper for the package python-for-android. Most of the examples use buildozer.

            For OS X you will need the Python to Objective-C bridge called pyobjus.

            For iOS you need pyobjus and kivy-ios.


            To demonstrate the implemented features, Plyer uses the Kivy framework. Kivy is a framework for making applications; it provides a window with widgets similar to Android or iOS widgets.
            In order to run the examples, install Kivy. How to install Kivy.

            To run an example, go to the folder examples and pick the feature you want to test, e.g. battery. In that folder (here, battery) open a terminal and for

            • desktop type:

            • android connect device to your pc and type in console:
            buildozer debug deploy run

            buildozer will compile (the debug parameter) Kivy, Python, Plyer and many other libraries and will build the Android app. Compilation takes about 5 minutes before the app is transferred (the deploy parameter) to your device and run :)

            Good luck testing the examples.
            Best regards.

            by thegrymek ( at June 09, 2015 03:43 PM

            Artem Sobolev

            Week 1 + 2: Takeoff

            The first two weeks have ended, and it's time for a weekly (ahem) report.

            The basic implementation outlined in the previous post was rewritten almost from scratch. Now there are 2 implementations of the cost function calculation: a fully vectorized one (which doesn't scale, but should work fast) and a semi-vectorized one (which loops through training samples, but with all other operations vectorized). Meanwhile I'm working on a large scale version. More on that below.

            Also, I wrote a simple benchmark that shows improved accuracy of 1NN with the learned distance, and compares 2 implementations.

            There are several issues to solve.

            The first and the major one is scalability. It takes $O(N^2 M^2)$ time to compute  NCA's gradient, which is waaay too much even for medium-size datasets. Some ideas I have in mind:

            1. Stochastic Gradient Descent. NCA's loss is a sum of each sample's contribution, so we do stochastic optimization on it reducing computational complexity down to $O(w N M^2)$ where $w$ is a number of iterations.
            2. There's a paper Fast NCA. I briefly skimmed through the paper, but my concern is that they look for $K$ nearest neighbors, which takes them $O(K N^2)$ time — that doesn't look like much of an improvement (though it certainly is if you want to project some high-dimensional data to a lower-dimensional space).
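For reference, the NCA objective being discussed above can be sketched in a few lines of NumPy. This is a naive dense evaluation with exactly the quadratic cost noted earlier; the function name and toy data are mine, not scikit-learn's API.

```python
import numpy as np

def nca_objective(A, X, y):
    """Sketch of the NCA objective (expected number of correctly classified
    points) from Goldberger et al. A: (d', d) transform, X: (N, d), y: (N,)."""
    Z = X @ A.T                                          # project the samples
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    np.fill_diagonal(D, np.inf)                          # p_ii = 0 by definition
    P = np.exp(-D)
    P /= P.sum(axis=1, keepdims=True)                    # soft-neighbor probs p_ij
    return (P * (y[:, None] == y[None, :])).sum()

# Two well-separated classes: the objective approaches N = 4.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
val = nca_objective(np.eye(1), X, y)    # identity transform
```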
            Another thing which is not an issue, but still needs to be done, is choosing an optimization algorithm. For now there are 3 methods: gradient descent, gradient descent with AdaGrad, and scipy's scipy.optimize.minimize. I don't think it's a good idea to overwhelm a user with a variety of settings that make no particular difference in the outcome, so we should get rid of features that are known to be useless.

            Also, unit tests and documentation are planned, as well.

            by B@rmaley.exe ( at June 09, 2015 02:52 PM

            Saket Choudhary

            Week 2 Update

            This week I brushed up a little bit of theory on heteroscedasticity for linear mixed models. As per my timeline, I have another week to wrap it up.

            On a slight digression, I moved Josef's `compare_lr_test` method (with minor changes) to allow a likelihood ratio test for fixed & random effects.

            A notebook comparing R's equivalent is here:

            The p-values provided by R's 'anova' and statsmodels' `compare_lr_test` seem to mostly agree. The problem, however, arises because the ML estimates in statsmodels do not seem to converge (even after playing around with tolerances and max iterations; this will require another look).
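The statistic behind such a comparison can be sketched generically as follows. This is just the textbook likelihood-ratio test, not the statsmodels `compare_lr_test` method itself; the function name and numbers are illustrative.

```python
from scipy.stats import chi2

def lr_test(llf_restricted, llf_full, df_diff):
    """Likelihood-ratio test: twice the log-likelihood gap, referred to a
    chi-square distribution with df_diff degrees of freedom."""
    stat = 2.0 * (llf_full - llf_restricted)
    pvalue = chi2.sf(stat, df_diff)   # chi-square survival function
    return stat, pvalue

# Toy numbers: full model improves the log-likelihood by 10 with 2 extra params.
stat, p = lr_test(-110.0, -100.0, df_diff=2)
```

(For random-effects parameters on the boundary of the parameter space, the chi-square reference distribution is only approximate, which is one reason mixed-model LR tests need care.)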

            Pull request reflecting the minor change is more of a WIP here:

            Suggestions and criticism are always welcome.

            Week 3 goals:

            - Finish up heteroscedasticity support
            - Add some solid unit tests for compare_lr_test

            by Saket Choudhary ( at June 09, 2015 11:38 AM

            Prakhar Joshi

            New releases at Plone, releases my sweat!!

            Hello everyone!! In this blog post I will share my experience of how things can become terrifying when a new version of some product is released in Plone. The main problem occurs when we have not pinned the products to specific versions in the buildouts. Wow!! That's a lot at one shot. Don't worry, we will understand each and every bit of it. Let's start..

            Plone uses buildouts to structure its code. There are buildout.cfg configuration files to set up Plone projects.

            What is buildout ?
            Buildout is a Python-based build system for creating, assembling and deploying applications from multiple parts, some of which may be non-Python-based. It lets you create a buildout configuration and reproduce the same software later.

            So Plone uses these buildouts for setup. I have also configured the buildout for my project, and things were going great until there was a new release of Plone after 4.3.6 (the previous Plone version).

            Here is the snippet of buildout.cfg :-

            What happens when a new version is released ?
            When a new version is released, the buildout tries to fetch the latest version of the various products unless we have pinned a product to a particular version. Here is the snippet of how to pin a particular product to a specific version. I used versions.cfg for that purpose, which extends base.cfg as you can see in the "[extends]" section of the above snap. Here is the snippet for the versions.cfg:
            Here we can see that I have specified the versions of some products; there are other products too, but as we have not pinned them, running "./bin/buildout" will install the latest version of those products.
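A versions.cfg of this kind looks roughly like the fragment below. Only the CMFPlone = 4.3.6 pin is taken from the post; the plone.app.widgets pin and its version number are illustrative placeholders.

```ini
[buildout]
extends = base.cfg
versions = versions

[versions]
Products.CMFPlone = 4.3.6
; Hypothetical pin for the other product discussed in this post;
; the exact version to use depends on your setup.
plone.app.widgets = 1.8.0
```

With the [versions] section in place, ./bin/buildout resolves each listed product to exactly that release instead of the newest one on PyPI.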

            What is the reason to pin the products? Why not just use the latest version?
            Yeah, it's good to keep the latest version of the code, but sometimes things depend on previous versions. In my case, the latest version of the product CMFPlone had been released, but I was working on Plone 4.3.6, and this caused a failure on travis. I had actually pinned CMFPlone to 4.3.6, but there was another product, plone.app.widgets, which had not been pinned in the buildout. That product calls CMFPlone, and because it was not pinned, it always pulled in the latest version of CMFPlone, while we need CMFPlone version 4.3.6. This created the test failure for me. Here is the snippet of the travis failure:
            So the main problem was how to resolve that issue. The error log makes no mention of plone.app.widgets; it directly says "there is a version conflict for CMFPlone". But looking at the buildout, CMFPlone has been pinned to 4.3.6, so if that product were called directly it would be installed as version 4.3.6. Instead it was installed as the latest version, which made it a lot of work to detect where the problem was and to solve the issue.

            How to detect where the problem is? Which product to pin?
            There were two ways: either start pinning each product one by one, which would eventually solve the issue but is a terrible and redundant approach, or find the specific product creating the problem. With the help of jensens (irc name), who suggested the "grep" method, I searched for which products were calling the latest version of CMFPlone, and found that plone.app.widgets was the product that had not been pinned yet, so buildout had been installing its latest version, which in turn called the latest version of CMFPlone.

            So after all this I found the solution to my problem, and once I pinned plone.app.widgets in the versions.cfg, it worked and travis finally passed.
            Here is the snippet :-

            People on IRC really helped me solve that issue; I learned a lot from it.

            Thanks for reading; hope you enjoyed it!!

            Happy Coding :)

            by prakhar joshi ( at June 09, 2015 10:15 AM

            Sudhanshu Mishra

            GSoC'15: Week two

            Second week of GSoC is over. I learned a lot this week about the assumptions system.

            The goal of this week was to finish the documentation PR which I started a few days back. I think it's complete now and ready for the final review.

            We also merged a very old and crucial PR for the new assumptions started by Aaron. Now we really need to improve performance of satask.

            This week I'll start working on adding assumptions on Symbols to global assumptions context.

            That's all for now. Cheers!

            by Sudhanshu Mishra at June 09, 2015 06:30 AM

            Mark Wronkiewicz

            Progress and Paris

            C-Day plus 13

            ·        I’ve spent the past couple of weeks getting my hands dirty with the underlying physics equations behind solving for forward solutions in source imaging (mostly with Mosher et al., 1999 and Hämäläinen & Sarvas, 1989). The forward solution is a matrix that relates each point on the cortex to the MEEG sensors. Once you have the forward solution, you can use some pretty fancy mathematics (including Tikhonov regularization) to find a pseudoinverse for this matrix – called the inverse solution. Then you’re able to relate the sensor measurements with estimates of what areas of the brain are active, which is the fundamental motivation for source imaging.
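The regularized pseudoinverse idea can be sketched in a few lines of NumPy. This is a bare-bones toy (random matrix standing in for a real forward solution); actual MNE-Python inverse operators also involve noise-covariance whitening and source weighting.

```python
import numpy as np

def tikhonov_pinv(G, lam):
    """Regularized pseudoinverse G^T (G G^T + lam*I)^{-1}: the core of a
    minimum-norm style inverse operator for an underdetermined G."""
    n_sensors = G.shape[0]
    return G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))

# Toy forward matrix: 10 "sensors", 50 "sources" (made-up sizes)
rng = np.random.default_rng(1)
G = rng.standard_normal((10, 50))
W = tikhonov_pinv(G, lam=0.1)   # maps sensor data -> source estimates
```

As lam shrinks toward zero this approaches the ordinary Moore-Penrose pseudoinverse; larger lam trades fidelity for stability against sensor noise.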

            ·        One component of my project is focused on modifying this forward solution, so learning something about the way it was originally formulated has been a useful endeavor. I also found out from the algorithm’s creator that no material exists to help bridge the gap between the idealized and published equations and the optimized and cryptic code. Therefore, I’ve added quite a few comments and docstring improvements to make this easier for the next programmer wrestling with these equations. Later in the summer, I’m hoping to use this knowledge to find the relationship between the cortical surface and the multipolar moment space I started describing in my last post (see the discussion on SSS and spherical harmonics). This should provide a number of benefits that I’ll discuss when I pick this portion of the work back up.

            ·        For now, I’m going to try to make some headway on the first aim of my project: Maxwell filtering. Again, the Maxwell filter implemented in SSS is just an elegant way to exclude noise from MEEG recordings using physics (see my earlier post about steam bowls and floating frogs for more description or Taulu 2005 for one of the SSS papers). 

            ·        Last thing: the heads of the MNE-Python project have generously offered to fly me to Paris for a weeklong coding sprint in July! I’m pretty excited to finally meet all of the MNE-Python crew and learn more about how the Europeans view science and research. 

            by Mark Wronkiewicz ( at June 09, 2015 05:01 AM

            Aniruddh Kanojia

            GSOC Update - Week 1


            This week we tried using Python's built-in pickle tool to save the state of various layouts. However, most of the qtile classes are not picklable because of some recursive links. We therefore had to make some changes to the classes to make them picklable. Instead of doing a major redesign of the qtile code architecture, we decided to implement the __getstate__() and __setstate__() functions for the classes with problems. This was implemented for almost all layouts and seems to be working fine for them. However, some layouts are still not working.
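A minimal sketch of that approach is below. The class and attribute names here are made up for illustration, not qtile's actual ones: the unpicklable back-reference is dropped in __getstate__ and left for the host application to restore later.

```python
import pickle

class Layout:
    """Toy layout with a recursive/unpicklable back-reference."""
    def __init__(self, qtile=None):
        self.qtile = qtile          # back-reference that breaks pickling
        self.windows = []

    def __getstate__(self):
        state = self.__dict__.copy()
        state['qtile'] = None       # strip the problematic reference
        return state

    def __setstate__(self, state):
        self.__dict__.update(state) # qtile re-attaches the reference later

layout = Layout(qtile=object())     # object() itself is not picklable
restored = pickle.loads(pickle.dumps(layout))
```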

            This is all for this week.

            Aniruddh Kanojia

            by Aniruddh Kanojia ( at June 09, 2015 12:32 AM

            Keerthan Jaic

            GSOC 2015 with MyHDL

            MyHDL is a Python library for hardware description and verification. The goal of the MyHDL project is to empower hardware designers with the elegance and simplicity of Python. Designers can use the full power of Python and elegantly model and simulate hardware. Additionally, MyHDL designs which satisfy certain restrictions can be converted to Verilog or VHDL. This feature can be used to integrate MyHDL designs with conventional EDA flows.

            I started exploring alternative hardware description languages while working on a research project which involved designing an FPGA based network intrusion detection system (NIDS). During the initial stages of the project, we were using System Verilog for designing hardware and writing tests. However, seemingly simple tasks such as generating network packets for testing felt cumbersome. On the other hand, Python is a concise and dynamic language, and a great choice for creating quick prototypes. I decided to try MyHDL because I was already comfortable with Python. MyHDL greatly simplified the process of generating test data and validating results since I was able to use existing python modules in simulation code. MyHDL also enabled me to rapidly iterate on both the hardware and software components of the NIDS.

            Over the course of the project, I got involved in MyHDL’s development and started contributing code. Most notably, I implemented interface conversion support and helped make MyHDL compatible with Python 3. This summer, I have the opportunity to spend a considerable amount of time working on MyHDL since my proposal to the Python Software Foundation has been accepted for Google Summer of Code 2015!

            My agenda for the first few weeks is to clean up the code base and the test suite before I focus on the business logic. MyHDL was first released in 2003. Over the years, it has gathered lots of duplicated and dead code. I think that refactoring the code will make it easier for me and other contributors to extend MyHDL’s functionality.

            After the initial refactoring, I’m going to simplify the core modules. MyHDL relies heavily on parsing the abstract syntax tree (AST) of various code objects. AST parsing code is hard to debug, and sometimes causes incomprehensible errors. I plan to explore various ways to reduce MyHDL’s reliance on AST parsing. My eventual goal is to increase the robustness of MyHDL’s conversion modules and improve MyHDL’s maintainability.
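            As a rough illustration of what that AST work involves (a toy using the standard ast module, not MyHDL's actual converter code):

```python
import ast

# A toy function of the kind a converter would translate to an HDL.
source = """
def logic(a, b):
    y = a and not b
    return y
"""

tree = ast.parse(source)
func = tree.body[0]

# Collect every name that is *read* inside the function body -- the sort
# of information a converter needs to map Python names onto HDL signals.
read_names = sorted({node.id for node in ast.walk(func)
                     if isinstance(node, ast.Name)
                     and isinstance(node.ctx, ast.Load)})
# read_names == ['a', 'b', 'y']
```

            Even this tiny walk hints at why AST-based code is fragile: every new Python syntax form needs explicit handling, and errors surface far from the user's source.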

            I’m currently working on squashing interface conversion bugs and finishing documentation for a stable release before I start making big changes to the code.

            I’ll be writing periodically with status updates and technical details. Thanks for reading!

            June 09, 2015 12:00 AM

            June 08, 2015

            Aman Singh

            Scipy.ndimage module structure and Initial plan for rewriting

            Originally posted on Aman Singh:

            Image processing functions are generally thought of as operating over two-dimensional arrays of values. There are, however, a number of cases where we need to operate on images with more than two dimensions. The scipy.ndimage module is an excellent collection of general image processing functions which are designed to operate over arrays with arbitrary dimensions. The module is an extension library written in C using the Python–C API to improve its speed. The whole module can be broadly divided into 3 categories:-

            • Files containing wrapper functions:- This includes the nd_image.h and nd_image.c files. The nd_image.c file mainly contains the functions required for extending the module in C, viz. all the wrapper functions along with the module initialization function and the method table.
            • Files containing basic constructs:- These are in the files ni_support.c and ni_support.h. These constructs include a mixture of some containers, macros and various functions. These constructs are like arteries of…

            View original 208 more words

            by Aman at June 08, 2015 10:07 PM

            Christof Angermueller

            GSoC week two

            Theano is becoming more colourful! Last week, I

            • improved the graph layout
            • revised colors and shapes of nodes
            • improved the visualization of edges and mouseover events
            • scaled the visualization to the full page size







            You can find two examples here!


            The post GSoC week two appeared first on Christof Angermueller.

            by cangermueller at June 08, 2015 09:22 PM

            Goran Cetusic

            GNS3 architecture and writing a new VM implementation

            Last time I wrote a post I talked about what GNS3 does and how Docker fits into this. What I failed to mention, and some of you already familiar with GNS3 may know, is that the GNS3 software suite actually comes in two relatively separate parts: the GUI and the server.

            The GUI is a Qt-based management interface that sends HTTP requests to specific endpoints on the server, whose address is defined in one of its configuration files. These endpoints normally handle the basic operations you would expect for a VM instance: start/stop/suspend/restart/delete. For example, sending a POST request to /projects/{project_id}/virtualbox/vms creates a new VirtualBox instance. You might run into some trouble getting the GUI to run, especially if you're using the latest development code like me, because in the latest development version Qt4 was replaced with Qt5 and a lot of Linux distributions out there don't yet have Qt5 in their repositories. The installation instructions only deal with Qt4 and Ubuntu, so it's up to you to trudge through numerous compile and requirement errors.

            Generally, every virtualization technology (Dynamips, VirtualBox, QEMU) has a request handler, a VM manager responsible for managing available VM instances, and a VM class that knows how to start/stop/suspend/restart/delete an instance. Going back to the request handler: if we wanted to start a previously created VM instance, sending a POST request to /projects/{project_id}/docker/images/{id}/start would do it. Once the request gets routed to a specific method, that method usually fetches the singleton manager object responsible for that particular VM technology, like VirtualBox or Docker, which can fetch the Python object representing the VM instance based on the ID in the request. This VM instance object has the methods that do various things with the instance, but they are specific to VirtualBox, Docker or Qemu.
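            The dispatch pattern described above can be sketched roughly like this (simplified, hypothetical names rather than the actual GNS3 classes):

```python
class DockerVM:
    """A single container/VM instance; methods are technology-specific."""
    def __init__(self, vm_id):
        self.vm_id = vm_id
        self.status = 'stopped'

    def start(self):
        self.status = 'started'

class Docker:
    """Singleton manager that keeps track of all Docker VM instances."""
    _instance = None

    def __init__(self):
        self._vms = {}

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def get_vm(self, vm_id):
        return self._vms.setdefault(vm_id, DockerVM(vm_id))

# What a handler for POST /projects/{project_id}/docker/images/{id}/start
# boils down to: look up the manager, fetch the VM by id, call the method.
vm = Docker.instance().get_vm('abc123')
vm.start()
# vm.status == 'started'
```

            The singleton keeps all state for one technology in one place, so every request handler resolves IDs through the same registry.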

            Here are some important files that the current Docker implementation uses but there are equivalent files for other kinds of virtual devices:

            • handlers/ - HTTP request handlers calling the Docker manager
            • modules/docker/ - Manager class that knows how to fetch and list Docker containers
            • modules/docker/ - Docker VM class whose methods manipulate specific containers
            • modules/docker/ - Error class that mostly just overrides a base error class
            • schemas/ - request schemas determining allowed and required arguments in requests
            • modules/docker/dialogs - folder containing code for GUI Qt dialogs
              • - Wizard to create a new VM type, configure it and save it as a template from which the VM instances will be instantiated. Concretely, for Docker you choose from a list of available images from which a container will be created but this really depends on what you're using for virtualization.
            • modules/docker/ui - folder with Qt specifications that generate Python files that are then used to define how the GUI reacts to user interactions
            • modules/docker/pages - GUI pages and interactions are defined here by manipulating the previously generated Python Qt class objects
            • modules/docker/ - classes that handle the Docker-specific functionality of the GUI, like loading and saving of settings
            • modules/docker/ - this part does the actual server requests and handles the responses
            • modules/docker/ - general Docker and also container specific settings
            This seems like quite a complicated setup, but the important thing to remember is that if you want to add your own virtualization technology you have to create equivalents of the files above in a new folder. My advice is to copy the files of an already existing, similar VM technology and go from there. All of the classes inherit from base classes that require some methods to exist, otherwise it will fail spectacularly. As is true in every object-oriented language, you should try to leave most of the work to the base classes by overriding the required methods, but if the methods they use make no sense, write custom code that circumvents their usage completely. A lot of the code used in one technology may seem useless and redundant in another. For example, VirtualBox has a lot of boilerplate code that manages the location of its vboxmanage command, which is completely useless in Docker, which uses docker-py to handle all container-related actions. The core of GNS3 is written with modularity in mind, but with all the (very) different virtualization technologies it supports you're bound to do a hack here or there if you don't want to completely restructure the rest of the code a couple of times.
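            The base-class contract can be sketched with Python's abc module (hypothetical names; GNS3's real base classes are more elaborate):

```python
from abc import ABC, abstractmethod

class BaseVM(ABC):
    """Every VM implementation must provide these methods."""

    @abstractmethod
    def start(self):
        ...

    @abstractmethod
    def stop(self):
        ...

class DockerVM(BaseVM):
    def __init__(self):
        self.running = False

    def start(self):  # override the required method
        self.running = True

    def stop(self):
        self.running = False

class BrokenVM(BaseVM):
    pass  # forgot to override start()/stop()

vm = DockerVM()
vm.start()
# DockerVM works; instantiating BrokenVM raises TypeError.
```

            Forgetting a required override fails at instantiation time rather than "spectacularly" at runtime, which is exactly the point of the contract.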

            Cheers until next time when I'll be talking about how to connect vastly different VMs via links.

            by Goran Cetusic ( at June 08, 2015 08:15 PM

            Manuel Jacob

            Progress Report - Week 1 & 2

            GSoC Project Overview

            Since this is my first blog post here, I'll describe shortly what my GSoC project is about. The goal is to bring forward PyPy's Python 3.x support. As this is a large project, it can't be finished this summer. However, here is a rough schedule:

            1. Release PyPy3 2.6.0 (notably for CFFI 1.1 support, around June 12th)
            2. Finish Python 3.3 support (scheduled for release around July 3rd)
            3. Work on Python 3.4 support (open project)

            Current Status

            In the first week I did a lot of merging. Merging is necessary to bring the latest features and optimizations from the default branch, which implements Python 2.7, to the py3k branch, which implements Python 3.2. The previous merge was done on February 25th, so this created a lot of merge conflicts.

            In the second week I spent most time fixing tests (all tests in the py3k branch pass now) and bugs reported by users in PyPy's issue tracker.

            Next week I will fix more issues from the bug tracker and release PyPy3 2.6.0.

            by Manuel Jacob at June 08, 2015 07:04 PM

            Mridul Seth

            GSoC 2015 – Python Software Foundation – NetworkX – Biweekly report 1

            NetworkX is preparing for a new release, v1.10, and as discussed we are planning to deprecate the *_iter functions of the base classes of Di/Multi/Graphs. They are now deprecated.

            I started working on the first part of my project, removing *iter functions. Till now I have worked on the following functions:

            • `edges_iter` for Di/Multi/Graphs
            • `out_edges_iter` and `in_edges_iter` for Multi/Digraphs
            • `neighbors_iter` for Di/Multi/Graphs
            • `predecessors_iter` and `successors_iter` for Multi/DiGraphs

            The progress can be seen in this pull request.
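            The usual deprecation pattern looks something like this (a simplified sketch, not the actual NetworkX code):

```python
import warnings

class Graph:
    """Tiny stand-in for the base class, showing the *_iter deprecation."""
    def __init__(self):
        self._adj = {}

    def add_edge(self, u, v):
        self._adj.setdefault(u, set()).add(v)
        self._adj.setdefault(v, set()).add(u)

    def edges(self):
        # Yield each undirected edge exactly once.
        seen = set()
        for u, nbrs in self._adj.items():
            for v in nbrs:
                if (v, u) not in seen:
                    seen.add((u, v))
                    yield (u, v)

    def edges_iter(self):
        # Old name kept around for a release, warning on use.
        warnings.warn("edges_iter is deprecated, use edges instead.",
                      DeprecationWarning, stacklevel=2)
        return self.edges()
```

            Existing code keeps working for a release cycle while the warning nudges users toward the new API.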

            I will also soon set up a wiki page for further discussion and planning of various issues regarding the API, and for investigating them further.

            by sethmridul at June 08, 2015 01:53 PM

            Abraham de Jesus Escalante Avalos

            My motivation and how I got started

            Hello all,

            It's been a busy couple of weeks. The GSoC has officially begun and I've been coding away but before I go heavy into details, I think I should give a brief introduction on how I found SciPy and my motivations as well as the reasons why I think I got selected.

            The first thing to know is that this is my first time contributing to OpenSource. I had been wanting to get into it for quite a while but I just didn't know where to start. I thought the GSoC was the perfect opportunity. I would have a list of interesting organisations with many sorts of projects and an outline of the requirements to be selected which I could use as a roadmap for my integration with the OpenSource community. Being selected provided an extra motivation and having deadlines was perfect to make sure I stuck to it.

            I started searching for a project that was novice friendly, preferably in python because I'm good at it and I enjoy using it but of course, the project had to be interesting. Long story short, I found in SciPy a healthy and welcoming community so I decided this might be the perfect fit for me.

            The first thing I did was try to find an easy-fix issue to get the ball rolling by making my first contribution and letting one thing lead to another, which is exactly what happened; before I knew it I was getting familiarised with the code, getting involved in discussions and exchanging ideas with some of the most active members of the SciPy community.

            In short, what I'm trying to say is: find your motivation, then find something that suits that motivation and get involved, do your homework and start contributing. Become active in the community and things will follow. Even if you don't make it into the GSoC, joining a community is a great learning opportunity.


            by Abraham Escalante ( at June 08, 2015 03:45 AM

            June 07, 2015

            Chad Fulton

            State space diagnostics

            State space diagnostics

            It is important to run post-estimation diagnostics on all types of models. In state space models, if the model is correctly specified, the standardized one-step ahead forecast errors should be independent and identically Normally distributed. Thus, one way to assess whether or not the model adequately describes the data is to compute the standardized residuals and apply diagnostic tests to check that they meet these distributional assumptions.

            Although there are many available tests, Durbin and Koopman (2012) and Harvey (1990) suggest three basic tests as a starting point: a Jarque–Bera test for normality of the residuals, a Ljung–Box test for serial correlation, and a test for heteroskedasticity.

            These have been added to Statsmodels in this pull request (2431), and their results are added as an additional table at the bottom of the summary output (see the table below for an example).

            Furthermore, graphical tools can be useful in assessing these assumptions. Durbin and Koopman (2012) suggest the following four plots as a starting point:

            1. A time-series plot of the standardized residuals themselves
            2. A histogram and kernel-density of the standardized residuals, with a reference plot of the Normal(0,1) density
            3. A Q-Q plot against Normal quantiles
            4. A correlogram

            To that end, I have also added a plot_diagnostics method which creates the four plots listed above.
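            For instance, the Jarque–Bera normality statistic can be computed by hand from the standardized residuals (a plain-Python sketch, not the statsmodels API):

```python
def jarque_bera(residuals):
    """JB = n/6 * (S^2 + (K - 3)^2 / 4); values near 0 support normality."""
    n = len(residuals)
    mean = sum(residuals) / n
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    skew = (sum((r - mean) ** 3 for r in residuals) / n) / m2 ** 1.5
    kurt = (sum((r - mean) ** 4 for r in residuals) / n) / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3) ** 2 / 4.0)

# Perfectly symmetric residuals have zero skewness, so only the
# kurtosis term contributes to the statistic.
jb = jarque_bera([-2.0, -1.0, 0.0, 1.0, 2.0])
```

            Under the null of normality the statistic is asymptotically chi-squared with two degrees of freedom, which is how the p-value in the summary table is obtained.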

            by Chad Fulton at June 07, 2015 10:21 PM

            Vipul Sharma

            GSoC 2015: Coding Period (25th May - 7th June)

            The coding period started 2 weeks ago, from 25th May. In these two weeks I worked according to the timeline which I created during the community bonding period.

            I worked on implementing the Ajax-based duplicate ticket search feature. Initially, I thought that it would have to be created from scratch using Whoosh and jQuery, but it turned out that something similar was already implemented in the /+search view. The /+search view has an Ajax-based search form which displays content suggestions for various contenttypes, name term suggestions and content term suggestions, along with information like revision id, size, date of creation and file type. So, I reused the code of the /+search view for the duplicate ticket search. I made some changes to the existing code to allow it to search tickets, and added a few lines of code to the ajaxsearch.html template to render duplicate ticket suggestions, since the /+search view displayed some results which were not necessary as suggestions for duplicate tickets. Only a few lines of CSS were required to keep the rendered results tidy.

            Obviously, I was not able to code it in one go, as I don't have much experience working on a large codebase. But my mentors were very helpful. They guided me, reviewed my code and gave me suggestions on how to remove a few redundant code segments. Their advice was really helpful: I reduced a lot of redundant code and now it looks pretty.

            This is how it looks:

            I am currently working on the file upload feature, so that a user can upload any patch file, media or screenshot. I've implemented it by creating a new item for every file uploaded. I have a few issues regarding how to deal with item_name and itemids which I am discussing with my mentor, and I hope that I'll figure them out very soon :)

            by Vipul Sharma ( at June 07, 2015 08:27 PM

            Jaakko Leppäkanga

            First two weeks

            Time flies... It's been two weeks already and I've got up to speed with the coding despite the slow start. I was occupied with other stuff for the first couple of days of coding and had to work extra hours over the weekends to catch up.

            The epoch viewer is nearly done and you can expect a merge early next week. The biggest difficulties I faced concern compatibility issues with OSX, which are really hard to solve, since I don't have a mac at my disposal. I also faced some problems with GIT, but that's nothing new. Once we get the compatibility issues with OSX sorted out, I can start implementing butterfly plotter for the epochs. I have a pretty good picture of how to implement it and I think it'll be ready pretty soon. Overall, I feel pretty satisfied with the plotter thus far. Here's a link to the pull request:

            by Jaakko ( at June 07, 2015 05:03 PM

            Rupak Kumar Das

            Two weeks in

            Time flies quickly! It has been nearly two weeks since the start of the coding period so let me give a small report on my progress.

            The first week was spent in trying to figure out how to create the Slit plugin for Ginga. It is an extension of the existing Cuts plugin, but instead of plotting the pixel values, it plots the time values. Unfortunately, I was unsure of the implementation so it was a fruitless week spent in reading the Cuts code to figure out how it plotted the data and how to modify it.

            So I decided to work on the Save feature instead, which will save the cuts plot and data. Fortunately, in a meeting with my mentor, it was decided that I should focus on the Cuts plugin for the time being, improving it by fixing bugs and adding a new type of curved cut to plot the data (so that Slit would have the same functionality and stability). I have implemented the save function and am currently working on the curved cut. Hopefully, I will have completed it in a few days' time.

            I will try to be more productive in the coming weeks. The next update will be in two weeks so till then, ciao!

            by Rupak at June 07, 2015 08:04 AM

            June 06, 2015

            Pratyaksh Sharma

            Sampling is one honking great idea -- let's do more of that!

            If the math below appears garbled, read this post here.

            A primer on inference by sampling

            The quintessential inference task in graphical models is to compute $P(\textbf{Y}= \textbf{y} | \textbf{E}=\textbf{e})$, which is the probability that the variables $\textbf{Y} = (Y_1, Y_2, ..., Y_n)$ take the values $\textbf{y} = (y_1, y_2, ..., y_n)$, given that we have observed an instantiation $\textbf{e} = (e_1, e_2, ..., e_m)$ to the other variables of the model.

            It turns out that this seemingly unassuming task is computationally hard ($\mathcal{NP}$-hard to be precise, though I will not bother with the details here). The good news, however, is that there exist numerous approximate algorithms that solve the problem at hand. A popular approach is to estimate $P(\textbf{Y} | \textbf{e})$ by sampling.

            Let us surmise that we have a set of 'particles', $\mathcal{D} = \{\xi[1], ..., \xi[M]\}$, where $\xi[m]$ represents an instantiation of all variables $\mathcal{X}$ of our graphical model. Then we can estimate $P(\textbf{y})$ from this sample as simply the fraction of particles in which we have seen the event $\textbf{y}$,
            $$ \hat{P}_\mathcal{D}(\textbf{y}) = \frac{1}{M} \sum_{m=1}^{M}\mathbb{I}\{\xi[m] = \textbf{y}\}$$
            where $\mathbb{I}$ is the indicator random variable, and $\xi[m]$ also has the overloaded meaning 'assignment to $\textbf{Y}$ in the $m$-th particle'.

            Getting back to our original problem of estimating $P(\textbf{Y} | \textbf{E} = \textbf{e})$, we can instead filter the set $\mathcal{D}$ to comprise of only those 'particles' which do not contradict the evidence $\textbf{E} = \textbf{e}$, and then proceed as we did for estimating $P(\textbf{Y})$.

            But, what am I doing?

            Wait, we didn't see how we can generate the sample $\mathcal{D}$. Given a Bayesian network, we'll see that it is straightforward to do so. But things are rather onerous in the case of Markov networks.

            A Bayesian network is represented by a directed graph $\mathbb{G} = (V, E)$. Let us impose a topological ordering on the vertices $V$. Then we start with $V_1$, which does not have any parents. We sample $V_1$ from its possible values $\{v^1_1, v^2_1, ...\}$, according to the given probability weights $P(V_1)$. Say we sampled $V_1 = v^t_1$. Now we can go ahead and sample the next variable $V_2$ in the topological ordering.

            The fact that we are proceeding in this order, makes known the sampled values of a variable's parents before we head out to sample that variable. This is known as forward sampling. 

            To answer the inference query when we are given an evidence $\textbf{E} = \textbf{e}$, we can merely reject those samples which do not comply with our evidence. This is known as rejection sampling. 

            All is well, until we notice that the number of samples generated by rejection sampling is proportional to $P(\textbf{E})$. The fewer samples we generate, the worse is our approximation of the intended quantity.

            Let us tweak forward sampling to generate only the observed values of the variables in our evidence, and then proceed as before. As we look more closely into this approach, we see that it has some flaws.

            Imagine the simple Bayesian network with just two nodes and an edge between them: [Intelligence]-->[Grade]. Suppose we have the evidence that the grade is an $A$. In our usual way of sampling, we sample a value for [Intelligence]. Then, instead of sampling the value of [Grade], we take it to be $A$. In the resulting particles, the value of [Intelligence] will be distributed as given by the probability weights associated with that variable. Since we have observed [Grade]$=A$, we should expect the particles to demonstrate a higher value of [Intelligence] (assuming higher intelligence tends to produce better grades), but this is not the case.

            The issue here is that the observed values of the variables are not able to influence the distribution of their ancestors. We make another tweak to address this. Now, with each particle, we shall associate a weight - the likelihood of that particle being generated. 
            $$w = \prod_{i=1}^m{P(\textbf{E}_i = \textbf{e}_i | Parents(\textbf{E}_i))}$$ 
            We then modify our estimation of $P(\textbf{y} | \textbf{e})$ as follows,
            $$\hat{P}_\mathcal{D}(\textbf{y}|\textbf{e}) = \frac{\sum_{m=1}^{M}w[m]\cdot{\mathbb{I}\{\xi[m] = \textbf{y}\}}}{\sum_{m=1}^{M}{w[m]}}$$

            This is known as likelihood weighted sampling. And with this, we have a decent enough way to estimate the conditional probability queries on a Bayesian network.

            In case you are still wondering, I've implemented forward sampling, rejection sampling and likelihood-weighted sampling so far. See the pull request
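            The [Intelligence]-->[Grade] example above can be worked end to end in a few lines of plain Python (the probabilities are made up for illustration, and this is not any library's real API):

```python
import random

# P(Intelligence) and P(Grade | Intelligence) for the toy network.
P_INTEL = {'high': 0.3, 'low': 0.7}
P_GRADE = {'high': {'A': 0.8, 'B': 0.2},
           'low':  {'A': 0.2, 'B': 0.8}}

def sample_from(dist):
    """Draw a value from a {value: probability} distribution."""
    r, cum = random.random(), 0.0
    for value, p in dist.items():
        cum += p
        if r < cum:
            return value
    return value  # guard against floating-point round-off

def lw_estimate(query, evidence, n=20000):
    """Likelihood-weighted estimate of P(Intelligence=query | Grade=evidence)."""
    num = den = 0.0
    for _ in range(n):
        intel = sample_from(P_INTEL)   # forward-sample the unobserved variable
        w = P_GRADE[intel][evidence]   # weight = likelihood of the evidence
        den += w
        if intel == query:
            num += w
    return num / den

random.seed(0)
estimate = lw_estimate('high', 'A')
# Exact posterior: 0.3*0.8 / (0.3*0.8 + 0.7*0.2) = 0.24/0.38, roughly 0.632
```

            Note how the weights do what plain forward sampling could not: particles with high intelligence carry more weight when the observed grade is an $A$, so the posterior shifts upward from the 0.3 prior.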

            by Pratyaksh Sharma ( at June 06, 2015 07:57 PM

            Aman Jhunjhunwala

            GSOC ’15 Post 2 : Coding Phase : Weeks 1 & 2

            Work Summary

            Report Date : 05th June, 2015

            The coding phase for Google Summer of Code 2015 has officially started, and the following sections (relating to the components worked on) summarize the progress of the two weeks: the successes, difficulties, roadblocks and surprises.

            1.  Data Porting

            The old Astropython web application is a legacy Google App Engine application written in 2010-2011. The entire data is stored in a Google Cloud Datastore in an unstructured format (as a soup of keys and values). During the bonding period it was decided that the data would first be extracted and structured in a friendly format (JSON, YAML, etc.) and then a "population" script would be written to use that data to populate the database, irrespective of the DB technology used in the future.

            To accomplish this, I completed a few Google App Engine tutorials to get acquainted with the technology and also set up the infrastructure on my local machine. This included the official Google "Guestbook" tutorial, which was great fun to learn.

            To extract the data, I first needed to get the old Astropython app running on localhost, use a "bulkloader dump" of the data (provided to me by my mentor) to restore it into my local app, and then pull the data from there into whatever format I liked. Just two days into using GAE, this was extremely challenging, but I was glad that I could get through. Initially, I was trying to update the old code so that it could run with the latest versions of the dependencies (GAE SDK, WebApp2 and Django). This took a lot of time and code manipulation, but there were too many legacy dependencies (libraries, functions, etc.) to satisfy, so I ditched that approach and instead re-created the environment used to code the project: Python 2.5, Django 0.96 and webapp beta. A few manipulations later, I got the old Astropython web app running on my localhost.

            Next, I restored the data to my local app using a local datastore and tried to extract the records, but was unable to do so for two days. It was here that I reached a dead end: any method I tried would output nothing but a random mess of data. In the end, I replaced the view function with a new view that showed the XML of the data using the built-in to_xml() function. Then a simple Python script converted that XML to JSON, keeping in mind the character encoding problems. After this, a Python population script parsed the JSON data and stored it in our new web app's database in the desired way. The script is found in the project's GitHub repo.
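            The XML-to-JSON step can be sketched with the standard library alone (the element and field names here are hypothetical; the real dump's layout differed):

```python
import json
import xml.etree.ElementTree as ET

# A made-up, bulkloader-like XML dump for illustration.
XML_DUMP = """
<entities>
  <entity kind="Post">
    <property name="title">Hello GAE</property>
    <property name="author">aman</property>
  </entity>
</entities>
"""

def xml_to_json(xml_text):
    """Flatten <entity>/<property> records into a JSON array."""
    root = ET.fromstring(xml_text)
    records = []
    for entity in root.findall('entity'):
        record = {'kind': entity.get('kind')}
        for prop in entity.findall('property'):
            record[prop.get('name')] = prop.text
        records.append(record)
    return json.dumps(records, ensure_ascii=False)
```

            Keeping ensure_ascii=False preserves any non-ASCII text from the old datastore instead of escaping it.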

            A very difficult ,tiring and challenging one week with little sleep, but mission accomplished in the end !

            2.  Teach and Learn Section

            Now that I had entered my comfort zone, things began to move more quickly and comfortably. Framing the models and setting up moderation abilities on them was followed by creating a multi-step creation wizard for creating a tutorial / code snippet / tutorial series / educational resource. Initially I was using Django form-tools's built-in wizard class, but it was quite inflexible for our needs, so I decided to create my own custom creation wizard views. After getting the basic infrastructure up, I modified it to be more robust and added protection to it, making it usable and mature (saving unsubmitted forms, resuming from where the user stopped, etc.).

            After finishing this, I jumped into coding the infrastructure for displaying each article (from any of the categories), after which a secure anonymous voting mechanism was added which takes into account the IP address and user ID of a user to generate a unique token for a vote.
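            One way such a token can be derived (a hashed sketch with hypothetical field names, not the app's actual code):

```python
import hashlib

def vote_token(ip_address, user_id, article_id, secret='change-me'):
    """Hash the voter's identifying fields into an anonymous, unique token.

    The same (IP, user, article) triple always maps to the same token, so
    duplicate votes can be rejected without storing the raw IP or user id.
    """
    raw = f'{secret}:{ip_address}:{user_id}:{article_id}'
    return hashlib.sha256(raw.encode('utf-8')).hexdigest()
```

            Storing only the digest alongside each vote lets the app enforce one vote per user per article while keeping the vote itself anonymous.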

            This was followed by finishing the last part of the basic Teach and Learn section infrastructure: the aggregation pages, which display all the posts and sort them by popularity or creation date. Lastly, pagination abilities were incorporated and CSS styling was completed to end the week on a high!

            It has been two excellent weeks of coding with a lot of new things to learn. Before the midterm evaluation, the Teach and Learn section is expected to be absolutely complete, mature and tested. The next update will be in two weeks! Till then, happy coding!

            by amanjjw at June 06, 2015 04:02 PM

            June 05, 2015

            Manuel Paz Arribas