Tumblelog by Soup.io

June 04 2018

How to configure Sass and Bower with django-compressor - part 1 (local config)

Quick guide on how to configure Django with Sass, Bower and django-compressor

Django CORS

An introduction to CORS and how to enable it in Django.

June 03 2018

Reactify Django

Reactify Django is coming. ...

June 01 2018

Django development with Docker — Testing, Continuous Integration and Docker Hub

May 31 2018

Ansible provision/deploy setup

I never got around to writing down the ansible setup I figured out (together with others, of course, at my previous job) for deploying/provisioning django websites.

The whole setup (as a cookiecutter template) can be found at https://github.com/reinout/cookiecutter-djangosite-template . The relevant code is in the ansible/ directory. Note: this is a "cookiecutter template" from which you can generate a project, so you'll see some {{ }} and {% %} markers: when you create the actual project, these items will be filled in.

My goal was to keep the setup simple and safe. With "safe", I mean that you normally cannot accidentally do something wrong.

And it was intended for getting one website project onto the server(s). It is not a huge ansible setup to set up the whole infra in one fell swoop.

First the inventory files:

  • Yes, multiple: there are two of them. production_inventory and staging_inventory. The safe aspect is that we used to have a single inventory file with [production] and [staging] headings in it. If you wanted to update staging, you had to add --limit staging on the command line. If you forgot that...

    With two separate files, you never have this problem. Accidents are much less likely to occur.

  • The simple aspect is that the variables are right there in the inventory. There are only a few variables after all: servername, hostname, checkout (master or tag).

    A "problem" with ansible is that you can place variables in lots of places (playbook, inventory, host variable file, group variable file and another two or three I don't remember right away). So where do you look? I figured that the inventory was the right place for this kind of simple setup.
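A minimal sketch of what such an inventory file could look like (hostnames and values here are made up, not from the actual template):

```ini
# staging_inventory
[web]
s-web-01.example.org

[web:vars]
servername=staging.example.org
checkout=master
```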


Second, the ansible playbooks:

  • There are two: a provision.yml and a deploy.yml. Provisioning is for one-time setup (well, you'll probably want to change things occasionally, but you'll rarely run this one). Deploy is for the regular deploys of the actual code: the stuff you regularly do.

  • Provision should be run as a user that can do sudo su on the target machine. Provisioning installs packages and adds the nginx config file in /etc/. And it creates a user for running/deploying the django site (in my previous job, this user was historically called buildout).

  • The deploy playbook connects as the abovementioned deploy user, does a "git pull" (or whatever deploy mechanism you prefer), runs the database migrations and so on.

  • Now, how can the person who deploys connect as that deploy user? Well, the provision playbook creates the deploy user ("buildout"), disables the password and adds the public ssh keys of the deployers to the /home/buildout/.ssh/authorized_keys file:

    - name: Add user "buildout" and set an unusable password.
      user: name=buildout password='*' state=present shell="/bin/bash"

    - name: Add maintainers' ssh keys so they can log in as user buildout.
      authorized_key: user=buildout key=https://github.com/{{ item }}.keys
      with_items:
        - reinout
        - another_colleague

    It is simple because you only need a very limited number of people on any server with sudo rights. Or a very limited number of people with the password of a generic admin account. Re-provisioning is basically only needed if something changed in the nginx config file. In practice: hardly ever.

    It is simple because you don't need to give the deployers each a separate user account (or access to a password): their public ssh key is enough.

    It is safe because it limits the (root-level) mistakes you can make during regular deploys. And the small number of users you need on your system is also an advantage.

  • The ansible playbooks are short. Just a couple of tasks. Even though I'm normally all in favour of generic libraries and so: for a 65 line playbook I personally don't need the mental overhead of one or two generic ansible roles.

    The playbooks do basically the same thing I did years earlier with "Fabric" scripts. So: the basic structure has quite proven itself (for me).
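A sketch of what such a short deploy playbook could look like (the repository URL, paths and task names are made up for illustration):

```yaml
# deploy.yml: connect as the deploy user, update the code, migrate.
- hosts: web
  remote_user: buildout
  tasks:
    - name: Update the checkout to the configured branch or tag
      git:
        repo: https://github.com/example/mysite.git
        dest: /srv/mysite
        version: "{{ checkout }}"

    - name: Run the database migrations
      django_manage:
        command: migrate
        app_path: /srv/mysite
```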

There are all sorts of approaches you can take. Automatic deploys from your Jenkins or Gitlab, for instance. That'd be something I'd like to try once. For not-automatically-deployed projects, a simple setup such as the one I showed here has much to recommend it.

GeoDjango and PostgreSQL

I recently had to set up GeoDjango on a site that used PostgreSQL as a database. As the online instructions for doing this were less than totally clear, I decided to write down how I did it.

May 30 2018

Django Local 404 Page

Learn how to display and customize a 404 page locally.

Best React Books 2018

List of current React/JavaScript books.

May 29 2018

Angular 6|5 Tutorial: Integrating Angular with Django

In the previous Angular 6 tutorial we've seen how to build a CRUD web application with a Django REST framework API back-end. In this tutorial we'll see how we can integrate the Angular 6 front-end with the Django back-end.

After creating both the back-end and front-end apps we need to integrate them, i.e. instead of taking the approach where both applications are completely separated, we'll serve the front-end application using a Django view. In development we'll have both the Django development server and the Angular/Webpack dev server running, but for production we'll only need a Django server.

To use this approach you need to tweak the Webpack settings of your front-end project, use the webpack-bundle-tracker plugin (from npm) and use the django-webpack-loader package (from PyPI).

The webpack-bundle-tracker is a Webpack plugin that generates a stats file containing metadata about the assets of your front-end application generated by Webpack.

We'll start by installing the webpack-bundle-tracker module then update the Webpack configuration file to make use of this plugin.

npm install webpack-bundle-tracker --save

Next you need to eject the Webpack configuration from the Angular 6 CLI using

ng eject

If the ejection is successful you'll find a webpack.config.js in the root of your folder.

Open webpack.config.js, import BundleTracker from webpack-bundle-tracker, then locate the plugins entry and add the plugin:

var BundleTracker = require('webpack-bundle-tracker');

module.exports = {
    // ...
    "plugins": [
        // ...
        new BundleTracker({filename: '../webpack-stats.json'})
    ]
};

Next add the publicPath setting. In development the bundles are served by the Webpack dev server, so output.publicPath should point at it (assuming the default port 4200):

"output": {
    "path": path.join(process.cwd(), "dist"),
    "filename": "[name].bundle.js",
    "chunkFilename": "[id].chunk.js",
    "crossOriginLoading": false,
    "publicPath": "http://localhost:4200/"
},

"devServer": {
    "historyApiFallback": true,
    "publicPath": ""
},

If you serve your application you'll find a webpack-stats.json file one folder up, i.e. in the root of the Django project.

After ejecting your Webpack configuration from the Angular 6 CLI you won't be able to use ng serve; instead you'll have to use npm run start to serve your application.

This is a screenshot of a project where you can see the webpack.config.js file in the front-end application and a generated webpack-stats.json in the root folder of the project.

Next let's install the django-webpack-loader package, which takes care of reading webpack-stats.json and inserting the assets into a Django template.

Head back to your terminal, then run the following command:

pip install django-webpack-loader

In your settings.py file add webpack_loader to the list of installed apps:

INSTALLED_APPS = [
    # ...
    'webpack_loader',
]

Then add this configuration object (WEBPACK_LOADER is the setting name django-webpack-loader reads):

WEBPACK_LOADER = {
    'DEFAULT': {
        'BUNDLE_DIR_NAME': '',
        'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'),
    }
}

You can find more settings in the django-webpack-loader documentation.

Serving the Angular 6 Application

Now let's create the view to serve the Angular 6 application. Open core/views.py and add the following view function:

from django.shortcuts import render

def home(request):
    return render(request, 'core/home.html')

Next you need to create the home.html template so create a templates/core folder inside the core application then add a home.html with the following content:

{% load render_bundle from webpack_loader %}
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <base href="/">
    <title>A Simple CRM with Django and Angular 6</title>
</head>
<body>
    <app-root></app-root> <!-- default Angular CLI root selector (assumption) -->
    {% render_bundle 'inline' %}
    {% render_bundle 'polyfills' %}
    {% render_bundle 'styles' %}
    {% render_bundle 'vendor' %}
    {% render_bundle 'main' %}
</body>
</html>

Now you need to add the URL for the home page in urls.py:

from django.contrib import admin
from django.urls import path
from core import views as coreviews

urlpatterns = [
    path('', coreviews.home, name='home'),
    path('admin/', admin.site.urls),
]
That's it. You should now be able to see your Angular 6 page when visiting the Django web application.

Fixing Hot Code Reload

If you change the source code of your front-end application, you won't see the updates without manually refreshing the page when you are navigating your application from the Django development server. That means HCR (hot code reload) is not working properly, so open webpack.config.js and add the following setting:

"devServer": {
    "historyApiFallback": true,
    "publicPath": "",
    "headers": {
        'Access-Control-Allow-Origin': '*'
    }
},

That's because http://localhost:8000 sends requests to the Webpack dev server (http://localhost:4200) to get source code changes, so we need to update the headers to allow requests from all origins.


Throughout this tutorial we have integrated an Angular 6 front-end with a Django REST API back-end.

May 28 2018

QuerySet Filters on Many-to-many Relations

May 25 2018

Djangocon: friday lightning talks

(One of my summaries of a talk at the 2018 European djangocon.)

The stenographers - Sheryll and Andrew

The stenographers are the ones that provide LIVE speech to text subtitles for the talks. Wow.

They use "shorthand machines" for steno. It is like a piano where you press multiple keys to form words. On Wednesday the speakers spoke 46,000 words...

How fast do people talk? Anything between 180 and 250 words per minute, though 300 also occurs. The handiest are speakers who speak at a regular tempo. And not too fast.

They ask presenters for notes and texts beforehand. That helps the software pick the right words.

Pytest-picked - Ana Paula Gomes

Say you have a codebase whose tests take a long time. You've just changed a few files and don't want to run the full test suite before the commit. You could run git status and figure out which files to test.

But you can do it automatically with pytest-picked.

It is a small plugin, but she wants to improve it with better guessing and also with support for testing what changed on a branch.

How to build a treehouse - Harry Biddle

A talk about building actual treehouses!

One of the craziest is "Horace's cathedral", a huge one that was closed by the fire brigade for fire risks...

If you're going to do it, be nice to your tree. Look at "garnier bolt" for securing your tree house.

Give the tree room to grow.

Recommended book: "the man who climbs trees".

What about DRF's renderers and parsers - Martin Angelov

They started with just plain django. Then they added DRF (django rest framework). Plus react.

Then it became a single page app with django, drf, react, redux, sagas, routers, etc. (Well, they're not quite there yet).

Python and javascript are different. Python uses snake_case, javascript uses camelCase. What comes out of django rest framework also often is snake_case, which looks weird in javascript.

They tried to fix it.

  • JS utils. On-the-fly translation. Hard when debugging.
  • React middleware.
  • Then he used a custom CamelCase renderer in django rest framework :-)

Git Menorahs considered harmful - Shai Berger

A bit of Jewish tradition: a menorah is a 7-armed candle holder (a temple menorah) or a 9-armed one (a Hanukkah menorah).

What do we use version control for?

  • To coordinate collaborative work.
  • To keep a record of history.

But why do we need the history? To fix mistakes. Figure out when something was broken. Reconsider decisions. Sometimes you need to revert changes.

What do we NOT need in the history? The actual work process. Commits for typo fixes, for instance.

A git menorah is a git repo with many branches and changes. Bad. Use git rebase to reshape the history of your work before pushing. Follow the example of the django git repository.

5 minutes for your mental health - Ducky

What is really important for your mental health?

  • Sleep!
  • Exercise.

As weird as it sounds, you can 'feel' in your body how you're feeling. There are some exercises you can do. Close your eyes, put your mind in your fingertips. Slowly move inside your body. Etc. "Body scan".

Meta programming system, never have syntax errors again - Ilja Bauer

MPS is a system for creating new custom languages. It has a "projectional editor".

A normal editor shows the code, which is converted to an AST (abstract syntax tree) and it gets byte-compiled.

A projectional editor has the AST as the basis and "projects" it as .py/java/whatever files purely as output. The advantage is that renaming stuff is easier.

Save ponies: reduce the resource consumption of your django server - Raphaël Barrois

He works on a big django project: 300 models. Startup time is 12 seconds... Memory consumption is 900MB. The first query afterwards takes 4 seconds.

How does it work? They use uwsgi. There is a simple optimization. By default, uwsgi has lazy = true. This loads apps in the workers instead of in the master. He disabled it. Startup became much slower, but the first query could then be handled much quicker.

Another improvement. They generate a fake (test) request and fire it at the server upon startup. So before the workers are started, everything is already primed and active. This helps a lot.

Djangocon: Graphql in python and django - Patrick Arminio

(One of my summaries of a talk at the 2018 European djangocon.)

For APIs, REST is the normal way. But REST is not perfect.

You can, for instance, have too many requests. If you request a user (/users/1) and the user has a list of friends, you have to grab the user page of all those friends also. You could make a special endpoint where you get the names of the friends, but can end up with many endpoints (/users-with-friends/1, /users-with-friends-and-images/1). Or with very big responses that contain everything you might need.

Graphql was created to solve some of these issues. You have a single /graphql endpoint, which you POST to. You post the data structure that you want to get back. There's the option of adding types. So you're not bound to pre-defined REST responses, but you can tell exactly how much or how few you need and in what form.
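For example, instead of several REST endpoints, a client could POST a single query describing exactly the shape it wants back (the field names here are hypothetical, not from the talk):

```graphql
{
  user(id: 1) {
    name
    friends {
      name
    }
  }
}
```

The response mirrors the query: one user object containing only a name and a list of friends, each with only their name.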

Almost every graphql instance has introspection enabled. You can discover the API that way, including which data types to expect.

In python, you can use the graphene library. From the same authors, there's graphene-django.

There is also integration for django REST framework in graphene-django. Quite useful when you already have all of your serializers.

For trying out a graphql API, https://github.com/graphql/graphiql is a handy in-browser IDE to "play" with it.

(He demoed it: looked nice and useful.)

What about security/authentication? Standard session based authentication. Or you can use an authentication header.

What about malicious queries? You could get big exploding responses by following a foreignkey relation back and forth (author->posts->authors->posts etc).

In the end, graphql is quite handy, especially when you're working with many developers. With REST, you'd have just finished one response when the UI people were already clamoring for other, different responses. That problem is gone with graphql.


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon: it's about time - Russell Keith-Magee

(One of my summaries of a talk at the 2018 European djangocon.)

Handling time and timezones is complex and painful.

It starts with leap years. Every four years, there is a leap year. Except every 100 years. Except except every 400 years. The latter two rules are from the gregorian calendar, which replaced the Julian calendar in 1582 (at least, in parts of the world...)

When did the "October revolution" happen? Well, either 25 October or 7 November 1917. Russia still used the Julian calendar at that time :-)

Year? Well, some countries use years based on the lunar cycle... Or they count from a different starting point.

In IT we had the Y2k problem. In 20 years' time there'll be the 32-bit epoch overflow. It already crashed the AOL mail servers (!) years ago.

In python, there's the time module. It represents how your computer thinks about time. It isn't terribly useful if you actually want to do something with it that you're going to show to the user.

The datetime module is the one you'll probably want to use. But do you use a date or a datetime? A date doesn't have a timezone. The day Hawking died, if you asked Google about it from the USA, you'd get "he died tomorrow"...

You'll need to use datetimes. With timezone info, not "naïve datetimes".
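The difference is easy to show with Python's datetime module: a naïve datetime has no tzinfo attached, an aware one does (a small illustration using only the standard library):

```python
from datetime import datetime, timezone

# A naïve datetime: no timezone information attached.
naive = datetime(2018, 5, 25, 12, 0)
assert naive.tzinfo is None

# An aware datetime: the timezone is explicit (UTC here).
aware = datetime(2018, 5, 25, 12, 0, tzinfo=timezone.utc)
assert aware.tzinfo is not None

# ISO 8601 output, handy for logs:
print(aware.isoformat())  # 2018-05-25T12:00:00+00:00
```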

You absolutely have to use the pytz module. It is regularly updated: this year alone it already had 5 updates. Countries change their timezone. Sometimes retroactively.

He showed a couple of weird timezones examples. And he didn't even have to leave Australia.

Timezones are date sensitive. Daylight savings time, for instance. And as timezones can change....

Reading in dates: hard to get right. People write dates differently. 1/6/18 can be 1 June 2018, 6 January 2018, 18 June 2001, etc....

And specifying the right timezone is hard. ISO 8601 has a way of specifying the UTC offset, as +08:00 for instance. But then you don't know which of the roughly 20 timezones at that offset it is: you only have the +8 hours, not the timezone...

Oh, there are also leap seconds. 23:59:60 sometimes is a valid time!

Oh, and don't communicate like "it will be released in fall 2018": on the southern hemisphere, fall is in the first half of the year.

A request:

  • Always include a year.
  • Always use the text version of the month (localized).
  • Always include a timezone.
  • Always use ISO8601 in logs.


  • A date means nothing without a time.
  • A time means nothing without a date.
  • Both are nothing without an accurate timezone.

If you thought everything was hard now, what happens when we start colonizing planets? A day on Venus is longer than a year on Venus :-)


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon: banking with django, how not to lose your customers' money - Anssi Kääriänen

(One of my summaries of a talk at the 2018 European djangocon.)

He works for a company (holvi.com) that offers business banking services to microentrepreneurs in a couple of countries. Payment accounts (online payments, prepaid business mastercard, etc). Business tools (invoices, online shop, bookkeeping, etc).

Technically it is nothing really special. Django, django rest framework, postgres, celery+redis, angular, native mobile apps. It runs on Amazon.

Django was a good choice. The ecosystem is big: for anything that you want to do, there seems to be a library. Important for them: django is very reliable.

Now: how not to lose your customers' money.

  • Option 1: reliable payments.
  • Option 2: take the losses (and thus reimburse your customer).

If you're just starting up, you might have a reliability of 99.9%. With, say, 100 payments per day and 2 messages per payment, that's 1 error case per day. You can handle that by hand just fine.

If you grow to 10.000 messages/day and 99.99% reliability, you have 5 cases per day. You now need one or two persons just for handling the error cases. That's not effective.

Their system is mostly built around messaging. How do you make messages reliable?

  • The original system records the message in the local database inside a single transaction.

    In messaging, it is terribly hard to debug problems if you're not sure whether a message was sent or what its contents were. Storing a copy locally helps a lot.

  • On commit, the message is sent.

  • If the initial send fails: retry.

  • The receiving side has to deduplicate, as it might receive messages twice.

You can also use an inbox/outbox model.

  • Abstract messages to Inbox and Outbox django models.
  • The outbox on the origin system stores the messages.
  • Send on_commit and with retry.
  • Receiving side stores the messages in the Inbox.
  • There's a unique constraint that makes sure only unique messages are in the Inbox.
  • There's a "reconciliation" task that regularly compares the Outbox and the Inbox, to see if they have the same contents.

For transport between inbox and outbox, they use kafka, which can send to multiple inboxes.
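The inbox/outbox idea can be sketched in plain Python (no Django models or kafka; all names here are made up): the outbox records everything, the inbox deduplicates, and a reconciliation step compares the two.

```python
class Outbox:
    def __init__(self):
        self.messages = []  # every message ever recorded, in order

    def record(self, message_id, payload):
        self.messages.append((message_id, payload))


class Inbox:
    def __init__(self):
        self.messages = {}  # message_id -> payload; acts as the unique constraint

    def receive(self, message_id, payload):
        # Deduplicate: a retried message with a known id is simply ignored.
        if message_id not in self.messages:
            self.messages[message_id] = payload


def reconcile(outbox, inbox):
    """Return ids recorded in the outbox but missing from the inbox."""
    return [mid for mid, _ in outbox.messages if mid not in inbox.messages]


outbox = Outbox()
inbox = Inbox()
# The third message is a duplicate resend of the second one.
for mid, payload in [(1, "pay"), (2, "refund"), (2, "refund")]:
    outbox.record(mid, payload)
    inbox.receive(mid, payload)

assert len(inbox.messages) == 2          # the duplicate collapsed
assert reconcile(outbox, inbox) == []    # everything arrived
```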

There are other reliability considerations:

  • Use testing and reviews.
  • If there's a failure: react quickly. This is very important from the customer's point of view.
  • Fix the original reason, the core reason. Ask and ask and ask. If you clicked on a wrong button, ask why you clicked on the wrong button. Is the UI illogical, for instance?
  • Constantly monitor and constantly run the reconciliation. This way, you get instant feedback if something is broken.

Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon: don't look back in anger, the failure of 1858 - Lilly Ryan

(One of my summaries of a talk at the 2018 European djangocon.)

Full title of the talk: "don't look back in anger: Wildman Whitehouse and the great failure of 1858". Lilly is either hacking on something or working on something history-related.

"Life can only be understood backwards; but it must be lived forwards -- Søren Kierkegaard" So if you make mistakes, you must learn from it. And if something goes right, you can party.

The telegraph! Old technology. Often used with morse code. Much faster than via the post. Britain and the USA had nation-wide telegraph systems by 1858. So when the first trans-atlantic cable was completed, it was a good reason for a huge celebration. Cannons, fireworks. They celebrated for three weeks. Until the cable went completely dead...

Normally you have a "soft launch": you try out whether everything really works for a while. But they didn't do that in this case. They could only imagine success...

Failures aren't necessarily bad. You can learn a lot from it. But at that time you didn't have agile retrospectives. An agile retrospective at that time was probably looking over your shoulder while you sprinted along the highway, chased by highway robbers...

Now on to Wildman Whitehouse. It looked like he had a lot of inventions to his name, but most of them were slight alterations of other people's work. So basically "fork something on github, change a single line and hey, you have a new repo".

But it all impressed Cyrus Field, a businessman who wanted to build a transatlantic cable. He was undeterred by the fact that electricity was quite new and that it seemed technically impossible to build the thing. He hired Whitehouse to build it.

Another person, a bit more junior than Whitehouse, was also hired: William Thomson. He disagreed on most points with Whitehouse's design (and electrical theory). But there was no time for any discussion, so the project went ahead with Whitehouse's design.

The initial attempts all failed: the cable broke. On the fourth try, it finally worked and the party started.

Now Thomson and Whitehouse started fighting over how to operate the line. Whitehouse was convinced that you had to use very high voltages. He actually did this after three weeks and fried the cable.

Time for reflection! But not for Whitehouse: everyone except him was to blame. But the public was terribly angry. There was an official "retrospective" with two continents looking on. Thomson presented the misgivings he had shared beforehand. Engineers told the investigators that Whitehouse had actually personally (illegally) raised the voltage. So it soon was clear that there was one person to blame... Whitehouse.

Whitehouse was indignant and published a pamphlet... Nobody believed him and he stormed off to his room.

Some lessons for us to learn:

  • Open-mindedness is crucial. You must allow your ideas to be questioned.

  • Treat feedback sensitively. People get defensive if they are attacked in public.

    Before you do a retrospective, make sure it is clear that everyone did their best according to the knowledge in the room.

  • Remember the prime directive.

  • There is no room for heroes in the team. It makes for a terrible team environment.

How did it end? They let William Thomson take over the management. The design and principles were changed. Also good: Thomson was nice to work with. Still, some things went wrong, but they were experienced enough to be able to repair it. After some test period, the cable went into operation.


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon: an intro to docker for djangonauts - Lacey Williams Henschel

(One of my summaries of a talk at the 2018 European djangocon.)



  • Nice: it separates dependencies.
  • It shares your OS (so less weight than a VM).
  • It puts all team members on the same page. Everything is defined down to the last detail.

But: there is a pretty steep learning curve.

Docker is like the polyjuice potion from Harry Potter. You mix the potion, add a hair of the other person, and you suddenly look exactly like that other person.

  • The (docker) image is the person you want to turn into.
  • The (docker) container, that is you.
  • The Dockerfile, that is the hair. The DNA that tells exactly what you want it to look like. (She showed how the format looked).
  • docker build actually brews the potion. It builds the image according to the instructions in the Dockerfile.

Ok. Which images do I have? Image revelio!: docker images. Same for containers revelio: docker container ls.

From that command, you can grab the ID of your running container. If you want to poke around in that running container, you can do docker exec -it THE_ID bash.

Stop it? Stupefy! docker stop THE_ID. But that's just a pause. If it is Avada kedavra! you want: docker kill THE_ID.

Very handy: docker-compose. It comes with Docker on the Mac; for other systems it is an extra download. You have one config file with which you can start multiple containers. It is like Hermione's magic bag: one small file and you can start lots of things.

It is especially handy when you want to talk to, for instance, a postgres database. With two lines in your docker-compose.yml, you have a running postgres in your project. Or an extra celery task server.
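A minimal docker-compose.yml sketch of that idea (service names and image versions here are made up):

```yaml
version: '3'
services:
  web:
    build: .            # your Django image, built from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:10  # the "two lines" that give you a running postgres
```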

Starting up the whole project is easier than with just plain docker: docker-compose up! Running a command in one of the containers is also handier.

The examples are at https://github.com/williln/docker-hogwarts


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon keynote: the naïve programmer - Daniele Procida

(One of my summaries of a talk at the 2018 European djangocon.)

The naïve programmer is not "the bad programmer" or so. He is just not so sophisticated. Naïve programmers are everywhere. Almost all programmers wish they could be better.

Programming is a craft/art/skill. That's our starting point. Art can be measured against human valuation. In the practical arts/crafts, you can be measured against the world (if your bridge collapses, for instance).

Is your craft something you do all the time, like landing a plane? Or are you, as a programmer, more in the creative arts: you face the blank canvas all the time (an empty models.py)?

In this talk, we won't rate along the single axis "worse - better". There are more axes: "technique - inept", "creative - dull", "judgment - uncritical" and "sophistication - naïve". It is the last one that we deal with.

What does it mean to be a sophisticated programmer? To be a real master of your craft? They are versatile and powerful. They draw connections. They work with concepts and ideas (sometimes coming from other fields) to think about and to explain the problems they have to solve.

The naïve programmer will write, perhaps, badly structured programs.

But... the programs exist. They do get built. What should we make of this?

He showed an example of a small-town USA photographer (Mike Disfarmer). He worked on his own with old tools. He had no contact with other photographers. Just someone making photo portraits. Years after his death his photos were discovered: beautifully composed, beautifully lit (through a single skylight...).

Software development is a profession. So we pay attention to tools and practices. Rightfully so. But not everyone is a professional developer.

Not everyone has to be a professional programmer. It is OK if someone learns django for the first time and builds something useful. Even if there are no unit tests. Likewise a researcher that writes a horrid little program that automates something for him. Are we allowed to judge that?

He talked a bit about musicians. Most of them are sophisticated and very good musicians. But some of them also used naïvety. Swapping instruments, for instance. Then you make more mistakes and you play more simply. Perhaps you discover new things that way. Perhaps you finally manage to get out of a rut you're in.

Some closing thoughts:

  • Would you rather be a naïve programmer with a vision or a sophisticated programmer without?
  • If you want to be a professional developer, you should try to become more sophisticated. That is part of the craft.
  • If you're naïve and you produce working code: there's nothing wrong with being proud of it.
  • As a sophisticated programmer: look at what the naïve programmer produces. Is there anything good in it? (He earlier showed bad work of a naïve French painter; his work was loved by Picasso.)

(Suggestion: watch the keynote on youtube, this is the kind of talk you should see instead of read).


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

Djangocon: survival tricks and tools for remote developers - Alessio Bragadini

(One of my summaries of a talk at the 2018 European djangocon.)

He works in a company that has many remote workers. He is one of them. The core question for him: "how do I manage to work remotely in an effective way without much stress".

There is a difference between a remote-friendly company and a remote-first company. Remote-friendly is a company that still has an office. But you're allowed to work from home and you don't have strict work hours. Remote-first changes the entire structure/culture.

Can agile help?

  • Test driven development. First you make the tests. That's handy for remote teams. It sets strict boundaries where you can take over the work in a way that you do not have when sitting behind the same keyboard.
  • No code ownership. Anybody can work on everything all the time.
  • Shared "visual backlog" (boards and so).

But... "agile" also says that teams that work face-to-face are more efficient in conveying information. But note that the agile manifesto is many years old now.

Face-to-face means proximity, but also truthfulness. So: no documents that can mean anything, but truthful conversation. Eh: we are now used to slack, skype, whatsapp. This is 99% of what face-to-face means. (You still miss body language, though, and the pleasure of being near each other.)

And, what is information? Discussion about the project, about code or design. Information about what moves forward: commits, tasks. Info about what moves backwards: bugs, regressions. All these things can be done online. Some of these can even be done better online.

The more you use these online communication channels, the more you become remote-first. Being in the office is almost accidental. The online channels become stronger if you have your machines post feedback there ("failed test!"). Perhaps even automate tasks that you can start via messages in your channel...

You need a shared repository that is accessible everywhere. A channel to communicate on. Automatic testing. CI. Etc.

Some comments:

  • There are some agile "ceremonies" like a daily standup and a sprint review. Do that, but online.
  • Explain what you're going to do and what you've done. Don't work in an invisible way.
  • Establish "work hours" even if you are not in a proper office. This is perhaps counter-intuitive to working remotely.
  • Important: keep the chat channel open during work hours.
  • Do meet face-to-face from time to time.
  • Learn from companies that do remote-first: automattic, balsamiq.

Some tools they use:

  • Test driven development (unittests, selenium).
  • Infrastructure as code (VMs, docker).
  • In-house Gitlab as their git repository and project center.
  • Continuous integrations (with pipelines on gitlab). Due to the automated pipelines, no one has to do those tasks. Otherwise you often have a single person that has to do those kinds of tasks so that he feels a bit separated from the rest. Automate it away instead so that everybody can code.
  • Slack channel with integrations with gitlab and sentry. (via a Chatbot)
  • Gitlab boards, some trello boards.
  • Skype for "agile ceremonies" including the daily standup.
  • Google docs.
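The gitlab pipelines from the list above are just a file in the repository. A minimal sketch of what such a ``.gitlab-ci.yml`` could look like for a django project (the image, filenames and inventory name are illustrative assumptions, not from the talk):

```yaml
# Illustrative pipeline: every push runs the tests automatically,
# so no single person is stuck doing release chores by hand.
stages:
  - test
  - deploy

test:
  stage: test
  image: python:3.6
  script:
    - pip install -r requirements.txt
    - python manage.py test

deploy_staging:
  stage: deploy
  script:
    - ansible-playbook deploy.yml -i staging_inventory
  only:
    - master
```

With the slack integration mentioned above, a failed ``test`` job shows up right in the team channel.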

(He mentioned an article by Lee Bryant about slack, I guess this is it, but I'm not sure).


Photo explanation: station signs on the way from Utrecht (NL) to Heidelberg (DE).

May 24 2018

Djangocon: organizing conferences for learners, how we did it in Namibia - Jessica Upani

(One of my summaries of a talk at the 2018 european djangocon.)

For Jessica, it all began at PythonNamibia2015. She went there, not because she wanted to learn python, but because she was bored. And the conference was free. It had all changed by the end of the conference! Thanks to the organizers that inspired a lot of people there to become active with python.

In 2017, she helped organize a 'computer day'. Talks and panel discussions, poster presentations, software project presentations and workshops. It was aimed at kids!

Especially the panel discussions were aimed at the newcomers: trying to transfer experience. In 2018, there were separate workshop days, amongst them an introductory python course.

There are some differences from organizing a conference for adults:

  • You need to write letters to parents! Convincing them to send their kids to the conference.
  • Behaviour. You need to follow the behaviour of the kids. They're less well-behaved than a room of adults. Adults sit still, kids move around and look for attention. The one giving the presentation needs to be a good teacher, otherwise they won't be able to keep the kids' attention.
  • Different expectations. Kids expect fun and games, they don't expect talks. So you have to tell them beforehand what is going to happen. And you have to adjust the program.
  • Fun and games. Yes, you need it.
  • Speakers. You have to help the speakers. You have to check their talks beforehand: is the content easy enough for the kids? You don't want their eyes to glaze over and their attention to wander.
  • Funding. Important! It is also very hard. She hasn't found any local sponsors till now.
  • Decorations. Yes, put up balloons and other decorations!

What's important:

  • Connect to knowledge they learned in their classes.
  • But: give them more than they get in their regular class.
  • Inspire computing career choices.
  • Passion and knowledge.
  • Get them to challenge themselves. A great way is to let them build something that others have to try out: then it has to be pretty good!

Ok... all this is quite some work. Why go through all that trouble?

She teaches at a school with 215 students. But there is not a single computer. How to let those students get into contact with computers? Organizing such a computer day at the university helped. They could use the university's computers in the weekend that way. And it helped get some sponsorship for computers for her school.

Help is needed here!

Thanks to the python and django foundations, as they were the two that made the computer day 2017 possible. In 2018, there were more sponsors: thanks!


Photo explanation: constructing a viaduct module (which spans a 2m staircase) for my model railway on my attic.

Djangocon: slow food digest better (maintain an old project) - Christopher Grebs

(One of my summaries of a talk at the 2018 european djangocon.)

Full title: "slow food digest better - or how to maintain an 8.5 year old python project without getting lost". Christopher had to maintain such a project - and actually liked it. It was https://addons.mozilla.org.

It started out as a quickly-hacked-together php project. Now it is an almost modern django project. The transition from PHP to the django version took almost 16 months. During that time there were bugs, translation errors, downtime: irritating. The site went fully live in january 2010.

The big advantage of the move to django was that lots of tests were added at that time. The site wasn't anything special. Mostly django. Still quite some raw SQL from the old system. Celery for some tasks.

Mozilla at one time had the "Firefox OS" for mobile phones. For that, they built the "firefox marketplace". The work was based on the addons.mozilla.org code, but with some weird hacks based on which site it was running... During that time the addons.mozilla.org website itself was pretty much left alone.

In 2015 they had to re-animate the old codebase. At that time it was running django 1.6, barely, as lots of deprecated features were still used as long as they stayed available. The javascript code had almost no unit tests (and the site at the time was javascript-heavy).

So: complete rewrite or incremental improvement? They chose incremental improvement. Rewriting a huge site from scratch for a small team... no. And with the existing system they at least had the advantage of all the existing unittests!

The balance they had to make was between "removing technical debt" and "new features".

What they did was create a new react-based frontend as a single page app. This got released in december 2017. So they incrementally rewrote the backend (where they had unittests) and did a full rewrite of the frontend (which had no tests).

One thing they used a lot: feature flags/switches. They used "waffle" for that. It makes it much easier to revert broken implementations as you only have to flip a switch back.
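Waffle itself is a django app that stores its flags and switches in the database, but the underlying pattern is simple. A minimal standalone sketch of that pattern (the switch name and functions are made up for illustration, this is not waffle's actual API):

```python
class FeatureSwitches:
    """In-memory feature-switch registry, sketching the pattern
    django-waffle implements on top of the database."""

    def __init__(self):
        self._switches = {}

    def set(self, name, active):
        self._switches[name] = active

    def is_active(self, name):
        # Unknown switches default to off: new code paths stay dark
        # until someone explicitly enables them.
        return self._switches.get(name, False)


switches = FeatureSwitches()
switches.set("new-frontend", True)


def render_frontend():
    # Both implementations stay in the codebase. Flipping the switch
    # back *is* the whole rollback procedure: no deploy needed.
    if switches.is_active("new-frontend"):
        return "react frontend"
    return "legacy frontend"


print(render_frontend())  # → react frontend
switches.set("new-frontend", False)
print(render_frontend())  # → legacy frontend
```

The point of the pattern: reverting a broken rollout is a data change, not a code change.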

Beware of third party dependencies. They can be a great pain. Especially when you want to upgrade your django version. On the frontend you have a similar problem: there you can be inundated by javascript versions and updates and problems. Make sure your dependencies are up-to-date and maintained. If not, fix it or move to other libraries.

They steered their django upgrades with waffle feature flags. Once the new django version was fully in production, they could remove the feature flags for the old version.

Quality assurance saves lives. Unittests are good, but a real QA team that really tests it discovers lots of problems. And purely the fact that you need to explain the upgrade process to the QA engineers already helps.

And... don't panic. You're there for the long run. Great food needs time, why should your software be different?


Photo explanation: constructing a viaduct module (which spans a 2m staircase) for my model railway on my attic.
