EuroPython 2016 - 5 Days of Pythonic Presentations, Programming and Pintxos
10,000 ft high Conference Overview
- 5 days
- Over 1100 participants
- Daily talks and workshops running from 9am to 6pm
- 7 different tracks
- Over 180 sessions
- Streamed videos at europython.tv
In other words, the experience of the conference is far too much to put down in a blog post, but read on if you would like to hear my personal highlights of the main conference content under the following topics:
- Keynote Highlights
- Python and the Web
- Data Mining with Python
- Python Under the Hood
- Coding Tips and Best Practices
I will follow up with another blog post on the lightning talks and an overview of interesting new tools that I came across.
Rachel Willmer kicked off the conference with a talk titled 20 Years Without a Proper Job, on her experience of working for herself, making money from side projects and writing e-books. Her talk aimed to dispel myths such as "you can't start a successful company alone" and "you can't make money from a book". Rachel's advice for anyone starting their own tech business: learn accounting, target the B2B market, and charge more!
We got to have a peek Inside the Hat as Paul Hildebrandt introduced us to the computer magic and wizardry used by Walt Disney studios to produce their animated blockbusters. The process of making a film passes through around 13 different departments, many of which use their in-house asset management program, which is written using Python and Qt.
Nicholas Tollervey delivered many people's favourite keynote as he gave every conference attendee the best swag EVER: a BBC micro:bit, a small computing device that can run JavaScript and Python (well, MicroPython anyway) and that will be distributed to one million children in the UK. He demoed some pretty cool stuff the micro:bit can do using its LED display, built-in accelerometer and a speech library that can say words or sounds at a given pitch or tone.
Wednesday's out-of-this-world keynote on the Laser Interferometer Gravitational-Wave Observatory (LIGO), delivered by Jameson Graef Rollins, gave a short introduction to gravitational waves and how they are measured at LIGO. Jameson mentioned that Python is steadily replacing Matlab as the de facto language of choice in the gravitational-wave scientific community. There is even an IPython notebook tutorial at https://losc.ligo.org/, along with data sources that allow you to analyse historical data from the observatory yourself!
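For a small taste of that kind of analysis, here is a minimal sketch of my own (standard library only, with entirely synthetic data rather than real LIGO strain): it injects a known sinusoid into noise and recovers its frequency with a naive discrete Fourier transform.

```python
import cmath
import math
import random

# Synthetic "strain": a 50 Hz sinusoid buried in Gaussian noise.
# (Illustrative only -- real strain data comes from losc.ligo.org.)
random.seed(42)
fs, n = 256, 256  # sample rate (Hz) and number of samples
signal = [math.sin(2 * math.pi * 50 * t / fs) + 0.5 * random.gauss(0, 1)
          for t in range(n)]

def dft_magnitude(x, k):
    # Magnitude of the k-th DFT bin, computed the slow, obvious way
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / len(x))
                   for t in range(len(x))))

# The dominant bin should correspond to the injected 50 Hz signal
peak = max(range(1, n // 2), key=lambda k: dft_magnitude(signal, k))
print(peak * fs / n)  # → 50.0
```

Real analyses use NumPy FFTs and proper spectral estimation, of course, but the principle of pulling a periodic signal out of noise is the same.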
Naomi Ceder echoed the thoughts of many Pythonistas in her talk Come for the Language, Stay for the Community, and reminded us not to take the welcoming and beginner-friendly nature of the Python community for granted. Python as a language still faces many challenges: the GIL (see Python Under the Hood), limited adoption in mobile and embedded/IoT technologies, a lingering perception that it is a "hard-core" or difficult language, and room for improvement on diversity and inclusivity. One easy way for anyone to be more involved and declare their support for the community is to become a basic non-voting member of the Python Software Foundation - this helps the PSF get a better idea of who uses Python and how. Anyone who spends more than 5 hours a month on community events or contributing to Python-related open source projects can also sign up as a Contributing Member with voting rights.
Scientist meets Web Dev gave an interesting perspective on how a significant proportion of Python users from the scientific community have a totally different approach to using the language. Gael Varoquaux, co-creator of the scikit-learn library, spoke about how, when it comes to coding, most scientists don't know what "devops" refers to, nor do they care about databases, object-oriented programming or flow control... rather, they prefer to think of everything in terms of arrays!
Python and the Web
David Arcos' talk on Efficient Django outlined how to identify bottlenecks in your Django app and ways to speed up the Django admin, which can get slow, especially for more complex models. It was reassuring to see that many of the tools and techniques he mentioned are put to use in the YPlan backend code (e.g. application profiling, prefetch_related statements where relevant), but I will also be trying out ipdb instead of pdb for debugging, and running some tests with the --keepdb flag to skip migrations.
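Profiling needs no special tooling to get started. Here is a minimal sketch using the stdlib's cProfile (slow_serializer is a made-up stand-in for a real hotspot, not anything from the talk):

```python
import cProfile
import io
import pstats

def slow_serializer(events):
    # Deliberately quadratic: simulates an N+1-style hotspot
    return [[e for e in events if e == x] for x in events]

events = list(range(300))

profiler = cProfile.Profile()
profiler.enable()
slow_serializer(events)
profiler.disable()

# Print the five entries with the most cumulative time;
# the hotspot function shows up at the top of the report
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

For a running Django app you would typically wrap a view or use middleware instead, but the pstats report reads the same way.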
Jose Manuel Ortega gave a very quick overview of a wide variety of Ethical Hacking Tools written in Python, which can be used for pentesting (e.g. port scanning, domain gathering, monitoring network packets, detecting RFI/LFI/XSS vulnerabilities, extracting metadata from PDF and image files, etc). For the full list, check out his slide deck.
At YPlan we use Django Rest Framework, but it's always interesting to hear about alternative tools. Michal Karzynski gave a demo showing how the right API framework can automatically generate interactive API documentation with Swagger UI, with minimal boilerplate code required.
I can imagine this would make a lot of front end developers very happy!
Data Mining with Python
Simulation for Fun and Profit (Vincent Warmerdam) was one of my favourite talks. He gave some demos on how inference from sampling, rather than mathematically calculated probabilities, can provide interesting insights: how to play Monopoly, how profitable collecting Lego Minifigures in Kinder Eggs can be, and how to use Markov chains to generate Pokémon names, IKEA furniture names and Red Hot Chili Peppers lyrics. You can read about all of these topics on his pretty cool blog.
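For a flavour of the Markov chain trick, here is a minimal character-level sketch of my own (the training list is a tiny illustrative sample, and nothing like Vincent's actual code):

```python
import random
from collections import defaultdict

def build_chain(names, order=2):
    # Map each character n-gram to the characters observed after it;
    # "^" pads the start of a name and "$" marks the end
    chain = defaultdict(list)
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            chain[padded[i:i + order]].append(padded[i + order])
    return chain

def generate(chain, order=2, max_len=12):
    # Random-walk the chain until we hit an end marker or max_len
    state = "^" * order
    out = []
    while len(out) < max_len:
        nxt = random.choice(chain[state])
        if nxt == "$":
            break
        out.append(nxt)
        state = state[1:] + nxt
    return "".join(out).capitalize()

pokemon = ["pikachu", "bulbasaur", "charmander", "squirtle", "eevee"]
chain = build_chain(pokemon)
print(generate(chain))  # e.g. something plausibly Pokémon-ish
```

With a large enough corpus (and a higher order), the same few lines produce surprisingly convincing furniture names and song lyrics too.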
I hate you NLP ;) by Katharine Jarmul was a fun introduction to the challenges of sentiment analysis using machine learning. Apparently this is very easy to do with movie reviews (potential idea for YPlan Cinema Club!), but with almost any other corpus, inferring the correct context, the stance of the author and the subject of the expressed emotion, while filtering out sarcasm, humour and interference from emoji, is an interesting but incredibly difficult challenge. For example, how would a computer tell the difference in sentiment between these two similarly-worded YPlan tweets?
Friday got us feelin like… - YPlan (@YPlan), 22 July 2016
Monday's got us like… - YPlan (@YPlan), 11 July 2016
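To illustrate the problem, here is a deliberately naive bag-of-words scorer (the word lists are my own invention, not a real sentiment lexicon). It assigns the two tweets opposite polarity purely because of the weekday words, while completely missing that both express the same tongue-in-cheek humour:

```python
import re

# Toy word lists -- purely illustrative, not a real lexicon
POSITIVE = {"love", "great", "happy", "feelin", "friday"}
NEGATIVE = {"hate", "awful", "sad", "monday"}

def naive_sentiment(text):
    # Bag-of-words scoring: positive words minus negative words,
    # completely blind to context, sarcasm, humour and emoji
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(naive_sentiment("Friday got us feelin like..."))  # → 2
print(naive_sentiment("Monday's got us like..."))       # → -1
```

Real systems use trained classifiers rather than word lists, but the underlying difficulty - that sentiment lives in context, not in individual tokens - is exactly what the talk was about.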
Python Under the Hood
One of Python's core developers, Larry Hastings, gave an "exceedingly technical" talk on how he has performed The Gilectomy, i.e. the removal of Python's (in)famous Global Interpreter Lock. The presence of the GIL prevents Python from capitalising on multiple CPUs, but without it CPython runs significantly slower. The talk gave a good conceptual overview of the steps involved in removing the GIL, and of the technical considerations around reference counting and C extensions that have never been run in a world of multiple threads. For more details, it's better that you hear it explained by the man himself.
Anjana Vakil gave a very interesting introduction to Python bytecode and how to use the dis module to see how Python code translates into bytecode. This was then used to illustrate why code that sits inside a function runs faster than the same code written at the module level.
```python
# Can you guess what `my_test_function` is doing?
import dis
dis.dis(my_test_function)
# Disassembly of my_test_function:
#  2           0 LOAD_CONST               1 (0)
#              3 STORE_FAST               0 (x)
#
#  3           6 SETUP_LOOP              30 (to 39)
#              9 LOAD_GLOBAL              0 (range)
#             12 LOAD_CONST               2 (1000)
#             15 CALL_FUNCTION            1 (1 positional, 0 keyword pair)
#             18 GET_ITER
#        >>   19 FOR_ITER                16 (to 38)
#             22 STORE_FAST               1 (i)
#
#  4          25 LOAD_FAST                0 (x)
#             28 LOAD_FAST                1 (i)
#             31 INPLACE_ADD
#             32 STORE_FAST               0 (x)
#             35 JUMP_ABSOLUTE           19
#        >>   38 POP_BLOCK
#        >>   39 LOAD_CONST               0 (None)
#             42 RETURN_VALUE
```
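The short answer to the function-vs-module speed question is that local variables inside a function are accessed by array index (LOAD_FAST/STORE_FAST), while module-level names go through a dictionary lookup (LOAD_NAME/STORE_NAME). A small sketch using dis to confirm the difference:

```python
import dis

def in_function():
    x = 0
    for i in range(1000):
        x += i

# Locals inside a function compile to the *_FAST opcodes
# (array-indexed slots in the frame)...
func_ops = {ins.opname for ins in dis.get_instructions(in_function)}

# ...while the same code compiled at module level uses *_NAME opcodes
# (lookups in the module's namespace dict)
module_code = compile("x = 0\nfor i in range(1000):\n    x += i\n",
                      "<module>", "exec")
module_ops = {ins.opname for ins in dis.get_instructions(module_code)}

print("STORE_FAST" in func_ops)    # → True
print("STORE_NAME" in module_ops)  # → True
```

That dict lookup on every access is the overhead you pay for running hot loops at module level.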
Coding Tips and Best Practices
Effective Code Review by Dougal Matthews drew on several academic studies to show that code review is at least as effective at defect removal as unit, functional or integration tests. He highlighted that although bug spotting is the main goal of code review, additional benefits such as knowledge transfer, increased team awareness and open discussion of alternative solutions are also very important. Dougal's top tips were:
- Don't start with code - first discuss what your pull request is trying to achieve, get everyone on the same page
- Small and contained pull requests
- Relinquish ownership and don't be overprotective, your masterpiece belongs to the reviewers too
- Code contributions are like puppies - everyone loves puppies, but you need to walk them, look after them and feed them; similarly, contributions need love and care
- Multiple reviewers for the win
- Be polite and aware of tone, rephrase "Why didn't you do....?" as "Could we do this...?"
- Automate what you can (e.g. use linters and style guides)
- Be kind to yourself as a reviewer - limit how long you spend on code review
Writing Faster Python (And not hating your job as a software developer) by Sebastian Witowski pointed out the different levels at which you can optimise your code: from design, algorithm and data structure considerations higher up, down to the lower levels of compile time and runtime. We also got a refresher on the rules of optimisation:
1. Don't
2. Don't yet (finish your code and tests first)
3. Profile (you can't optimise something that is not measured)
Seb also reminded us that optimisation should not come at the expense of readability:
"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live."
It was really interesting to see the extent of time difference in alternative methods that achieve the same results. Here are just a few examples run using IPython:
```python
import random

ONE_THOUSAND_ELEMENTS = random.sample(range(1000), 1000)

# Counting elements in a list
def slow_count(my_list):
    count = 0
    for _ in my_list:
        count += 1
    return count

%timeit slow_count(ONE_THOUSAND_ELEMENTS)
# 39.5 µs per loop

%timeit len(ONE_THOUSAND_ELEMENTS)
# 67.6 ns per loop
```
```python
# Filtering a list
def slow_filter(my_list):
    output = []
    for item in my_list:
        if item % 2:
            output.append(item)
    return output

%timeit slow_filter(ONE_THOUSAND_ELEMENTS)
# 116 µs per loop

%timeit list(filter(lambda item: item % 2, ONE_THOUSAND_ELEMENTS))
# 183 µs per loop

%timeit [item for item in ONE_THOUSAND_ELEMENTS if item % 2]
# 89 µs per loop
```
List comprehensions for the win!
```python
# Sometimes it's better to ask forgiveness than permission...
class Foo(object):
    hello = 'world'

foo = Foo()

def ask_permission(obj):
    if hasattr(obj, 'hello'):
        return obj.hello

def ask_forgiveness(obj):
    try:
        return obj.hello
    except AttributeError:
        pass

%timeit ask_permission(foo)
# 197 ns per loop

%timeit ask_forgiveness(foo)
# 107 ns per loop

# ...unless your objects don't have the attribute you are looking for
class Bar(object):
    pass

bar = Bar()

%timeit ask_permission(bar)
# 201 ns per loop

%timeit ask_forgiveness(bar)
# 611 ns per loop
```
Eskerrik asko / Muchas gracias to the organisers of EuroPython 2016 for such an enjoyable and informative week. I would not have been able to attend were it not for the generosity of the Python Software Foundation in granting my scholarship ticket and of YPlan in sponsoring my travel and accommodation. Bring on PyCon UK!