Rainer Hahnekamp

QCon London 2018

QCon London is a tech conference that lasts three days and covers a wide range of topics. Each day is divided into five tracks plus two additional ones for AMA (Ask Me Anything) sessions and sponsored talks. Each track covers a topic like “Serverless” or “Performance”, and each track has a host who organises the talks and selects the speakers.

Since QCon records most of the talks, I focused more on the AMA and Open Space sessions. Talks teach you new things, but those don't always affect you immediately. In the Open Space and AMA sessions, I had the opportunity to talk to speakers and other attendees about the problems I face on a daily basis. I could speak up often, as only 5 to 15 people attended each of these sessions.

The organisers also offered a pub crawl on the first day, which, as everybody can imagine, was great fun and brought me some new acquaintances.

I can only emphasise the importance and value of getting into contact with your peers. I think the organisers understand that well, and I can only encourage them to build on it even more, perhaps with a buddy programme or a social event the day before the conference starts.

What follows is my summary of selected talks:

QCon 2018

Rob Harrop: Artificial Intelligence and Machine Learning for the SWE

The keynote was a motivational speech encouraging software developers to learn the basics of artificial intelligence (AI) and machine learning (ML). It is a competency that software engineers should know about for various reasons. Mr. Harrop sees it as a tool that enables us to create things that weren't possible before. This is in contrast to technologies or patterns, like microservices, that only improve existing applications.

There is no magic. The same principles we find in software engineering apply to ML.

He also emphasised the importance of having all required competencies within one team. This aligns with developments of the last few years: we saw, for example, Operations and Development merge into DevOps. The same goes for ML; it should not sit outside the team.

To learn it, you have to invest in theory, practice and, most importantly, intuition. He suggests starting with PyTorch instead of TensorFlow, since it leans more towards classical programming.

He particularly recommended the book “Doing Data Science” and Andrew Ng's Coursera course “Deep Learning Specialization”.

Gil Tene: Java at Speed

Mr. Tene stressed the fact that Java code starts out interpreted, gets compiled and is eventually optimised to run fast. A huge benefit of a JIT compiler is that, because it runs on the end user's machine rather than on a vendor's build machine, it can take specific CPU features into consideration. This does not just mean 32-bit versus 64-bit, but also things like the different features of each Intel chip generation.

Optimisations a JIT compiler can perform include dead-code elimination, code reordering and value propagation, which is where the `volatile` keyword becomes important.
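Here is a minimal sketch of why `volatile` matters: without it, the JIT is allowed to propagate the initial value of the flag and hoist the read out of the loop, so the worker thread might never see the update.

```java
public class VisibilityDemo {
    // Without 'volatile' the JIT may propagate the initial value of 'running' and
    // hoist the read out of the loop, so the worker might never observe the update.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {       // with volatile, every iteration re-reads the field
                iterations++;
            }
            System.out.println("stopped after " + iterations + " iterations");
        });
        worker.start();

        Thread.sleep(100);
        running = false;            // visible to the worker because the field is volatile
        worker.join();
    }
}
```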

He also gave advice on so-called microbenchmarking. The JIT can potentially optimise small code snippets so well that the code doesn't even have to run and you get the result immediately. To get trustworthy numbers, use the tool JMH (the Java Microbenchmark Harness).
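A minimal JMH sketch, assuming the JMH annotations and runtime are on the classpath; handing the result to a Blackhole is what stops the JIT from optimising the whole loop away:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class SumBenchmark {

    @Benchmark
    public void sum(Blackhole blackhole) {
        long total = 0;
        for (int i = 0; i < 1_000; i++) {
            total += i;
        }
        // Passing the result to the Blackhole prevents dead-code elimination;
        // without it, the JIT could remove the whole loop and the benchmark would lie.
        blackhole.consume(total);
    }
}
```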

In contrast to static compilers, JIT compilers can do speculative optimisations for a very simple reason: they can always throw away and recompile the code. The JIT compiler can therefore take more risks as it tries to squeeze out even more performance.

A typical speculative optimisation is removing null pointer checks. Should a null actually occur, the compiler has to react, which is a far more expensive operation than the original null check. Still, as nulls happen far less frequently than non-nulls, the net result is a performance improvement.

One quite well-known performance pattern is the so-called warm-up. Execution of the application is simulated with fake data, so that the code gets compiled before the application really starts serving traffic. A good example is trading, where the application is warmed up before the markets open.
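A minimal sketch of the pattern, with a made-up `price` method standing in for the real hot path:

```java
public final class WarmUpExample {

    // Hypothetical hot path we want the JIT to compile and optimise before real traffic arrives.
    static double price(double quantity, double unitPrice) {
        return quantity * unitPrice * 1.002; // fictitious fee
    }

    public static void main(String[] args) {
        double sink = 0;
        // Warm-up phase: drive the hot path with fake data so it gets compiled.
        for (int i = 0; i < 50_000; i++) {
            sink += price(i % 100, 1.0 + (i % 7));
        }
        System.out.println("warm-up checksum: " + sink);

        // From here on, real requests hit code that is already compiled.
    }
}
```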

The problem is that these warm-up runs can differ from the real usage. As a result, the JIT optimises for the fake workload, which can later trigger a deoptimisation. A better solution might be to store the profile of a past real execution and reuse it.

If you want to dig deeper into this topic, he recommended the book “Java Performance” and the articles of Brendan Gregg.

Gil Tene showing assembler code generated by the JIT compiler

Dave Farley: Taking Back “Software Engineering”

Dave Farley is a veteran of the software industry, known for his book “Continuous Delivery”. The main theme of his talk was to rely on proper experimentation instead of simple guessing. That is what makes an engineer.

There are many different disciplines in engineering. One can't, for example, compare chemical engineering to aircraft engineering. Still, they are all engineers. Software engineering is no different: it must base itself on experimental data rather than guesswork.

Along those lines, Mr. Farley declared that Scrum and craftsmanship are project processes, not software engineering processes per se.

He is critical of the craftsmanship metaphor because of the lower-quality results it implies: hand-made, individual work usually can't reach the same level of quality as automated, machine-based production.

Sadly, that is where software engineering stands today. It shouldn't be.

Dave Farley explaining the difference between iterative and incremental

Matt Renney: Inside a Self-Driving Uber

This presentation was about the current state of self-driving cars at Uber. It was heavily packed with marketing material.

Uber has, of course, a major business interest in driverless cars. Their target is not a self-driving car that can go everywhere; there are limits. With trucks in particular, Uber targets long-distance routes.

Mr. Renney showed statistics claiming that self-driving cars will not reduce the number of driver jobs. As a personal note, common sense tells me that this statement cannot be true.

Uber does not use GPS because of its inaccuracy. Instead, they rely on lidar combined with stored map data. The three basic inputs for Uber’s self-driving algorithms are camera, radar and lidar.

Testing is the main obstacle because the software interacts with the real world: if something breaks, it causes physical damage. For that reason, Uber has a private track where they can test their cars. This kind of testing is of course very expensive and takes a lot of time. Compared to classical software testing, the achievable coverage is much more limited.

To address the limitations of real-world testing, Uber created a simulator. It gives them better control over environmental factors. The simulator reproduces situations that happened in the real world by replaying the recorded log data from the cars. Additionally, there is a kind of game engine that randomly generates situations in which the interplay with pedestrians plays a huge role.

Mr. Renney closed his talk with the note that Uber is further along with their research than one might think.

Uber and Self-Driving Cars

Panel: Building Great Engineering Cultures

This panel discussion, part of the track about engineering culture, was attended mainly by managers.

One problem the panel raised was that 'progressing' in a career often means becoming a manager. Unfortunately, there is no real training for new managers, and a good programmer does not automatically become a good manager, because the job requires a completely different set of skills.

The panelists offered several suggestions to help managers build better cultures.

Allowing people to speak up and to complain is required for building trust.

When managers send emails over the weekend, it puts pressure on employees. They get the feeling that the company will make important decisions without them unless they check their email regularly, even during the weekend.

The panelists sent more divergent signals concerning working from 9 to 5. On the one hand, it can mean that people aren't really focused or motivated. On the other hand, it is important not to burn out and to have a private life.

Panel on Engineering Culture

Simon Ritter: JDK 9: Mission Accomplished. What next for Java?

Modularity was the major change introduced with JDK 9, but Mr. Ritter wanted to draw our attention to the smaller, less well-known changes.

The new release model attaches features to releases with fixed dates: whatever is finished by the date ships. A new release comes out twice a year. The version numbering scheme now has four parts.
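The structured version is also accessible at runtime; here is a minimal sketch using the Runtime.Version API that came with JDK 9:

```java
public class ShowVersion {
    public static void main(String[] args) {
        // Runtime.version() (new in JDK 9) exposes the structured version number.
        Runtime.Version version = Runtime.version();
        System.out.println(version);           // e.g. 10.0.2
        System.out.println(version.version()); // the list of up to four numeric parts
    }
}
```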

Parts of the class library were declared internal and encapsulated in modules, since they shouldn't be used. The main issue is that, unfortunately, many developers rely on some well-known classes like `sun.misc.Unsafe`.

Applets were deprecated in JDK 9. The same is true for the meta-module java.se.ee, which includes the packages for web services. Proprietary features like Flight Recorder and Mission Control are currently being open sourced. The goal is that there should be no difference between the Oracle binaries and OpenJDK.

Other changes in JDK 9 Mr. Ritter mentioned include:

The “Big Kill Switch” (the `--illegal-access` option) lets you disable the strong encapsulation of the module system.
jdeps and jlink were mentioned as important tools.
java.se is a meta-module, i.e. a group of modules.
LTS releases are only meant for customers who have bought commercial support.
Oracle will also drop support for 32-bit and ARM builds.

Mr. Ritter addressed some of the 109 features coming in the 20 March release of JDK 10. There will be a major improvement in type inference, as the `var` keyword enables “local variable type inference”. In addition, Graal, a new JIT compiler written in Java itself, ships with JDK 10 marked as experimental.
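A small sketch of what local variable type inference looks like in practice:

```java
import java.util.List;
import java.util.Map;

public class VarDemo {
    public static void main(String[] args) {
        // JDK 10: the compiler infers the type of a local variable from its initialiser.
        var names = List.of("Ada", "Grace");    // inferred as List<String>
        var years = Map.of("qcon", 2018);       // inferred as Map<String, Integer>

        for (var name : names) {                // also allowed in for-loops
            System.out.println(name + " " + years.get("qcon"));
        }
    }
}
```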

A funny fact is that there will even be a de-deprecation: XMLInputFactory.newFactory() loses its deprecated status.

Mr. Ritter also presented an outlook on JDK 11: the CORBA and Java EE modules, already deprecated, will be removed.

Laura Bell: Guardians of the Galaxy: Architecting a Culture of Secure Software

Laura Bell took the Guardians of the Galaxy as a metaphor because they fight without any special training. Continuous security requires integrating security into the development process rather than making it a separate function.

Security usually lies in the hands of a group working in isolation. That is a hard position to be in, since these people are held responsible if something goes wrong. Consequently, the work is not very enjoyable.

The builders usually know best how to fix security flaws in their product, so there is no need for a dedicated security person. After all, security specialists don't have the same deep technical insight as the authors of the code.

Good tools include dependency checkers, static analysis tools and vulnerability scanners. It is important to respect those tools and stop when they flag a problem. Treat it as a build failure.

How can you achieve this? Hiring, of course, is a good starting point, but don't go primarily for the best education. Look at the mindset. Candidates should be able to think about bad things; after all, one has to have some idea of how to break a system.

Once you have hired good people, you need to retain them. A good suggestion is not to fire people for failures.

A blameless culture! That is very hard, especially in security.

Sander Mak: Modular Java Development in Action

This was a great talk for everyone who is about to migrate their codebase to Java 9.

The problem with a migration to Java 9 is not only modularisation; there are other issues too. Often, an application won't run on Java 9 even though it isn't defined as a module at all.

Code on the classpath without a module definition automatically ends up in the so-called unnamed module.

Mr. Mak showed an extremely simple Spring application that failed to start on Java 9. Issues arose from split-package conflicts and from code that relied on the so-called enterprise modules.

Since java.xml.bind is included in java.se.ee, for example, it is not part of the default module java.se.
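Here is a minimal sketch of how that bites in practice; the `Note` class is made up:

```java
// Demonstrates the "enterprise module" problem: javax.xml.bind lives in the
// java.xml.bind module, which belongs to java.se.ee and is not resolved by default.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;

public class JaxbProbe {

    public static class Note {
        public String text = "hello";
    }

    public static void main(String[] args) throws JAXBException {
        // On JDK 9 this fails ("package javax.xml.bind is not visible" at compile time,
        // NoClassDefFoundError at run time) unless the module is added explicitly:
        //   javac --add-modules java.xml.bind JaxbProbe.java
        //   java  --add-modules java.xml.bind JaxbProbe
        JAXBContext context = JAXBContext.newInstance(Note.class);
        System.out.println(context);
    }
}
```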

A good solution is to look for alternative modules, since the enterprise modules are deprecated and will be removed in Java 11. The `--jdk-internals` option of `jdeps` helps by suggesting replacements for encapsulated JDK-internal classes.

Using the `exports` and `opens` keywords in the module declaration is very important, especially when working with frameworks like Spring that rely heavily on reflection. Without `opens`, Spring could not instantiate beans from other modules.
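A minimal module-info.java sketch; the module and package names are made up, and `spring.context` stands in for whichever framework modules the application really needs:

```java
// module-info.java
module com.example.shop {
    requires spring.context;        // hypothetical dependency on a Spring module

    exports com.example.shop.api;   // readable by other modules at compile and run time
    opens com.example.shop.domain;  // open for deep reflection only, e.g. for dependency injection
}
```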

If your application has a module definition, all the libraries it uses must be modules too. If that is not the case, the so-called automatic module mechanism simply turns a plain JAR file on the module path into a module that exports everything.

Still, you have to add transitive module dependencies explicitly. Instead of declaring them in module-info.java, pass these modules to the Java compiler and runtime with the `--add-modules` option.

Sander Mak summarising the Java 9 migration process

Justin Cormack: The Modern Operating System in 2018

Linux and Windows are the main candidates for server use. Operating systems are the last large monoliths.

Unikernels are an alternative. You can run them on bare metal and include only the libraries and drivers your application needs. A working example of the approach is Microsoft's SQL Server for Linux.

SSDs and 10 Gb networks brought tremendous performance improvements and forced operating systems to adapt to the new hardware speeds.

One approach is to avoid the slow switches between kernel and userspace by running all the code in userspace. Seastar is a framework that makes drivers available in userspace.

The other option is to run everything in the kernel. eBPF is a framework for that strategy and has been described as “AWS Lambda for the Linux kernel”.

Another factor that has changed over the last decades is scarcity. In the early days, many users shared one machine, so it was important that the OS managed memory and I/O bandwidth fairly. These days, it is more important that the operating system can support SLAs and improve tail latency.

Security is still an open issue.