For this Humans of Machine Learning (#humansofml) interview, I’m super excited to share my conversation with Emil Wallner. Emil is living, breathing proof that it’s possible to pursue serious AI research as a self-taught creator. He currently does machine learning research at Google Arts & Culture and is an independent researcher in reasoning.
This episode of Humans of ML is special because Emil got his start in AI at FloydHub. Emil created Screenshot-to-code, a popular open-source project that translates design mock-ups into HTML/CSS. He was later the subject of a short film made by Google for his work on automated colorization. He previously worked for the University of Oxford and co-founded a seed investment firm that focuses on education technology.
image source: https://blog.google/technology/ai/creative-coder-adding-color-machine-learning/

[Alessio]: You don’t have what we could consider a “standard education” in either AI or CS, despite your deep domain expertise. This is very unconventional in a field where academic pedigree was considered to carry all the weight. I’d love to walk through your journey in AI.

[Emil]: Where would you like to start?

[Alessio]: Looking at your past experiences, it’s fascinating to see such diversity in what you’ve pursued. By the way, I really love your CV – the quirks section was especially fun to read. Could you tell us a little about your pre-AI life?

[Emil]: In my early teens, I was more focused on developing a theory about personal development than studying for exams. When I finished high school, I put it to the test. I moved from Sweden to Ghana, West Africa. I started working as a teacher in the countryside, and after I invoked the spirit of their dead chief, the villagers anointed me king of their village.
My travels exposed me to a lot of social issues, which led me into social entrepreneurship in my mid-twenties. I started working with the Skoll Centre for Social Entrepreneurship at the University of Oxford. One thing led to another, and I ended up co-founding an investment firm to fund education initiatives.
[Alessio]: Makes sense. How did studying programming lead you to ML/DL?

[Emil]: I spent six months programming in C and then did a deep learning internship at FloydHub.
During my internship, I spent my first two months playing with models and implementing the core deep learning algorithms from scratch. I then spent two months colorizing images with neural networks, and finished my internship with a project to translate design mock-ups into HTML. You can read about what I did on the FloydHub blog. That was the launch of the AI phase of my career.
These peer-to-peer universities are in their early stages, and many still prefer exam-based motivation. However, they are getting better by the day, and I’m confident that they will become mainstream within the coming decade.
[Alessio]: Can you elaborate more on the signaling in the self-taught process? In other words, how can we recognize when we are on the right track or pursuing the right learning experience?

[Emil]: Creating value with your knowledge is evidence of learning. I see learning as a by-product of trying to achieve an intrinsic goal, rather than an isolated activity to become educated.
Early evidence of practical knowledge often comes from usage metrics on GitHub or reader metrics from your blog. Progress in theoretical work starts with having researchers you consider interesting engage with your work. Taste has more to do with character development than knowledge. You need taste to form an independent opinion of a field, the courage to pursue unconventional areas, and the discipline not to get caught up in self-admiration. Taste is related to the impact your work has.
Many small and medium-sized enterprises prefer portfolios over degrees. When it comes to larger companies, it becomes more of an art than a science.
At large companies, less than a few percent of ML hires are self-taught. Of those, most don’t come through the classic hiring channels. Due to the volume of university applicants a large company faces, it’s harder for them to adjust to portfolio-centric hiring. It’s not an easy problem; here are the rough guidelines I shared earlier:
– 100% transparent requirements
– No cover/recommendation letters, nor theory questions
– Offer on-the-job theory training
– Facilitate part-time PhDs and transitions into research roles

— Emil Wallner (@EmilWallner), March

I don’t know what it would look like in practice, but I’d imagine clearly communicating that you have a separate track for portfolio-based hiring, and how you quantify the quality of a portfolio. Think of it as assessing a process rather than asking skill-specific questions. Focus the initial phase on discussing their portfolio in depth. It can also be useful to ask how they would solve a problem step by step – not a brain-teaser with a specific answer, but a more open-ended problem related to their area of expertise.
Depending on the bandwidth of the applicant, it can also be worth doing a take-home exam, followed by a shorter paid contracting assignment.

[Alessio]: That sounds so much more efficient than the typical hiring process. Assuming they can master the art of finding their way into a hiring channel, how can a self-taught applicant increase their chances of getting an offer to work for a big company?

[Emil]: To have a high chance of getting an offer, you need to understand most of Ian Goodfellow’s Deep Learning book and Cracking the Coding Interview, and find a dozen people from big companies to do mock interviews with. If you self-study full-time, it will take around two years. In the end, hiring pipelines at large companies assess your extrinsic motivation – your ability to learn a given body of knowledge. However, you are self-taught because you have strong intrinsic motivation. Forcing yourself to learn a body of knowledge is dreadful. In my case, I think the opportunity cost of studying for interviews is too high. I started working with Google because I reproduced an ML paper, wrote a blog post about it, and promoted it. Google’s brand department was looking for case studies of their products – TensorFlow, in this case. They made a video about my project. Someone at Google saw the video, thought my skill set could be useful, and pinged me on Twitter.
What I’ve seen work is getting good at a niche and letting the world know about it. Have a blog, be active on Twitter, and engage with researchers via email.
Once an employer checks your portfolio, you have 19 seconds to pique their interest and another 40 seconds to convince them you are a fit.
The credentialism-value of a portfolio is proportional to how clear the evidence of your work is and how relevant it is to the employer. Online course certificates are weak because it’s hard for an employer to know how the work was assessed – they assume most people copy and paste the assignments. The same is true for common portfolio items. Group projects are weaker because the employer doesn’t know what you contributed.
Novelty has high credentialism-value because it’s evidence that you have unique knowledge, and it’s clear that it came from you. Reproducing a paper that has no public code is evidence that you can understand machine learning papers. And creating an in-depth blog post about your work creates further evidence that you made a genuine contribution.
To create additional evidence, you can engage in an objective process to assess your work, in the form of machine learning competitions, publishing papers, or sharing it online to see what the broader public thinks of it. Formal research is often measured by publishing first-author papers in high-quality conferences or journals.
That’s the context that led to this thread.
[Alessio]: That leads to another question: how do you develop an identity when you don’t have constraints such as nature and nurture?

[Emil]: You can artificially create constraints to create the illusion of a human-like identity, but artificial identities are probably going to evolve from information and energy constraints.
You can connect with me on Twitter (@emilwallner) and on GitHub (emilwallner).