Friday, November 27, 2020


Emil’s Story as a Self-Taught AI Researcher

This episode of Humans of ML is special because Emil got his start in AI at FloydHub. Emil created Screenshot-to-code, a popular open-source project that translates design mock-ups into HTML/CSS. He was later the subject of a short film made by Google for his work on automated colorization. He previously worked for the University of Oxford, and he co-founded a seed investment firm that focuses on education technology.

In this conversation, we’ll cover Emil’s AI journey and his advice on pursuing a career as a self-taught research scientist. He has had an inspiring and adventurous personal journey – we talk about this as well. Emil is super kind, humble, and full of passion for his work. He was such a pleasure to talk to – I hope you enjoy our chat.

Image source: https://blog.google/technology/ai/creative-coder-adding-color-machine-learning/

[Alessio]: You don’t have what we could consider a “standard education” in either AI or CS, despite your deep domain expertise. This is very unconventional in a field where academic pedigree is considered to carry all the weight. I’d love to walk through your journey in AI.

[Emil]: Where would you like to start?

[Alessio]: Looking at your past experiences, it’s fascinating to see such diversity in what you’ve pursued. By the way, I really love your CV – the quirks section was especially fun to read. Could you tell us a little about your pre-AI life?

[Emil]: In my early teens, I was more focused on developing a theory about personal development than studying for exams. When I finished high school, I put it to the test. I moved from Sweden to Ghana, West Africa. I started working as a teacher in the countryside, but after I invoked the spirit of their dead chief, the villagers later anointed me king of their village.

After I left Ghana, I went back to Sweden to work as a truck driver. I then joined a band and toured the US, Mexico, and Europe. Tl;dr, I spent a few years planning and embarking on personal development adventures, loosely modeled after the Jungian hero’s journey with influences from Buddhism and Stoicism.

From my travels, I was exposed to a lot of social issues, which led me into social entrepreneurship in my mid-twenties. I started working with the Skoll Centre for Social Entrepreneurship at the University of Oxford. One thing led to another, and I ended up co-founding an investment firm to fund education initiatives.

Makes sense. How did studying programming lead you to ML/DL?

I spent six months programming in C and then did a deep learning internship at FloydHub.

During my internship, I spent my first two months playing with models and implementing the core deep learning algorithms from scratch. I then spent two months colorizing images with neural networks, and finished my internship with a project to translate design mock-ups into HTML. You can read about what I did on the FloydHub blog. That was the launch of the AI phase of my career.
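To make “implementing the core algorithms from scratch” concrete, here is a minimal sketch in the spirit of that exercise: a two-layer network with hand-derived backpropagation learning XOR. The architecture, loss, learning rate, and step count are illustrative choices, not Emil’s actual internship code.

```python
import numpy as np

# A two-layer network trained on XOR with backpropagation written by hand.
# Squared-error loss, sigmoid activations, full-batch gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied by hand
    d_out = (out - y) * out * (1 - out)      # dLoss/d(pre-activation) at output
    d_h = (d_out @ W2.T) * h * (1 - h)       # propagated to the hidden layer
    # Gradient descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

Writing the backward pass manually like this, before reaching for autograd frameworks, is one way to internalize what libraries such as TensorFlow do for you.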

You clearly have strong values associated with education and ideas as to how it’s best done. It shows in your personal choices and your work with investments in educational initiatives. Do you think self-education is the future?

Many are realizing that education is a zero-sum credential game. It serves people with a certain type of motivation from roughly the same socio-economic background. I believe most ambitious people have been searching for alternative routes, but there haven’t been any good options. Recently, we’ve seen an increase in universities for autodidacts. They are using software systems to shift from a teacher- and exam-centric system to a peer-reviewed and portfolio-centric system.

These peer-to-peer universities are in the early stages, and many students still prefer exam-based motivation. However, they are becoming better by the day, and I’m confident that they will become mainstream within the coming decade.

Can you elaborate more on the signaling in the self-taught process? In other words, how can we recognize when we are on the right track or pursuing the right learning experience?

Creating value with your knowledge is evidence of learning. I see learning as a by-product of trying to achieve an intrinsic goal, rather than an isolated activity to become educated.

Early evidence of practical knowledge often comes from usage metrics on GitHub, or reader metrics from your blog. Progress in theoretical work starts with having researchers you consider interesting engage with your work. Taste has more to do with character development than knowledge. You need taste to form an independent opinion of a field, to have the courage to pursue unconventional areas, and to not get caught up in self-admiration. Taste is related to the impact your work has.

At large companies, less than a few percent of ML staff are self-taught. Of those, most don’t come through the classic hiring channels. Due to the volume of university applicants a large company faces, it’s harder for them to adjust to portfolio-centric hiring. It’s not an easy problem; here are the rough guidelines I shared earlier:

Attract self-taught AI talent by:

– Hiring based on portfolio and applied ML

– 100% transparent requirements

– No cover/recommendation letters, nor theory questions

– Offer on-the-job theory training

– Facilitate part-time PhDs and transitions into research roles

– Emil Wallner (@EmilWallner)

I don’t know what it would look like in practice, but I’d imagine clearly communicating that you have a separate track for portfolio-based hiring, and how you quantify the quality of a portfolio. Think of it as assessing a process rather than asking skill-specific questions. Focus the initial phase on discussing their portfolio in depth. It can also be useful to ask how they solve a problem step by step – not a brain-teaser with a specific answer, but a more open-ended problem related to their area of expertise.

Depending on the bandwidth of the applicant, it can also be worth doing a take-home exam, followed by a shorter paid contracting assignment.

That sounds so much more efficient than the typical hiring process. Assuming they can master the art of finding their way into a hiring channel, how can a self-taught applicant increase their chances of getting an offer to work for a big company?

To have a high chance of getting an offer, you need to understand most of Ian Goodfellow’s Deep Learning book and Cracking the Coding Interview, and find a dozen people from big companies to do mock interviews with. If you self-study full-time, it will take around two years. In the end, hiring pipelines at large companies assess your extrinsic motivation – your ability to learn a given body of knowledge. However, you are self-taught because you have strong intrinsic motivation. Forcing yourself to learn a body of knowledge is dreadful. In my case, I think the opportunity cost is too high to study for interviews. I started working with Google because I reproduced an ML paper, wrote a blog post about it, and promoted it. Google’s brand department was looking for case studies of their products, TensorFlow in this case. They made a video about my project. Someone at Google saw the video, thought my skill set could be useful, and pinged me on Twitter.

What I’ve seen work is getting good at a niche and letting the world know about it. Have a blog, be active on Twitter, and engage with researchers via email.

Once an employer checks your portfolio, you have 19 seconds to pique their interest and another 40 seconds to convince them you are a fit.

The credentialism-value of a portfolio is proportional to how clear the evidence of your work is and how relevant it is to the employer. Online course certificates are weak because it’s hard for an employer to know how they were assessed – they assume most people copy and paste the assignments. The same is true for common portfolio items. Group projects are weaker because employers don’t know what you contributed.

Novelty has high credentialism-value because it’s evidence that you have unique knowledge, and it’s clear that it came from you. Reproducing a paper that has no public code is evidence that you can understand machine learning papers. And writing an in-depth blog post about your work creates further evidence that you made a genuine contribution.

To create additional evidence, you can engage in an objective process to assess your work, in the form of machine learning competitions, publishing papers, or sharing it online to see what the broader public thinks of it. Formal research is often measured by publishing first-author papers in high-quality conferences or journals.

That’s the context that led to this thread:

Machine learning portfolio tips:

1. Good ideas come from ML sources that are a bit quirky.

– NeurIPS from 2019

– Stanford’s CS n & CS n projects

– Twitter likes from ML outliers

– ML Reddit’s WAYR

– Kaggle Kernels

– Top 20–90% papers on Arxiv Sanity

– Emil Wallner (@EmilWallner) October 29,
How can someone bootstrap into AI research as a self-educated practitioner?

I made this tweet as a rough outline:

If you’re bootstrapping yourself into deep learning research, here’s what I would do:

1. FastAI (3m)

2. Personal projects / reproduce papers / consulting (3– m)

3. Flashcard the Deep Learning Book (4-6m)

4. Flashcard ~ papers in a niche (2m)

5. Publish your first paper (6m) pic.twitter.com/HnNeihMkGl

– Emil Wallner (@EmilWallner) April 2,
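The “flashcard” steps above rest on spaced repetition. As a sketch of the scheduling idea, here is a simplified SM-2-style review scheduler; the constants and interval rules are illustrative, not a faithful Anki or SuperMemo implementation.

```python
# Simplified SM-2-style spaced repetition: each successful review
# stretches the next review interval by an "ease" factor; a failed
# recall resets the card. Constants here are illustrative.

def next_interval(interval_days, ease, quality):
    """quality: 0-5 self-rating of recall; returns (new_interval, new_ease)."""
    if quality < 3:                      # failed recall: start over
        return 1, max(1.3, ease - 0.2)
    # Good recall: nudge ease up for perfect answers, down for shaky ones
    ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    if interval_days == 0:               # first successful review
        return 1, ease
    if interval_days == 1:               # second successful review
        return 6, ease
    return round(interval_days * ease), ease

# Reviewing one card: strong recalls push reviews further apart.
interval, ease = 0, 2.5
for q in [5, 5, 4, 5]:
    interval, ease = next_interval(interval, ease, q)
    print(f"next review in {interval} day(s), ease {ease:.2f}")
```

The payoff for flashcarding a textbook or a set of papers is exactly this exponential stretching: material you keep recalling correctly drops to a review every few weeks, leaving study time for new content.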

What Sutton pointed out was that the best models tend to have few priors. But most, if not all, of today’s building blocks in AI could have been invented with small compute. The industry labs have a significant advantage when it comes to applying these in the real world, but that is mostly a concern for industry as of now.

Because big labs tend to do large scale projects with significant PR efforts, that’s what most end up talking about. As a result, that’s what becomes trendy.

However, if you end up working on problems outside of your compute budget, you start procrastinating by tinkering with hyperparameters. You don’t have enough resources to build an efficient hyperparameter sweep, so you end up doing ad-hoc architecture changes. You are unlikely to contribute, you learn slowly, and you end up with huge compute bills.

Fast experiment cycles are crucial for learning efficiently.

When you are learning, aim for experiments that take less than 19 minutes. If you are building an applied portfolio, it’s fine if an experiment takes a day, since most hyperparameters are already known. If you are doing research, I’d aim to sweep experiments within a few hours. This can often be done by using a smaller benchmark or narrowing the problem scope.
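As an illustration of shrinking the benchmark until runs are fast, here is a toy sweep: a subsampled synthetic dataset and a logistic-regression stand-in for a model, where each learning-rate run finishes in well under a second. The dataset, model, and sweep values are all hypothetical.

```python
import time
import numpy as np

# Toy fast-experiment cycle: subsample the data so a hyperparameter
# sweep over learning rates finishes in seconds, not hours.
rng = np.random.default_rng(1)
X_full = rng.normal(size=(100_000, 20))
w_true = rng.normal(size=20)
y_full = (X_full @ w_true > 0).astype(float)

# Narrow the problem: 2,000 examples is enough to compare learning rates.
X, y = X_full[:2000], y_full[:2000]

def train(lr, steps=200):
    """Full-batch gradient descent on logistic regression; returns accuracy."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return float(((p > 0.5) == y).mean())

results = {}
for lr in [0.01, 0.1, 1.0]:
    t0 = time.time()
    results[lr] = train(lr)
    print(f"lr={lr}: train acc={results[lr]:.3f} in {time.time() - t0:.2f}s")
```

Once the subsampled sweep identifies a promising region, you can re-run only the best setting on the full dataset, which is the point of keeping the inner loop cheap.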

Great advice. What are interesting AI research areas that don’t require too much compute?

I agree with François Chollet’s ‘The Measure of Intelligence.’ We should shift from solving tasks to models that are concerned with skill-acquisition efficiency, i.e. systems that can solve a broad set of tasks that involve uncertainty. Today’s systems achieve local generalization, and there is little evidence that they are useful for human-like intelligence.

A lot of this has to do with areas related to reasoning, here’s what I outlined earlier:

Interesting AI research areas that require little compute:

– Mathematical reasoning

– Novel activation functions

– Models on 1-8 bits

– Working memory

– Micro/macro attention

– 2nd order optimizations

– Model visualizations

– Adaptive computation time

– Neural ODEs

What else? – Emil Wallner (@EmilWallner) June 6,

In addition, I’d like to add Routing Networks and CRL (Composing Representation Transformations) – networks that break a problem into intermediate logic and create specialized models for each step. Unsupervised and curriculum learning will be important to develop Chollet’s idea of ‘Core Knowledge’, a core set of priors to enable reasoning.

What are you hoping to learn or tackle next in your career?

Similar to finding my self-educated path, I want to find my research style. I’m interested in both the macro and the micro: I’m hoping to contribute to AI in relation to creativity, and to improve neural networks’ reasoning capabilities.

Just for kicks, let’s stray more into hypothetical territory before we conclude. How far would you say we are from AGI?

Today’s most sophisticated sequential models can’t generalize to solve addition; however, we are making a lot of improvements in scaling local-generalization models. Since the development of deep learning, we’ve only made marginal improvements in general intelligence. The most significant combined increase in the number of AI researchers, education accessibility, and compute resources will likely happen in the coming decade. Our collective learning curve in AI will flatten out after this point. Hence, by the end of the decade, we’ll have a better understanding of whether there are more general approaches to machine learning.

I prefer to discuss implementations that have data to support a claim. For example, do MAC networks, Neural Turing Machines, and Routing Networks create intermediate logic, or are they large hash tables with locality-sensitive hash functions? Many of the interesting debates center around Chollet’s call to shift from task-specific networks to skill-acquisition efficiency.

Can AI be conscious?

Yes. I think consciousness is a spectrum and is commonly thought of as the point when we become self-aware, which happens roughly at the age of two for humans. It’s a point when we have enough general and high-level abstractions of the world to start forming a coherent identity.

That leads to another question, how do you develop an identity when you don’t have constraints such as nature and nurture? You can artificially create constraints to create the illusion of a human-like identity, but artificial identities are probably going to evolve from information and energy constraints.

You can connect with me on Twitter (@EmilWallner) and on GitHub (emilwallner).
