Backend Software Engineer

👋 I’m Cedric Chee. I’ve been a software engineer, AI engineer, writer, and entrepreneur.

I code and write about it sometimes. I create system software and apps in Go/JS.

I do product engineering and web development at startups and in consulting. I enjoy backend development.

At night, I tinker with systems programming in Rust.

Read more on the about page →

Recent Posts

Meditation

Integrate mindfulness into your everyday life.

  1. Non-reactivity - body sensation
  2. Non-reactivity - working with sound
  3. Non-reactivity - thinking
  4. Non-reactivity - emotion
  5. The judging mind
  6. Mental noise as addiction
  7. External cues as mindfulness reminders
  8. Beginner’s mind
  9. One step at a time
  10. Grasping and aversion
  11. Riding the waves
  12. Letting go
  13. Balance and nourishment
  14. Questioning our thoughts
  15. Catching ourselves
  16. Leaning in
  17. Allowing
  18. Impermanence
  19. Loving kindness
  20. Maintaining momentum
  21. Practice in each moment

Change Habits

My notes on a self-improvement plan.

Ways to change habits:

  1. Get more sleep
  2. Make time to exercise
  3. Drink more water
  4. Eat less sugar
  5. Stay teachable
  6. Read and write more
  7. Remove clutter
  8. More random acts of kindness
  9. Don’t respond to negativity
  10. Spend quality time with family
  11. Show gratitude
  12. Forgive first

Building the Software 2.0 Stack

My quick notes from the “Building the Software 2.0 Stack” talk by Andrej Karpathy at the Train AI 2018 conference - machine learning for a human world.

Training Datasets

The part about building and managing datasets is very interesting. We don’t often get to hear about these problems.

Software 2.0 Integrated Development Environments (IDEs)

What will IDEs, including code editors, look like?

  • Show a full inventory or statistics of the current dataset.
  • Create or edit annotation layers for any datapoint.
  • Flag, escalate & resolve discrepancies in multiple labels.
  • Flag and escalate datapoints that are likely to be mislabeled (see the sketch after this list).
  • Display predictions on an arbitrary set of test datapoints.
  • Autosuggest datapoints that should be labeled.
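
As one illustration of those last few features, here is a minimal sketch in plain NumPy of flagging datapoints that are likely mislabeled, assuming we already have the model’s prediction probabilities for each labeled datapoint (the function name and threshold are hypothetical):

import numpy as np

def flag_possible_mislabels(probs, labels, threshold=0.9):
    """Flag datapoints whose label disagrees with a confident prediction.

    probs: (n_samples, n_classes) array of model prediction probabilities.
    labels: (n_samples,) array of integer labels from annotators.
    Returns the indices of datapoints worth escalating for human review.
    """
    predicted = probs.argmax(axis=1)  # model's most likely class
    confidence = probs.max(axis=1)    # model's confidence in that class
    suspicious = (predicted != labels) & (confidence >= threshold)
    return np.flatnonzero(suspicious)

# Datapoint 1 is labeled 0, but the model says class 1 with 95% confidence.
probs = np.array([[0.80, 0.20], [0.05, 0.95], [0.60, 0.40]])
labels = np.array([0, 0, 0])
print(flag_possible_mislabels(probs, labels))  # -> [1]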

AMSGrad Optimizer

A Keras implementation of the AMSGrad optimizer from the paper “On the Convergence of Adam and Beyond”.
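
For reference, AMSGrad modifies Adam by keeping a running maximum of the second-moment estimate, so the effective step size can never increase between updates. With gradient $g_t$ and step size $\alpha_t$:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{v}_t &= \max(\hat{v}_{t-1},\, v_t) \\
\theta_t &= \theta_{t-1} - \frac{\alpha_t\, m_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned}
$$

Like Keras’s Adam, the implementation below folds the bias correction into the effective learning rate (the lr_t term).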

from keras import backend as K
from keras.legacy import interfaces
from keras.optimizers import Optimizer


class AMSgrad(Optimizer):
    """AMSGrad optimizer.

    Default parameters follow those provided in the Adam paper.

    # Arguments
        lr: float >= 0. Learning rate.
        beta_1: float, 0 < beta < 1. Generally close to 1.
        beta_2: float, 0 < beta < 1. Generally close to 1.
        epsilon: float >= 0. Fuzz factor.
        decay: float >= 0. Learning rate decay over each update.

    # References
        - [On the Convergence of Adam and Beyond](https://openreview.net/forum?id=ryQu7f-RZ)
        - [Adam - A Method for Stochastic Optimization](http://arxiv.org/abs/1412.6980v8)
    """

    def __init__(self, lr=0.001, beta_1=0.9, beta_2=0.999,
                 epsilon=1e-8, decay=0., **kwargs):
        super(AMSgrad, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.iterations = K.variable(0, dtype='int64', name='iterations')
            self.lr = K.variable(lr, name='lr')
            self.beta_1 = K.variable(beta_1, name='beta_1')
            self.beta_2 = K.variable(beta_2, name='beta_2')
            self.decay = K.variable(decay, name='decay')
        self.epsilon = epsilon
        self.initial_decay = decay

    @interfaces.legacy_get_updates_support
    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = [K.update_add(self.iterations, 1)]

        lr = self.lr
        if self.initial_decay > 0:
            lr *= (1. / (1. + self.decay * K.cast(self.iterations,
                                                  K.dtype(self.decay))))

        t = K.cast(self.iterations, K.floatx()) + 1
        lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) /
                     (1. - K.pow(self.beta_1, t)))

        ms = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vs = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params]
        vhats = [K.zeros(K.int_shape(p), dtype=K.dtype(p)) for p in params] 
        self.weights = [self.iterations] + ms + vs + vhats

        for p, g, m, v, vhat in zip(params, grads, ms, vs, vhats):
            m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
            v_t = (self.beta_2 * v) + (1. - self.beta_2) * K.square(g)
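            # AMSGrad: take the running maximum of v_t so the denominator
            # below never shrinks (this is the only change from Adam).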
            vhat_t = K.maximum(vhat, v_t)
            p_t = p - lr_t * m_t / (K.sqrt(vhat_t) + self.epsilon)

            self.updates.append(K.update(m, m_t))
            self.updates.append(K.update(v, v_t))
            self.updates.append(K.update(vhat, vhat_t))
            new_p = p_t

            # Apply constraints.
            if getattr(p, 'constraint', None) is not None:
                new_p = p.constraint(new_p)

            self.updates.append(K.update(p, new_p))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr)),
                  'beta_1': float(K.get_value(self.beta_1)),
                  'beta_2': float(K.get_value(self.beta_2)),
                  'decay': float(K.get_value(self.decay)),
                  'epsilon': self.epsilon}
        base_config = super(AMSgrad, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
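
A quick usage sketch, assuming Keras 2.x with the legacy optimizer API (the toy model is a hypothetical placeholder; any Keras model plugs in the same way):

from keras.models import Sequential
from keras.layers import Dense

# Toy classifier, just to show where the optimizer slots in.
model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])
model.compile(optimizer=AMSgrad(lr=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])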

Google Colab makes free GPUs available for folks to try out

Google-owned Kaggle adds free GPUs to online coding service.

Google says Colaboratory, its live coding mashup that works like a cross between a Jupyter Notebook and a Google Doc, now comes with free GPUs. Users can write a few code snippets, detailed here, and get access to two vCPUs with 13GB of RAM and, the icing on the cake, an NVIDIA K80 GPU, according to a comment from an account linked to Michael Piatek at Google.
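
A minimal way to confirm a GPU was actually assigned from inside a notebook, assuming TensorFlow is preinstalled (as it is on Colab):

import tensorflow as tf

# Prints a device string such as '/device:GPU:0' when a GPU (e.g. the K80)
# is attached, or an empty string when the notebook is CPU-only.
print(tf.test.gpu_device_name())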

Access Colaboratory here.

This is awesome! This definitely increases the usability of Colab notebooks, which I have already started using in my day-to-day experiments. I especially like the ability to save multiple checkpoints - a feature sorely lacking in standard Jupyter Notebooks.

Free GPUs with very few strings attached:

  • Jupyter notebooks have a max running time of 12 hours, according to someone (Michael Piatek?) on the Colab team. What this means is that the 12-hour limit is for continuous assignment of a single GCE VM, and it applies to both CPU and GPU machines. There’s no per-day limit, so if you use one VM for 12 hours, you can use a distinct VM afterwards for another 12 hours.

I’d been using it for a few weeks back when only CPUs were available. Like Kaggle, it’s just so quick to get into the environment and start coding.

Questions remain:

  • I wonder if there will be an option in the future to purchase instance upgrades (for example, to include GPU resources).
  • Any chance you are throwing some TPUs our way? :)
  • I looked around and didn’t see the software capabilities listed anywhere.

Playing around in the environment, I think I saw that the VM runs Ubuntu 17.10.

I hope they keep it forever free and keep Bitcoin miners from abusing it.

The Art of Getting into Machine Learning

How do you get into this field of work, where you can actually work on developing and applying machine learning algorithms every day?

The following advice is based on this reply on HN:

I’m probably the worst example of how to get into this field of work, but since I do actually work on developing and applying ML algorithms every day, I think my case might be relevant. Firstly, my background is not in mathematics or computer science whatsoever; I’m a classically trained botanist who came at programming, computer science, and ML from the perspective of “I’ve got questions I want to ask and techniques I want to apply that I’m currently underprepared to answer.”

Working as a technician for the USDA, I learned programming (R and Python) primarily because I needed a better way to deal with large data sets than Excel (which, prior to 5 years ago, was all I used). At some point I put my foot down and decided I would go no further until I learned to manage the data I was collecting programmatically. The data I was collecting were UAV imagery and field and spectral reference data, specifically regarding the distribution of invasive plant species in cropping systems. The central thrust of the project was to automatically detect and delineate weed species in cropping systems from low-altitude UAV collects. This eventually folded into a master’s degree continuing to develop this project. That folded into additional projects applying ML methods to feature discrimination in a wide range of data types. Currently I work for a geospatial company, doing vegetative classification in a wide range of environments with some incredibly interesting data (sometimes).

I think you’ve got the issue a bit backwards, cart before horse. In a sense, I see you as having a solution but no problem to apply it to. The methods are ALL there, and there are plenty of other posts in this thread addressing where to learn the principles of ML. What this doesn’t offer you is the why: why you should care about a thing. My recommendation would be to find something of personal interest to you in which ML may play a role.

Without a good reason to apply the techniques that everyone else here is outlining, I think it would be too challenging to keep up the level of interest and energy required to realize how to apply these concepts. Watching lectures, reading articles, and doing coursework are all very important, but they shouldn’t be thought of as a replacement for having personally meaningful work to do. Meaningful work will do more to drive your interests than anything.

Such solid advice. A great approach to getting into a new field and sticking with it for the long term. I think this is applicable to any field, not just machine learning.

Computer Science and Mathematical Reasoning

What is intelligence and learning from the perspective of computer science and mathematical reasoning?

Fifth 'Hello World' Post

First blog post.