Decentralized, blockchain-enabled cloud computing is here to stay

Photo by Tanner Boriack on Unsplash

I recently stumbled upon an interesting blockchain project: Akash Network (GitHub). In the simplest terms, Akash is a decentralized cloud computing marketplace. This short article documents the process of deploying my personal site with Akash, compares it with other cloud providers, and shares some thoughts on the future of Akash.

  1. Build your website and containerize
  2. Buy some $AKT and fund deployment wallet
  3. Run a few commands from your machine
  4. ??? Profit


Learn how CNNs work without staring at complex math equations.

Photo by Hanson Lu

Introduction

Each time you unlock your smartphone using Face ID or use real-time Google Translate with your camera, something insane is going on behind the scenes! CNNs are the backbone of many amazing applications and tools that we use all the time. This post will explain the intuition behind the workings of CNNs, without delving into complex probability functions and math equations. Everyone should have an opportunity to learn the basics of these tools, given how deeply ingrained they are in our lives now. For the nerdy folks, here is one of the best explanations, provided by Stanford University.

Convolutional…


Learn the intuition and basic steps of Canny edge detection

Photo by Shashank Sahay on Unsplash

Edge detection is a major component of image processing. Despite advances in deep-learning-based techniques such as Convolutional Neural Networks, which can perform very complex edge detection (e.g. edges with varying curvature, noise, or color), classical edge detection methods are still highly relevant in certain cases! For example, when the data is known to be simple and predictable, a Canny Edge Detector works right out of the box, whereas a CNN is typically more complicated to implement.
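
As a quick illustration, here is a minimal sketch of that out-of-the-box workflow using OpenCV; the file names, blur kernel, and thresholds below are placeholder assumptions, not values from the original post.

```python
import cv2

# Load an image in grayscale (the path is a placeholder assumption)
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# A light Gaussian blur suppresses noise before edge detection
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Canny with example low/high hysteresis thresholds
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)

cv2.imwrite("edges.jpg", edges)
```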

Most classical edge detection algorithms are based on the concept of first derivatives. In the figure below, we…
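
To make the first-derivative idea concrete, here is a tiny sketch with made-up pixel values: a step change in intensity shows up as a spike in the first derivative, which is exactly what derivative-based detectors look for.

```python
import numpy as np

# A 1D intensity profile containing one step edge (values are made up)
signal = np.array([10, 10, 10, 10, 200, 200, 200, 200], dtype=float)

# First derivative: a large magnitude marks the edge location
derivative = np.diff(signal)
print(derivative)                          # [0. 0. 0. 190. 0. 0. 0.]
print(int(np.argmax(np.abs(derivative))))  # 3 -> edge between pixels 3 and 4
```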


Exploring some of the top tricks used by data experts

Photo by Keenan Constance on Unsplash

Data Science is great. The idea of analyzing data for decision-making has been around for many years, but the popularity of data science has exploded alongside the growth of the FAANG companies in recent years. No matter your job title, experience level, or industry, I am confident that you will encounter solutions or products that are highly ‘data-driven’ or powered by Artificial Intelligenceᵗᵐ. Here are the top 4 methods data scientists use to fool others. As a machine-learning researcher and practitioner, I have made these ‘mistakes’ myself in the past, sometimes even unknowingly!

1) Measuring things the wrong way


Learn the intuition behind how GANs work, without the need for complex math equations.

Photo by Mario Gogh on Unsplash

Introduction

GANs (Generative Adversarial Networks) have taken the world of deep learning and computer vision by storm since they were introduced by Goodfellow et al. in 2014 at NIPS. The main idea of GANs is to simultaneously train two models: a generator model G that generates samples based on random noise, and a discriminator model D that determines whether a sample is real or generated by G.
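
As a rough picture of what those two models look like, here is a minimal PyTorch sketch; the layer sizes and architectures are illustrative assumptions, not the models from any particular paper or post.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes (e.g. flattened 28x28 images)

# G maps random noise z to a fake sample
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# D maps a sample (real or fake) to a probability of being real
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)  # a batch of random noise
fake = G(z)                      # generated samples
p_real = D(fake)                 # D's belief that each sample is real
```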

This post will introduce the intuition behind the workings of GANs, without delving too much into the loss functions, probability distributions and math. The focus will be to have a good top-level understanding of…


Find the focal length (in pixels) of your smartphone

Photo by ShareGrid on Unsplash

In the world of Computer Vision (CV), there are many interesting concepts. Deep Convolutional Neural Networks have largely dominated many CV tasks in the past decade. In certain domains, CNNs can outperform humans at tasks like image classification, object detection, and image segmentation. The biggest advantage of CNNs is that they run at scale, putting much of the image data collected by individuals and corporations to good use! Recently, Transformers have also been explored for CV tasks. However, in this post, we will focus on a more ‘old-school’ aspect of Computer Vision: 3D Vision. I…
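
As a preview of the kind of computation involved, the focal length in pixels can be derived from the lens focal length in millimetres and the sensor geometry. The numbers below are illustrative assumptions, not measurements from any particular phone.

```python
def focal_length_pixels(focal_mm: float, sensor_width_mm: float,
                        image_width_px: int) -> float:
    """Convert a lens focal length from millimetres to pixels."""
    return focal_mm * image_width_px / sensor_width_mm

# Illustrative smartphone-like values only
print(focal_length_pixels(focal_mm=4.25,
                          sensor_width_mm=6.17,
                          image_width_px=4032))  # ~2777 pixels
```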


Hands-on Tutorials

Using Generative Adversarial Networks to restore image quality.

Photo by Marvin Meyer on Unsplash

GANs (Generative Adversarial Networks) have taken the world of deep learning and computer vision by storm since they were introduced by Goodfellow et al. in 2014 at NIPS. The main idea of GANs is to simultaneously train two models: a generator model G that captures a certain data distribution, and a discriminator model D that determines whether a sample came from the original distribution or from G.

The GAN framework is like a two-player min-max game. G continually improves to generate images that are more realistic and of better quality. D improves in its ability to determine whether an…
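
For readers who do want one equation, the original paper formalizes this two-player game as a min-max objective, where D is trained to maximize the value while G is trained to minimize it:

```latex
\min_G \max_D V(D, G) =
    \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```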


Create your own cat/dog classifier in no time!

If you are trying to learn about Deep Learning today, there are tons of online courses, books, and materials for that. Then, something like this appears in the very first lesson:

Part of the backpropagation equations

Deep Learning is at its heart a data-analysis technique, so the underlying concepts are definitely math-intensive. However, these complicated equations and formulas are really stressful to look at when we are just trying to learn something new! (Especially if we do not have PhDs in Math or Computer Science, or the last time we did integration was 10 years ago in school.)

This post will be the first part…


Attention-based mechanism to boost your Deep CNNs

Squeeze-and-Excitation Networks (SENet) won the ImageNet Classification Challenge in 2017, surpassing the 2016 winners by a relative improvement of around 25%. SENets introduced a key architectural unit, the Squeeze-and-Excitation Block (SE Block), which was crucial to these gains in performance. SE Blocks can also be easily added to other architectures with low additional overhead.

Photo by Ricardo Viana on Unsplash

Introduction to SE Blocks

Typically, CNNs work by extracting information from the spatial dimensions and storing it in the channel dimensions. This is why the spatial dimensions of feature maps shrink while the number of channels grows as we go deeper in a CNN. …
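
The SE Block itself is compact enough to sketch in a few lines of PyTorch. This is a minimal version following the squeeze (global average pooling) and excitation (bottleneck MLP with a sigmoid gate) structure described in the paper; the reduction ratio of 16 matches the paper's default, while everything else is illustrative.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation block (after Hu et al., 2017)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: collapse each channel's spatial map to one descriptor
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP learns per-channel gates
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)      # squeeze -> (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excitation -> per-channel weights
        return x * w                     # rescale the feature maps

# Example: gate a 32-channel feature map; the output shape is unchanged
features = torch.randn(8, 32, 56, 56)
print(SEBlock(32)(features).shape)  # torch.Size([8, 32, 56, 56])
```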


Create your own Visual Recognition Application within a day.

Photo by Jason Strull on Unsplash

Deep learning and artificial intelligence are among the hottest topics in the world today. We see an ever-increasing number of applications that employ deep learning: facial recognition, speech recognition (Siri, ‘OK Google’, Alexa), self-driving cars, and the list goes on and on. So whatever our role, whether student, fresh employee, team manager, or senior management, we get curious: will this ever-rising wave of AI technology eventually make my job or future career less relevant?

That was actually how I stumbled upon the world of Deep Learning years ago, and ended up where I am today: pursuing a postgraduate degree in this field…

Tee Yee Yang

PhD Candidate at Nanyang Technological University, Singapore
