AWS re:Invent 2019 – Highlights from Machine Learning Workshops on Day Three

This is the third of five articles in the AWS re:Invent 2019 recap series. If you missed any of the posts, you can find them here:

  1. AWS re:Invent 2019 – Highlights from Machine Learning Workshops on Day One
  2. AWS re:Invent 2019 – Highlights from AWS Product Releases on Day Two
  3. AWS re:Invent 2019 – Highlights from Day Four

Panoramic’s Senior Data Scientist Nik Buenning recaps notable moments from the following workshops and sessions at AWS re:Invent 2019:

  • Workshop Review: Deep Learning with PyTorch
  • Session: How to Build High-Performance ML Solutions at Low Cost featuring Mohammed Jamal, Head of Data Science at Aramex 
  • Reinforcement Learning Session: Amazon SageMaker

Workshop Review:
Deep Learning with PyTorch

In this workshop, we learned how to use Amazon SageMaker’s integration with PyTorch. Like TensorFlow, PyTorch is a popular machine learning library that makes it easy to build neural networks. The original layout of the workshop included four natural language processing labs; however, due to technical coding issues and time constraints, we were only able to cover one lab.

This lab was similar to the TensorFlow labs discussed in my previous article, AWS re:Invent 2019 – Highlights from AWS Product Releases on Day Two, in that it involved importing the data, training a model, deploying it to an endpoint, and testing it with inference. The model we trained uses an encoder-decoder, sequence-to-sequence architecture, with Recurrent Neural Networks for both the encoder and the decoder (this architecture is discussed in more detail in my past article on building a chatbot to answer data-related questions). The training data comes from the WikiText-2 dataset, which consists of thousands of Wikipedia articles. The goal of the lab was to create a text generation model. After I built, trained, and deployed my model, I tested it and got the results below:

AWS re:Invent 2019: Deep Learning with PyTorch.

As you can see above, there’s a lot of room for improvement; the model likely needed to be trained longer (i.e., with more training epochs).
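For readers who want a feel for the workflow, the end-to-end SageMaker-plus-PyTorch flow from the lab looks roughly like the sketch below. This is a minimal outline, not the lab’s actual code; the entry-point script name, hyperparameters, and S3 paths are placeholders I’ve made up.

```python
# Minimal sketch of the SageMaker + PyTorch workflow from the lab.
# Script name, hyperparameters, and S3 paths are illustrative placeholders.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker notebook environment

# 1) Import the data: upload the WikiText-2 files to S3 for training.
train_input = session.upload_data(path="wikitext-2", key_prefix="wikitext-2")

# 2) Train: the entry-point script would define the encoder-decoder RNN.
estimator = PyTorch(
    entry_point="train.py",           # placeholder training script
    role=role,
    framework_version="1.3.1",
    train_instance_count=1,           # SDK v1 parameter names (circa 2019)
    train_instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 10, "batch-size": 64},
)
estimator.fit({"training": train_input})

# 3) Deploy to an endpoint, then 4) test it with inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
print(predictor.predict("The history of the Roman Empire"))  # seed text

predictor.delete_endpoint()  # avoid paying for an idle endpoint
```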

How can Panoramic use PyTorch? The Data Science team is currently building a text generation model that would allow our customers to integrate their “comment” data from sources like Facebook. Our model would then generate a summary of the content of that text.

As mentioned before, we were only able to cover one of these labs, but I intend to check out the other three next week. The labs are available on GitHub. The other three labs use Transformer models (e.g., BERT and GPT-2), which are state-of-the-art Natural Language Processing (NLP) models. We intend to dive deeper into Transformer models at Panoramic for our NLP tasks (e.g., chatbots and sentiment analysis classification).
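As a taste of what those Transformer labs cover, here is a hedged sketch of text generation with a pretrained GPT-2 model via the Hugging Face transformers library; this is not the labs’ exact code, which wraps similar steps in SageMaker scripts.

```python
# Sketch: greedy text generation with a pretrained GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Machine learning at re:Invent"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Greedy decoding: append the most likely next token, 30 times.
with torch.no_grad():
    for _ in range(30):
        logits = model(input_ids)[0]                # (1, seq_len, vocab)
        next_id = logits[:, -1, :].argmax(dim=-1)   # most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0].tolist()))
```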

Session:
How to Build High-Performance ML Solutions at Low Cost featuring Mohammed Jamal, Head of Data Science at Aramex

In this session, Mohammed Jamal highlighted the hardships that come with building and operating a machine learning platform, for both data scientists and DevOps teams. The focus of his talk, however, was ways to cut costs when building machine learning tools.

Costs come from a number of different sources. One of the largest is paying team members to develop and deploy machine learning models; therefore, automating as much as possible when building these tools is key. For example, hyperparameter tuning can be done within Amazon SageMaker, which reduces the time and cost of tuning hyperparameters manually.
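As one concrete illustration, automated hyperparameter tuning with the SageMaker Python SDK might look roughly like the sketch below. The metric name, regex, and ranges are placeholders, and `estimator` stands for any configured SageMaker estimator (e.g., the PyTorch one sketched earlier).

```python
# Sketch: automated hyperparameter tuning in SageMaker (SDK v1 style).
# The metric, regex, and ranges below are illustrative placeholders;
# `estimator` is assumed to be a configured SageMaker estimator.
from sagemaker.tuner import (
    HyperparameterTuner,
    ContinuousParameter,
    IntegerParameter,
)

hyperparameter_ranges = {
    "learning-rate": ContinuousParameter(1e-5, 1e-2),
    "batch-size": IntegerParameter(16, 128),
}

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:loss",
    objective_type="Minimize",
    metric_definitions=[{"Name": "validation:loss",
                         "Regex": "valid loss: ([0-9\\.]+)"}],
    hyperparameter_ranges=hyperparameter_ranges,
    max_jobs=20,          # total training jobs to launch
    max_parallel_jobs=4,  # jobs run concurrently
)
tuner.fit({"training": train_input})
```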

AWS re:Invent 2019: Training and Inference chart.

The cost of training versus inference came up frequently, both during the talk and at the conference as a whole, with most of the cost coming from inference calculations (see the chart above). This idea has always been puzzling to me, because training typically requires an extensive amount of time and expensive compute resources, while inference calculations are usually simple and quick. But in many cases there are enormous numbers of inference calls, which ultimately results in a higher total cost. Thus, deploying your model on the right instance type is of utmost importance.
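Because inference dominates lifetime cost, one lever is serving from a smaller instance and, where the framework supports it, attaching an Elastic Inference accelerator instead of paying for a full GPU host. A hedged sketch with the SageMaker SDK (the instance and accelerator types here are just examples, and `estimator` again refers to a trained estimator like the one above):

```python
# Sketch: serving from a modest CPU instance with an Elastic Inference
# accelerator rather than a full GPU instance. Types shown are examples
# only; the right choice depends on your latency and throughput needs.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.large",        # small CPU host for serving
    accelerator_type="ml.eia2.medium",  # optional GPU-powered accelerator
)
```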

The session ended with Jamal providing a real-world example of how Aramex, a worldwide courier service, cut costs when developing and deploying machine learning models.

Many of the tricks and topics discussed in this talk could be applied to our machine learning tools at Panoramic (e.g., hyperparameter tuning). 

Reinforcement Learning Session:
Amazon SageMaker

The first half of the session covered the basics of reinforcement learning, as many of the attendees were new to the topic. Two examples were discussed briefly: 1) a finance example, where the agent buys or sells a set number of stocks, and 2) an HVAC example, where the agent turns heating and air conditioning on and off to regulate temperature.
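To make the HVAC example concrete, here is a toy tabular Q-learning sketch. The environment dynamics and constants are invented purely for illustration; the session didn’t share code.

```python
# Toy Q-learning sketch for the HVAC example: actions are heat / off / cool,
# states are coarse temperature bands. Dynamics and constants are invented.
import random

ACTIONS = ["heat", "off", "cool"]
N_STATES = 10            # temperature discretized into 10 bands
TARGET = 5               # band we want to hold
alpha, gamma, eps = 0.1, 0.9, 0.1

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def step(state, action):
    """Invented dynamics: heat moves up a band, cool moves down, off drifts."""
    delta = {"heat": 1, "cool": -1, "off": random.choice([-1, 0])}[action]
    next_state = min(max(state + delta, 0), N_STATES - 1)
    reward = -abs(next_state - TARGET)  # penalize distance from the target band
    return next_state, reward

state = 0
for _ in range(10000):
    # Epsilon-greedy action selection.
    a = random.randrange(len(ACTIONS)) if random.random() < eps \
        else max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    next_state, r = step(state, ACTIONS[a])
    # Standard Q-learning update.
    Q[state][a] += alpha * (r + gamma * max(Q[next_state]) - Q[state][a])
    state = next_state

print("Learned policy:",
      [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
       for s in range(N_STATES)])
```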

AWS re:Invent 2019: Artificial map of Martian terrain with barriers to navigate around.

In my opinion, the most exhilarating part of the talk came towards the end, when a researcher from the Jet Propulsion Laboratory discussed his work on the Mars rover. One of the biggest problems his team faces is guiding the rover around barriers, like cliffs and rocks, on Mars. They have an artificial map of Martian terrain on which the rover needs to travel from point A to point B, navigating through various barriers (see the image above). The simulated rover would start by running into the barriers, but after several training episodes it would eventually figure out that it needs to turn when it encounters one. The challenge asks participants either to define a reward function, similar to Amazon DeepRacer’s, that can be used to train the rover to navigate the terrain properly, or to create another algorithm that achieves the same goal. An audience member thought it would be beneficial to use supervised machine learning methods . . . but I disagree.
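For reference, DeepRacer-style reward functions are plain Python callbacks that receive a dict of state values and return a score. Below is a hedged sketch of what a barrier-avoidance reward might look like; the keys in `params` are hypothetical for the rover challenge, loosely modeled on DeepRacer’s interface.

```python
# Hedged sketch of a DeepRacer-style reward function for barrier avoidance.
# All keys in `params` are hypothetical; DeepRacer's real interface passes
# a similar dict of state values to a user-defined Python function.
def reward_function(params):
    distance_to_barrier = params["distance_to_nearest_barrier"]  # hypothetical key
    distance_to_goal = params["distance_to_goal"]                # hypothetical key
    crashed = params["crashed"]                                  # hypothetical key

    if crashed:
        return 1e-3  # near-zero reward: make collisions very unattractive

    reward = 1.0 / (1.0 + distance_to_goal)  # closer to point B is better
    if distance_to_barrier < 1.0:
        reward *= 0.5                         # discourage hugging obstacles
    return float(reward)
```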

Stay tuned for more reviews from AWS re:Invent 2019 in the days to come.

Next: AWS re:Invent – Day Four