AWS re:Invent 2019 – Highlights from Day Four
This is the fourth of five articles in the AWS re:Invent 2019 recap series. If you missed the previous posts, you can catch up here:
- AWS re:Invent 2019 – Highlights from Machine Learning Workshops on Day One
- AWS re:Invent 2019 – Highlights from AWS Product Releases on Day Two
- AWS re:Invent 2019 – Highlights from Machine Learning Workshops on Day Three
Panoramic’s Senior Data Scientist Nik Buenning recaps notable moments from the following machine learning workshops and sessions at AWS re:Invent 2019:
- Keynote Review: AWS CTO Dr. Werner Vogels
- Build Computer Vision Models with Amazon SageMaker
- Session: Reinforcement Learning – Using AI/ML to boost your software development on AWS
Keynote Review: AWS CTO Dr. Werner Vogels
AWS CTO Dr. Werner Vogels presented Thursday’s keynote. He began with a discussion of the AWS Nitro System, the underlying platform for the next generation of EC2 instances. According to Vogels, the key benefits of the Nitro System include enhanced security, reduced costs, and new instance types.
Clare Liguori then explained how AWS Fargate runs containers on Firecracker microVMs, whose design builds on the Nitro System. Multiple speakers came on stage to discuss other aspects of Amazon services, including Jeff Dowds, who described Vanguard’s adoption of serverless features and containers. Sebastien de Halleux showed how Saildrone uses AWS to map and explore ocean waters to better understand fish migration patterns and their effect on humans (e.g., fishing industries).
Dr. Werner Vogels returned to talk about Industry 4.0, stating that factories and manufacturing need to change significantly if we want to start generating real insights from them. He then invited Volkswagen’s Dr. Martin Hofmann on stage to discuss how a leading automotive company uses the power of cloud computing and AI to tackle its manufacturing challenges.
Build Computer Vision Models with Amazon SageMaker
This session discussed and demoed GluonCV, a relatively new computer vision toolkit that uses Apache MXNet to execute various computer vision tasks. The session was mostly a live demo and was a refreshing, stimulating presentation.
The toolkit makes it easy for anyone to build and deploy a machine learning computer vision model for image classification and image recognition. It also supports transfer learning if the user wants to add other classes to classify in images. GluonCV can additionally perform image style transfer (e.g., making an image look like a Picasso painting) and train certain types of GANs (generative adversarial networks, a popular class of generative models for creating new images).
Most of the talk covered code examples of how to use GluonCV. I was especially impressed by the GAN models. Having used GANs in the past, I have often thought about applying these models to generate creative content for marketing campaigns, specifically by sampling from creatives that achieve high engagement rates. Perhaps we could use GluonCV for our next research project.
Session: Reinforcement Learning – Using AI/ML to boost your software development on AWS
The name of this session was a little misleading: reinforcement learning was not discussed at all. The session started with a simple review of basic machine learning terms (e.g., supervised learning, regression, deep learning). The presenter spent most of the session reviewing two projects he had worked on for two different Amazon clients. No slides were presented; he instead displayed his notes and workload sketches.
The first example involved a client whose application failed every time it was spun up. The client’s IT team could not find the source of the problem, despite having four separate log files to work from. They brought the problem to Amazon, and the presenter’s team tried running all of the log files through a k-means model. He didn’t go too deep into how he vectorized the log files, but he did say the unsupervised machine learning approach was able to identify that each time the app launched, a file would get deleted, which was the source of the errors.
It is an interesting application of unsupervised learning, and I’m curious how exactly they untangled that problem and its solution from a clustering model. As previously mentioned, he didn’t go into much detail about the ML aspects.
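Since the presenter didn’t share his pipeline, here is one plausible way the vectorize-then-cluster step could look: treat each log line as a document, vectorize with TF-IDF, and run k-means. The log lines, cluster count, and choice of TF-IDF are all my assumptions for illustration.

```python
# Illustrative only: one plausible way to vectorize log lines and cluster
# them with k-means, not the presenter's actual approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy log lines standing in for the client's four log files.
log_lines = [
    "INFO app starting up",
    "INFO loading configuration",
    "WARN config cache file deleted on launch",
    "ERROR missing file: config.cache",
    "ERROR startup failed: file not found",
    "INFO shutting down",
]

# Vectorize each line as a TF-IDF bag of words.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(log_lines)

# Cluster the lines; anomalous "file deleted / file missing" lines tend to
# group away from routine INFO chatter, surfacing them for inspection.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for label, line in zip(labels, log_lines):
    print(label, line)
```

Inspecting which lines land in the smaller cluster is one way an analyst could have spotted the recurring file deletion at launch.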
The second example was also intriguing and involved a client who wanted to automate code reviews. The presenter hinted that the methodology was similar to the one behind Amazon CodeGuru, a service announced earlier in the conference.
In this example, they used a sentiment analysis model to extract the sentiment of the client’s historical code reviews, then mapped the sentiment values to a binary label of either ‘suggestion’ or ‘requirement’. They could then predict how new code changes would be reviewed in terms of sentiment, and whether the feedback would be classified as a suggestion or a requirement. It might be fun to try out CodeGuru or build our own auto-review models.
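The binarization step described above can be sketched very simply: take whatever score the sentiment model emits and threshold it into the two labels. The score range and threshold here are hypothetical assumptions, not values from the session.

```python
# Hypothetical sketch of the suggestion/requirement binarization. Assumes a
# sentiment score in [-1, 1], where more negative means a more critical
# review; the -0.3 threshold is an arbitrary illustrative choice.
def classify_review(sentiment_score: float, threshold: float = -0.3) -> str:
    """Treat strongly negative reviewer sentiment as a hard requirement,
    and everything else as a suggestion."""
    return "requirement" if sentiment_score <= threshold else "suggestion"

# Example scores a sentiment model might emit for three review comments.
print(classify_review(-0.8))  # requirement
print(classify_review(-0.1))  # suggestion
print(classify_review(0.5))   # suggestion
```

In practice the threshold would be tuned against how the client’s historical reviews were actually resolved, rather than fixed by hand.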
Stay tuned for more reviews from AWS re:Invent 2019 in the days to come.