Tag: alexa

Part 2: How to Make an Alexa Skill

This post is the second in my Alexa series. You can find my first post here: Part 1: Introduction to Amazon Alexa.  This is a blog post format of my original presentation on Google Slides.  This post and the Google Slides presentation are for educational purposes.

How to Make an Alexa Skill

First Sign Up

To make Alexa Skills, you need two accounts:

  1. Amazon developer account for the front end
  2. AWS account for the back end

Note: You can use your consumer-side Amazon account for both.

Once you’ve signed up, you are ready to begin!

Ways to Get Started

The easiest and fastest way to get started, even if you haven't written an app or any JavaScript before, is to check out Amazon Alexa's GitHub!

OK, so you're looking at their GitHub. Which repo do you choose?

I suggest starting with this one: https://github.com/alexa/skill-sample-nodejs-fact. It has a great README with a thorough walkthrough.

Before you start the walkthrough, let's talk a bit more about the code you'll be seeing. The following code comes from the Amazon repo above, which shows you how to create a random fact giver. In this repo, the random facts are about space, but you can update them to any topic you like.

So if you are doing this project, take a moment to think about what kind of facts you would like your skill to give to its users.
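For instance, if you chose ocean facts, the `data` array you'll see later in this walkthrough might look something like this sketch (the facts here are my own illustrative examples, not from the repo):

```javascript
// A hypothetical replacement for the repo's `data` array,
// swapping the space facts for ocean facts.
const data = [
  "The ocean covers about 71 percent of the Earth's surface.",
  "The Pacific is the largest and deepest of Earth's oceans.",
  'More than 80 percent of the ocean remains unexplored.'
];

// The skill picks one entry at random, just like the space-facts version.
const randomFact = data[Math.floor(Math.random() * data.length)];
console.log(randomFact);
```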

Alexa Project Structure


The speechAssets folder contains the front-end items.

The IntentSchema.json is a JSON object containing an array of the different intents a skill can have.

{ "intents": [
  { "intent": "GetNewFactIntent" },
  { "intent": "AMAZON.HelpIntent" },
  { "intent": "AMAZON.StopIntent" },
  { "intent": "AMAZON.CancelIntent" }
] }

The lines in SampleUtterances.txt map utterances to intents. The sample utterances below all map to the single intent GetNewFactIntent.

GetNewFactIntent a fact
GetNewFactIntent a space fact
GetNewFactIntent tell me a fact
GetNewFactIntent tell me a space fact
GetNewFactIntent give me a fact
GetNewFactIntent give me a space fact
GetNewFactIntent tell me trivia
GetNewFactIntent tell me a space trivia
GetNewFactIntent give me a space trivia
GetNewFactIntent give me some information
GetNewFactIntent give me some space information
GetNewFactIntent tell me something
GetNewFactIntent give me something

Now let's move to src/index.js. This is the back-end function.

It starts with declaring variables.

const SKILL_NAME = 'Space Facts';
const GET_FACT_MESSAGE = "Here's your fact: ";
const HELP_MESSAGE = 'You can say tell me a space fact, or, you can say exit... What can I help you with?';
const HELP_REPROMPT = 'What can I help you with?';
const STOP_MESSAGE = 'Goodbye!';

The SKILL_NAME is the skill’s name.

In this code, GET_FACT_MESSAGE is what begins each of Alexa’s statements when she gives a user a random space fact.

HELP_MESSAGE and HELP_REPROMPT are what Alexa will say when the user triggers the intent for help.

STOP_MESSAGE is what Alexa will say when a user gives the intent to stop.

const data = [
  'A year on Mercury is just 88 days long.',
  'Despite being farther from the Sun, Venus experiences higher temperatures than Mercury.',
  'Venus rotates counter-clockwise, possibly because of a collision in the past with an asteroid.',
  'On Mars, the Sun appears about half the size as it does on Earth.'
];

This is the data array that you can edit to cover your particular topic.

Then below that is the Alexa code.

exports.handler = function(event, context, callback) {
  const alexa = Alexa.handler(event, context);
  alexa.appId = APP_ID;
  alexa.registerHandlers(handlers);
  alexa.execute();
};

This is the entry point that makes the skill work.

In alexa.registerHandlers(handlers), the variable handlers is an object containing the intent handlers.

const handlers = {
  'LaunchRequest': function () {
    this.emit('GetNewFactIntent');
  },
  'GetNewFactIntent': function () {
    const factArr = data;
    const factIndex = Math.floor(Math.random() * factArr.length);
    const randomFact = factArr[factIndex];
    const speechOutput = GET_FACT_MESSAGE + randomFact;

    this.response.cardRenderer(SKILL_NAME, randomFact);
    this.response.speak(speechOutput);
    this.emit(':responseReady');
  },
  'AMAZON.HelpIntent': function () {
    const speechOutput = HELP_MESSAGE;
    const reprompt = HELP_REPROMPT;

    this.response.speak(speechOutput).listen(reprompt);
    this.emit(':responseReady');
  },
  'AMAZON.CancelIntent': function () {
    this.response.speak(STOP_MESSAGE);
    this.emit(':responseReady');
  },
  'AMAZON.StopIntent': function () {
    this.response.speak(STOP_MESSAGE);
    this.emit(':responseReady');
  },
};

The key LaunchRequest is what starts your skill.  It immediately emits GetNewFactIntent.

GetNewFactIntent does the calculation to grab a random fact. It renders a card on a screen (the cardRenderer method), and it also has Alexa speak the fact out loud.

AMAZON.HelpIntent delivers the HELP_MESSAGE and HELP_REPROMPT.

AMAZON.CancelIntent and AMAZON.StopIntent currently both deliver the STOP_MESSAGE.

If you are following Amazon's walkthrough, it will walk you through building your skill, uploading your function to AWS, and certifying your skill.

Alexa Skills Certification

It is easy to submit your skill for certification. (You will do so on the front end).

Remember to abide by Amazon’s rules.

After your skill is reviewed, you will receive an email of acceptance or rejection.


Alexa is still new, and Amazon is offering a lot of cool incentives to write Alexa Skills. Because Alexa Skills and the related Echo products are so new, everything changes often: the code, the project structure, and how you submit via the front-end and back-end sites. So be on the lookout!

Enjoy writing Amazon Alexa Skills! If you have any questions, please write them below! Thank you!


Part 1: Introduction to Amazon Alexa

This post is an introduction to Amazon Alexa. This is a blog post format of my original presentation on Google Slides.  This post and the Google Slides presentation are for educational purposes.

Let’s get started!

Introduction to Amazon Alexa

What is Alexa?

Alexa is software that a user can interact with by voice, through an Amazon Echo device. The Alexa software is hosted in the cloud (AWS) and is a VUI.

What is a VUI?

VUI stands for Voice User Interface, meaning that a user can interact with the software using their voice. It is similar to a GUI, or Graphical User Interface, in that both are front ends. However, a VUI is not in competition with a GUI, and it is not meant to replace a screen.  A VUI is simply another way in which a user can interact with software.

For Alexa, the software takes intents and utterances.  Intents are functions or actions that a user can invoke in an Alexa skill through an utterance. Since there are many different ways to express the same intent (or request), utterances are a set of likely spoken phrases for one intent. (In an upcoming blog post, there will be code to illustrate this.)

With a VUI, there are different ways of expressing emotion. SSML, or Speech Synthesis Markup Language, gives a developer control over how Alexa should sound. SSML can make Alexa whisper, pause, or recite a number as a set of digits. Another way of expressing emotion is to use speechcons. Speechcons are like emoticons: instead of a smiley-face emoticon, a developer can use a speechcon to make Alexa say, "Hurrah!"
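As a quick illustration, here is a small SSML snippet (a sketch of my own, not from any particular skill) that whispers, pauses, reads a number as digits, and ends with a speechcon interjection:

```xml
<speak>
    <amazon:effect name="whispered">I have a secret.</amazon:effect>
    <break time="1s"/>
    Your code is <say-as interpret-as="digits">1234</say-as>.
    <say-as interpret-as="interjection">hurrah!</say-as>
</speak>
```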

How Do You Use Alexa? (Utterance Syntax)

An utterance is what a user would say to interact with Alexa. The utterance maps to an intent, which invokes a function on an Alexa skill. What is an example of a spoken utterance?

“Alexa, start Space Facts.”

There are three parts to an utterance: 1. the wake word, 2. the phrase, and 3. the skill name. Here is a breakdown of an utterance:

  1. Wake word
    • Alexa
    • Amazon
    • Computer
  2. Phrase/verb
    • Start
    • Open
    • Begin
  3. Skill name/function invocation
    • Space Facts

The typical wake word is "Alexa", and it is highly recommended to keep it as "Alexa" unless someone in the user's household is named Alexa. In that case, the user can change the wake word to "Amazon" or "Computer", so the utterance would be "Amazon, start Space Facts".

The phrase is the verb or word that causes an action. For example, “Alexa, start Space Facts” should start an intent that will invoke the Space Facts skill. Or “Alexa, stop” will stop the current action. “Alexa, help” will give some information about the skill.

This phrase or verb is where the developer will concentrate when writing the different utterances for the same intent. Utterances using "start", "open", and "begin" will typically map to the same intent. A developer will want to cover the many ways a user might say a command.

Lastly, there is the skill name or function invocation. An utterance using a skill name could be “Alexa, stop Space Facts”. Depending on the skill, there can be different functions a user can invoke that the skill will listen for.
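Putting those three parts together, here is a minimal sketch (my own illustration; Alexa does this parsing for you in the cloud) of how a launch utterance breaks down:

```javascript
// Hypothetical breakdown of a launch utterance into its three parts:
// wake word, phrase/verb, and skill name.
function parseUtterance(utterance) {
  const match = utterance.match(/^(Alexa|Amazon|Computer),\s+(start|open|begin)\s+(.+)$/i);
  if (!match) return null;
  return { wakeWord: match[1], phrase: match[2], skillName: match[3] };
}

console.log(parseUtterance('Alexa, start Space Facts'));
// → { wakeWord: 'Alexa', phrase: 'start', skillName: 'Space Facts' }
```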

Alexa Skill Request Lifecycle

Once a user says an utterance, what happens?

Once a user says an utterance such as "Alexa, start Space Facts", the Amazon Echo device picks it up and interfaces with the Alexa skill. The skill sends the utterance to the cloud (AWS), where the fancy footwork happens. The cloud is the skill's back end, where the code/functionality lives. The code maps the utterance to an intent, which runs code that returns a corresponding response via JSON to the Alexa skill. The skill then outputs the response to the user via the Amazon Echo device. In this example, if the user said, "Alexa, start Space Facts", Alexa's response will probably be a space fact such as "A year on Mercury is just 88 days long".
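To make that concrete, here is a simplified sketch of the kind of JSON response the back end returns to the Alexa service (the SDK builds this for you; the real response can also carry session attributes and a reprompt):

```javascript
// A simplified sketch of an Alexa skill's JSON response:
// what to speak, what card to render, and whether to end the session.
const response = {
  version: '1.0',
  response: {
    outputSpeech: {
      type: 'PlainText',
      text: "Here's your fact: A year on Mercury is just 88 days long."
    },
    card: {
      type: 'Simple',
      title: 'Space Facts',
      content: 'A year on Mercury is just 88 days long.'
    },
    shouldEndSession: true
  }
};

console.log(JSON.stringify(response, null, 2));
```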

So that’s it on my introduction to Amazon Alexa.  To continue on how to make an Alexa Skill, click here for Part 2.

Alexa Skills Workshop

Yesterday I attended an Alexa Skills Workshop hosted by Coding Dojo! The goal was to create an Alexa Skill for the Amazon Echo family of devices.

We used one of Amazon's templates on GitHub to create a skill for Alexa.  The greatest advantage of using a template was that no one had to be an expert in JavaScript. Since the skill was already built, we could get through the whole process of creating one.

At the end of the workshop, I was able to submit my skill for certification.

Today I got the news that my skill was certified and being uploaded to the Alexa Skills Store! It’s my first publication! How exciting!

The second great outcome of this workshop was that I am excited to create more Alexa Skills! Let’s see what the world of voice user interface will become!

The third outcome of this workshop is that I can't wait to attend more Hackathons and workshops! This was incredibly motivational! (Though this will have to wait until after I'm done with my coding bootcamp.)

Thank you again Coding Dojo for hosting this Hackathon!