Bartosz Bąbol

Software engineering

YADT: Yet Another Docker Tutorial

These are my Docker notes. They might be considered a kind of tutorial, and the content may change over time. Let's start.

Prerequisites

I expect you to have Docker installed. Docker is available on all major platforms; I'm working on a Mac with Docker Community Edition installed. If you need to install Docker on your machine, follow the instructions on the main page here.

Introduction

“Docker is an open platform for developing, shipping, and running applications”, according to the official documentation. The main idea is to manage application infrastructure the same way you manage the application itself. You want an easy way of building an ISOLATED space (a container) where your application works, because you hate hearing the sentence “works on my machine!”. Virtual machines do exactly that, but they are considered a heavy and slow solution to the problem. Docker containers run on the host OS, which means they can share resources, so they start and work very quickly.

I will skip the theoretical part because the official docs are a great source of knowledge; I encourage you to read them here. Under that link you will (and should) find basic answers on topics such as:

  • (more) motivation for using docker
  • Docker Engine (architecture of docker)
  • Image
  • Container
  • Image Registry (e.g. Docker Hub)

Docker building blocks

Container

Intro

A container is an isolated space. By isolated space I mean its own file system, process tree, networking and user space. A container is a form of operating-system-level virtualization, which means it shares the host's Linux kernel.

The idea of containers in Docker is neither new nor unique. Other implementations of operating-system-level virtualization include:

  • LXC
  • Oracle Solaris Zones
  • FreeBSD Jails
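As a quick illustration of the process isolation described above (a sketch; it assumes Docker is installed and an ubuntu image is available, as we pull below):

```shell
# Inside the container, ps sees only the container's own process tree,
# not the host's processes: it typically prints a single entry, ps itself,
# running as PID 1.
docker run --rm ubuntu ps aux
```

On the host, the same `ps aux` would list hundreds of processes.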

Running first container

Containers are running images. Images may be published as repositories on Docker Hub, and you can ‘pull’ them similarly to cloning GitHub repositories. To use a published image you need the docker client. If you want to specify the version of an image, you can do it like this:

docker pull ubuntu:16.04

If you want to pull the latest version, you can skip the tag:

docker pull ubuntu

Let's invoke the second command.

After the download finishes you can list your local images:

docker images

One of the common problems that virtualization solves is: how do you keep two different versions of the same library or program? Let's download an older Ubuntu:

docker pull ubuntu:14.04

and list the images again to see the result:

Now you can run specific version:

docker run ubuntu:14.04

or alternatively use the image ID (you don't have to specify the whole ID, a few unique characters are enough):

docker run 132b7

or skip the tag/ID to run the latest:

docker run ubuntu

This is how we ‘instantiate’ a Docker image. Now we should see a running instance of the image: a container.

Let's list all containers:

And there is no running container. Why is that? Because the default command of the ubuntu image is bash, which, with no terminal attached, executes and terminates immediately. We can change this command:
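The listing step, sketched as commands (output will differ on your machine):

```shell
docker ps     # lists only running containers: nothing shows up here
docker ps -a  # -a also shows the exited ubuntu container
```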

We list all files by adding extra arguments to docker run. The command

ls -l

prints the files and then the container exits. To see that this is true, run the ps command (similar to the Unix ps, but it lists containers instead of processes):

docker ps -a

which will print all stopped containers.

As you can see, Docker assigns random, fancy names to containers. You can specify your own name with the --name option.

docker run --name my_ubuntu ubuntu

If you want to get into the container and look around, run it in foreground mode: -i keeps STDIN open and -t allocates a pseudo-terminal (daemon mode would be -d instead). /bin/bash is the primary process, so it runs in the foreground.

docker run -i -t ubuntu

You can see that the bash prompt has changed, and by executing the ps command inside the ubuntu container

ps

you can see that bash is primary process:

More about foreground mode here.

Debugging containers

If you want to debug a container, a few commands come in handy.
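A sketch of common debugging commands (my_ubuntu is the example container name used earlier):

```shell
docker logs my_ubuntu           # print the container's stdout/stderr
docker exec -it my_ubuntu bash  # open a shell inside a running container
docker inspect my_ubuntu        # low-level details (network, mounts, config)
```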

Image

Building your own image

An image is the blueprint of your working container, which means you can have many working containers built from a single image.

Let's imagine the following situation: you are writing an app in Go. That requires having Go installed on your machine, with all the dependencies and whatever else your application needs. You want to provide an image with the whole environment set up, to make sure your code is executed in a unified environment, with the same OS and dependencies, isolated from the machine it runs on.

Dockerfile

Docker provides command

docker build [OPTIONS] PATH | URL | -

which is used to build images. As you see in the usage above, it takes a PATH parameter: the path to a directory containing a file called Dockerfile. This file is the specification of an image, a set of commands Docker performs in order to build it. Let's look at the example mentioned earlier, which I prepared for this case. The repository is on my GitHub.

This application is a simple Go server written with a framework called Martini. The example is the hello world copied from Martini's documentation.

server.go
package main

import "github.com/go-martini/martini"

func main() {
  m := martini.Classic()
  m.Get("/", func() string {
    return "Hello world!"
  })
  m.Run()
}

The point is that there is a huge chance you don't have Go installed on your machine (like me, for example) because you don't use this language in your everyday work. This is one of the use cases for Docker. Let's look at the Dockerfile, our blueprint for creating the image, and examine it step by step.

First thing: by default the file has to be named exactly Dockerfile, with a capital D and no spaces or other characters.
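(Strictly speaking that is only the default: Docker also lets you point the build at a differently named file with the -f flag. A sketch, with a hypothetical file name:)

```shell
docker build -t golang -f Dockerfile.dev .
```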

FROM ubuntu:16.04

The first line is the mandatory FROM instruction, which specifies the base image for our new image. In my case I want to run my Go script on Ubuntu with the tag 16.04.

MAINTAINER Bartek <bbartek91@gmail.com>

The next line is MAINTAINER, which specifies who created the image. This line doesn't change the functionality of the image at all; it's for informational purposes (in newer Docker versions it is deprecated in favour of a LABEL).

ENV GOVERSION 1.8.3

The next instruction is ENV, which sets environment variables. They can also be used as variables inside the Dockerfile, so here you see how I've extracted the Go version into a variable.

RUN apt-get update && apt-get install --no-install-recommends -y \
    ca-certificates \
    curl \
    git-core
RUN curl -s https://storage.googleapis.com/golang/go${GOVERSION}.linux-amd64.tar.gz | tar -v -C /usr/local -xz

The next lines are RUN instructions. This is the way you pass commands to the image; the commands in this particular example are what is necessary to install Go on Ubuntu.

Going further:

ENV GOPATH /go
ENV GOROOT /usr/local/go
ENV PATH /usr/local/go/bin:/go/bin:/usr/local/bin:$PATH

This sets environment variables in our image. Go requires them in order to be usable from the bash shell.

Next I create directory for my app:

RUN mkdir go_server

Then I move my local application file into the image using the ADD instruction. The source can be a path relative to the build context, as in this example; the second parameter is a directory in the container, and it can also be relative to the container's current working directory:

ADD server.go /go_server

I want to have working directory to be /go_server. In order to achieve that:

WORKDIR /go_server

Having my environment fully set up, I can finally install the Martini framework:

RUN go get github.com/go-martini/martini

And specify the entrypoint of my image, i.e. which process will have PID 1 in the running instance of the image, the container:

ENTRYPOINT ["go", "run", "server.go"]

Having Dockerfile specified we can build the image:

docker build -t golang /Users/bartek/projects/docker_blog/

Run

docker images

to see the image ‘golang’. I hope the advantages of this kind of virtualization are obvious: I can publish this image in a Docker repository, and a user can pull and run it with:

docker run -it -p 3000:3000 golang

where

-p 3000:3000

-p maps port 3000 on my host to port 3000 in the container, where the application listens, and the user avoids preparing the whole environment. Of course this was a bit tedious, because there is already a prepared golang image here which I should have used in this case, but I wanted to build it from scratch just to play with different Dockerfile options.
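With the container running, you can verify the port mapping from another terminal (assuming port 3000 was free on the host):

```shell
curl http://localhost:3000
# the Martini hello world should answer with: Hello world!
```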

Cleaning docker

If you want to remove all stopped containers, unused networks, dangling images and the build cache (pass -a to also remove unreferenced images, and --volumes to include volumes):

docker system prune
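Before and after pruning you can check how much space Docker is using (docker system df is a standard CLI command):

```shell
docker system df        # disk usage of images, containers and volumes
docker system prune -a  # -a also removes unreferenced, not just dangling, images
```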

Fun With Scalameta

Introduction

My previous post was about the new inline macros in scalameta, which might suggest that scalameta is just 'the new macros', and that statement is not true at all. The new macros are, at their current stage, an experimental feature which might work, as we saw in the previous post, but something else deserves more attention right now: scalameta 1.0.0, the main dish. It came out in June this year and in my opinion it's a super interesting thing to learn.

What is Scalameta?

Scalameta is a framework for tokenizing and parsing code. I imagine this library as a tool which you can gently “inject” into the compilation process of your program. With this tool you can do many different things with your code before it is sent for compilation. So I've divided this post by compiler stages; in each stage you've got a different set of tools you might use.

That's my mental picture; let's go back to a more formal explanation. Scalameta provides the developer an API for tokenizing code and representing ASTs, plus a cool wrapper around it called quasiquotes, which you can use for constructing and deconstructing code. Sounds similar to macros? The new inline macros will use the same API as scalameta, so you can't go wrong with previewing it. At the end of this post I've included useful links which I encourage you to check out. Without further introduction, let's start with some easy examples.

TL;DR

My repo for this blog post is here on github.

First stage of compilation: Lexical Analysis- def tokenize

The first stage of compiling a program is lexical analysis. This stage is responsible for grabbing everything as-is from the input. Look at the example below:

Main.scala
import scala.meta._

object Main extends App{
  val someCode =
    """
      def testMethod = {
        println("printing");
      }
    """.tokenize

  val tokens = someCode match {
    case Tokenized.Success(t) => t
    case Tokenized.Error(_, _, details) => throw new Exception(details)
  }
}

OK, so after importing scalameta we've got a lot of implicits flying around our code and access to many cool features; one of them is the tokenize method on strings.

The tokenize method returns a Tokens object; check the source code here. It's a wrapper containing all the tokens found in a specific piece of code, and more than just that, as you can read in the source file. You can access individual tokens by invoking the tokens method on the Tokens object.

Main.scala
  println(tokens.tokens)

The possible token types the tokenizer can recognise are listed in the source code here. You can also see the structure of the tokens by invoking structure on Tokens.

Main.scala
println(tokens.structure)

The structure contains absolutely everything: spaces, new lines, etc. It can be useful for debugging.

If you want a human-readable view of your tokens, use the syntax method:

Main.scala
println(tokens.syntax)

The goal of lexical analysis is to divide a program into words (tokens), and this is exactly what the tokenize method does. At the tokenizing stage our program doesn't yet understand the meaning of the code; we can even pass completely invalid code here:

Main.scala
object Main extends App{
  val someCode =
    """
      defte st Method = {
        println("printing";
    """.tokenize

    ...

And it will still be successfully separated into words.

There is an error case I encountered while playing with this example: tokenizing code returns Tokenized.Error when a literal is missing its closing quote:

Main.scala
  val someCode =
    """
      defte st Method = {
        println("printing);
    """.tokenize

    ...

At first this seemed a little weird to me, because I thought the tokenizer doesn't check any rules. But in this particular example "printing" is treated as one token, so an unterminated "printing cannot become a token, and that is why we get the error.

Tokenizing is a very low-level operation which gives you a lot of information about how the code looks. The tokenized result contains all spaces, new lines, commas, separators, etc. At this level you can look for specific tokens, see how the code is indented, modify it, and so on. Let's look at an example:

Tokenization example 1

Let's say we just inherited a project written by somebody who felt Scala syntax in a different (worse) way. We want to replace occurrences of getOrElse(null) with orNull. Example:

 case class Scalameta() {
   def println() = sth.getOrElse(null)
   val x = Option(foo()).getOrElse(12)
   val y = {
     Option(bar()).getOrElse("foo") + Future(x).get(null)
   }
 }

to

 case class Scalameta() {
   def println() = sth.orNull
   val x = Option(foo()).getOrElse(12)
   val y = {
     Option(bar()).getOrElse("foo") + Future(x).get(null)
   }
 }

I encourage you to open the GitHub sources from the previous links and try to tackle this problem yourself. I don't want to spoil the fun of playing with Scalameta, but if you're interested in my quick solution you can look here.

Using this API you can think about numerous analogical examples like:

  • replace filter(…).headOption with find
  • replace find(…).isDefined with exists
  • replace “${saveRateSettingParam}” with “$saveRateSettingParam”

And whatever syntax rule you want.

There is a library called scalafmt which operates heavily at the token level; its creator also gave a cool workshop about scalameta, so I encourage you to check it out. We will not dig deeper into tokenization in this post; the workshop has more info and some cool examples too. Links at the end of the post.

Second stage of compilation: Parsing code- def parse[U]

After tokenizing the code, the compiler needs to understand it: this is the parsing stage. The compiler needs a simpler structure than tokens, without redundant syntax like comments, spaces, commas or new lines. How do you parse code in scalameta? Check the Api trait. There is a method def parse[U], and if you track it down you will end up in the Parse trait on GitHub.

So Parse is parameterized with a type T. What is T? The hint is in the Parse object: T will be something which scalameta can read and parse, and in the Parse object you can see a lot of implicits for the types that can be parsed automatically. Let's choose Type as T:

  implicit lazy val parseType: Parse[Type] = toParse(_.parseType())

The parse[T] method returns a Parsed (like Tokenized in the previous stage), which has two cases: Success and Error. Check the implementation here.

Let's try to parse something which looks like a Type:

Main.scala
   val code = "List[String]".parse[Type]

   printResult[Type](code)

   def printResult[T](code: Parsed[T]): Unit = {
     code match {
       case Parsed.Success(tree) => println("Code is valid!")
       case Parsed.Error(pos, msg, details)  =>
         println(s"Pos: $pos, msg: $msg. More details: $details")
     }
   }

And the code is parsed as a Success, which makes sense. Let's modify this line a bit.

Main.scala
   val code =
    """val l: List[String]= List()""".parse[Type]

   printResult[Type](code)

   ...

And suddenly our val code is now a Parsed.Error(…). If you have read my previous posts from January about scala macros, you've probably noticed that the AST was represented by the Tree type. In scalameta the AST is more strict (maybe a better term is typesafe): you've got specific types of AST nodes, as you've seen in trait Parsed. Check this object to see which parsers are available by default. We should expect val code to be of type Stat, not Type. Let's change it:

Main.scala
   val code = """val l: List[String]= List()""".parse[Stat]

   printResult[Stat](code)
   ...

In the previous stage we could tokenize anything we wanted: we were operating on the token level, so spaces, characters, etc. The parsing stage needs to know the meaning of our code, so we can't arbitrarily parse whatever we want. Types like Stat or Type will stay with us whenever we do something with scalameta; I think this is the biggest difference between the old API and the new one. Other examples:

Main.scala
     val code = """val a: List[String]= List()""".parse[Stat]
     val caseExpr = """case true => println("its true!")""".parse[Case]
     val term = """x + y""".parse[Term]
     val arg = """a: List[String]""".parse[Term.Arg]

     printResult[Stat](code)
     printResult[Case](caseExpr)
     printResult[Term](term)
     printResult[Term.Arg](arg)

OK, we have a Parsed.Success. What next? A Tree!

So parsing code eventually gives us one of the AST types like Stat or Type. After successfully parsing code, scalameta gives you a full API for building ASTs. Moreover it provides a cool wrapper around that API called quasiquotes, which drastically simplifies creating and deconstructing ASTs.

Open the quasiquotes docs; they will be very helpful. Keep them open the whole time you do something with scalameta.

Let's modify our example a bit:

Main.scala
import scala.meta._

object Main extends App{
  ...
  val code =
    """case class Car[CarCompany](brand: CarCompany, color: Color, name: String){
         val owner: String = "John"
         def playRadio() = {
           "playing radio"
         }
         val capacity, speed = (5, 200)
         val oneVal = 45
      }
    """.parse[Stat]

  val q"..$mods class $tname[..$tparams] ..$mods2 (...$paramss) extends $template" = parseCode(code)

  template match {
    case template"{ ..$stats } with ..$ctorcalls { $param => ..$stats2 }" => stats2.map{
      case q"..$mods def $name[..$tparams](...$paramss): $tpe = $expr" => println(s"methodName: $name")
      case q"..$mods val ..$patsnel: $tpeopt = $expr" => println(s"value $patsnel equals to $expr")

    }
  }

  def parseCode[T](code: Parsed[T]): T = {
    code match {
      case Parsed.Success(tree) => tree
      case Parsed.Error(pos, msg, details)  => throw new Exception(msg)
    }
  }
}

Now def parseCode[T] returns the tree, so we can use the quasiquotes API to play with the parsed code. It's super easy to construct and deconstruct code using this API.

If you haven't seen quasiquote syntax before it might look a little weird, especially the .. and ... signs. I will copy (and modify a bit) their explanation from my previous post about macro annotations:

$name[..$tparams](...$paramss)

OK, so we are extracting the method name, but what do those “..” and “...” signs mean?

Let's start with ..$: this pattern expects a List[meta.Type.Param], which is nice because our annotated method could take many type parameters.

And what is ...$? This pattern expects a List[List[meta.Term.Param]]. This is because our method can take multiple parameter lists, so it could look like this one:

private def foo[A, B, C](a:A ,b:B)(c: C): A = {
//body
}

Let's look closer at these lines:

Main.scala
  val q"..$mods class $tname[..$tparams] ..$mods2 (...$paramss) extends $template" = parseCode(code)

  template match {
    case template"{ ..$stats } with ..$ctorcalls { $param => ..$stats2 }" => stats2.map{
      case q"..$mods def $name[..$tparams](...$paramss): $tpe = $expr" => println(s"methodName: $name")
      case q"..$mods val ..$patsnel: $tpeopt = $expr" => println(s"val names: $patsnel")
    }
  }
}

In the lines above we deconstruct code. You can use pattern matching when you expect a specific shape of code; I've copied those patterns from the docs. Deconstruction reads like human-readable code, and moreover some parsing errors are caught at compile time. I encourage you to preview the docs and try to construct and deconstruct different code lines. Quasiquotes hide most of the complexity of building and deconstructing code. But what if you want to dig deeper and understand what is going on under the hood of this cool API?

We need to go deeper

Run this code:

Main.scala
  val constructedTree = q"""def foo = println("quasiquotes")"""

  println(constructedTree.show[Structure])

Printed result looks like this:

console
Defn.Def(Nil, Term.Name("foo"), Nil, Nil, None, Term.Apply(Term.Name("println"), Seq(Lit("quasiquotes"))))

The printed result is the equivalent of the code built by the quasiquote; it gives you deeper insight into what's going on in your metaprogram. In some cases you have to use the constructors of scalameta types like Defn or Term directly to build the desired piece of code; we will see examples later. I hope you see what quasiquotes give you, and how they hide complexity behind human-readable syntax.

Example no. 1- Constants

OK, so we know roughly what scalameta is and we've seen examples of API usage; now let's try to use it in some examples. This is the case:

We are working on a project and we want to enforce some standards, for example:

  • Constant strings in our project always live in object Constants
  • If the value of some constant is assigned to two different vals, we want to raise a warning or throw an exception

This is our Constants object, and it clearly doesn't follow our rules: “ruby” is assigned to two different vals:

Constants.scala
object Constants {
  val java = "java"
  val scala = "scala"
  val ruby1 = "ruby"
  val ruby2 = "ruby"
}

Let's check a possible solution:

ConstantsValidator.scala
import scala.meta._

object ConstantsValidator {
  case class Val(valName: scala.meta.Pat, valValue: String)

  def validate(source: Source) = source match {
    case source"..$stats" => stats.collect(_ match {
      case q"..$mods object ${Term.Name(name)} extends $template" => name match{
        case "Constants" => template match {
          case template"{ ..$stats2 } with ..$ctorcalls { $param => ..$stats3 }" =>{
            val vals: List[Val] = stats3.foldLeft(List[Val]()) {
              (acc, elem) => elem match {
                case q"..$mods2 val ..$patsnel: $tpeopt = $expr" => acc :+ Val(patsnel.head, expr.toString)
                case _ => acc
              }
            }
            vals.groupBy(_.valValue).foreach{ case
              (valueKey, listOfVals) => if (listOfVals.length > 1 ) throw new Exception(s"$valueKey is assigned more than once to different vals: ${listOfVals.map(_.valName)}")
            }
          }
        }
        case _ =>
      }
    })
  }
}

Invoke it in Main.scala:

Main.scala
ConstantsValidator.validate(new java.io.File("src/main/scala/Constants.scala").parse[Source].get)

and run the program. You should get an exception with an error message. To understand what's going on in the implementation, go to the quasiquotes docs and check how to deconstruct a Source. I then select only objects for further parsing, and specifically the object whose name is Constants. Then I do some logic with groupBy to find repetitions. I hope it's straightforward. Look at this line:

ConstantsValidator.scala
case q"..$mods object ${Term.Name(name)} extends $template" => name match{...}

If you look at the quasiquotes docs you will find different syntax:

case q"..$mods object $name extends $template" => name match{...}

But name in the line above is actually shorthand for Term.Name(“someName”), and I'm interested in the string “someName” itself, so I've changed the expression to fit my needs. This is one example where constructing code from bare Tree types is useful. Change the value “ruby” to something else to make the project compile again.

Example no. 2- Name of the object

Let's say we want to set a naming convention in our project: all objects should start with an uppercase letter. Our metaprogram should check this condition, and if it finds an object whose name starts with a lowercase letter, it should replace the name with a proper one.

ConstantsValidator.scala
  def validateName(source: Source) ={
    val fixedFile: Source = source match {
      case source"..$stats" => source"..${buildNewStatements(stats)}"
    }

    val fw = new FileWriter("src/main/scala/Constants.scala")
    fw.write(fixedFile.syntax)
    fw.close

  }

  private def buildNewStatements(stats: scala.collection.immutable.Seq[Stat]): List[Stat] = {
    stats.foldLeft(List[Stat]())((acc, elem) => elem match {
      case q"..$mods object ${Term.Name(name)} extends $template" =>
        val isFirstLetterOfObjectLowercase = Character.isLowerCase(name.head)
        if(isFirstLetterOfObjectLowercase){
          val newName = name.head.toString.toUpperCase + name.tail
          val objectWithFixedName = q"..$mods object ${Term.Name(newName)} extends $template"
          acc :+ objectWithFixedName
        }else {
          acc :+ q"..$mods object ${Term.Name(name)} extends $template"
        }
      case whatever => acc :+ whatever
    })
  }

The idea is the same as in the previous example: we deconstruct the code piece by piece, modify an element and construct new, modified code. In this example we also save the modified code to a file, so if you run the code (and your Constants object starts with a lowercase letter) you will see the code being replaced. What is worth noticing: the code is saved with the same structure in which we wrote it. That's one of the features of scalameta. Invoke this method to check it out:

Main.scala
ConstantsValidator.validateName(new java.io.File("src/main/scala/Constants.scala").parse[Source].get)

Example no. 3- Code metrics

The last example is building a code review tool. Let's say we want some basic statistics about the Scala sources in a project, for example the number of classes, objects, etc.

CodeMetrics.scala
object CodeMetrics {
  val allScalaFiles = recursiveListFiles(file("src/")).map(_.parse[Source]).collect{
    case Parsed.Success(tree) => tree}.toList

  val counts = allScalaFiles.foldLeft(Counts.initial)((acc, file) => {
    file match {
      case source"..$whateverItIsInFile" => whateverItIsInFile.foldLeft(acc)((accInFile: Counts, elem) => elem match {
        case q"..$mods object $name extends $template" =>
          accInFile.incObjectNo
        case q"..$mods class $tname[..$tparams] (...$paramss) extends $template" =>
          accInFile.incClassNo
        case q"..$mods trait $tname[..$tparams] extends $template" =>
          accInFile.incTraitNo
        case q"package object $name extends $template" =>
          accInFile.incPackageObjNo
        case _ => accInFile
      })
    }
  })
  ...
}

Here counts is the accumulator object which holds the data important to us; in our case, the number of classes, objects, etc.

Counts.scala
package model

case class Counts(classNo: Int, objectNo: Int, traitNo: Int, packageObjNo: Int) {
  def incClassNo      = this.copy(classNo      = this.classNo + 1)
  def incObjectNo     = this.copy(objectNo     = this.objectNo + 1)
  def incTraitNo      = this.copy(traitNo      = this.traitNo + 1)
  def incPackageObjNo = this.copy(packageObjNo = this.packageObjNo + 1)
}

object Counts {
  val initial = Counts(0, 0, 0 ,0)
}

Invoke it to see the results. I've hardcoded the path to be /src.

Main.scala
println(CodeMetrics.counts)

You can see the full example here.

After getting a little familiar with the quasiquotes API, I hope this code is straightforward for you.

Example no. 4- Code review tool

I've extended the example from the previous paragraph a little bit. I wanted to build some nice UI with statistics about objects. For example:

  • I want to see dependencies between types in my project
  • I want to see the number of statements in a specific type of object
  • I want to see the type of each object

For the sake of simplicity I'm interested in 3 Scala object types: class, object, trait.

You can find my simple solution in this repo:

github

Summary

The presented examples are small and easy to implement; I wanted to give you some ideas of how, in my opinion, scalameta might be used. But you can go further with these ideas. In the Ruby on Rails community there is a popular principle called ‘convention over configuration’: if you follow some conventions, everything works out of the box. Examples:

  • controllers in mvc are in folder ‘controllers’
  • name of controller has to follow some rules (it’s related to route name)
  • the data layer has to be named according to the schema in the db

etc.

Imagine implementing these conventions in some Scala framework. The biggest advantage over Ruby is that those rules could be checked at compile time. It opens brand new possibilities for the Scala ecosystem.

Last but not least, I encourage you to preview the links below. Thanks for reading; I hope you've found this post useful. Don't hesitate to text me if you've got some comments ;)

Useful links:

New Scalameta Inline Macros- Examples

Metaprogramming in Scala evolves at the speed of light. A great example is the Scaladays 2016 conference: one day we heard that macros would be removed from the language, and the next day we heard about their bright future.

Recently I've been observing the scalameta project and reading its gitter. After previewing some papers (links at the end of the post) about the new inline macros, and the Eugene Burmako presentation mentioned above, I decided to give them a try and port my simple examples from previous posts to the new inline macros. I wanted to check if it is possible to run them and… it is :) If you preview the commits on the master branch you will see what I meant in the first sentence of this post about metaprogramming in Scala evolving at the speed of light: on Thursday I had a version with some bugs, on Saturday a snapshot release removed some of them, and on Tuesday the release of scalameta 1.1.0 and macro paradise 3.0.0-M5 solved the bugs I had in my examples.

I encourage you to preview this repo. Also don't hesitate to give me a reply if you see something wrong in it.

Link to repository: github

Link to my previous post about macro annotations (the idea is mostly the same, the API has changed): Blog

Other (probably more) useful links:

Sip-nn inline/meta

scalameta overview

Scaladays 2016 Berlin Keynote- Martin Odersky

Scaladays 2016 Metaprogramming 2.0- Eugene Burmako

Scala Macros Part III- Intellij Idea Support

I. Introduction.

This is a continuation of the post Scala macros part II: macro annotations, where we implemented the @TalkingAnimalSpell macro annotation. As a reminder: this macro annotation adds a new method to the annotated class before compilation, and this method was invisible to IntelliJ IDEA, because its coding assistance is based on static code analysis and it is not aware of AST changes. In this post we will try to create a plugin to solve this problem. You will need IntelliJ IDEA 15. The code, as always, is on my GitHub:

Intellij plugin repo

Part 2 repo

II. Setup

In October 2015 the JetBrains team published this post. Following that blog, let's clone the example repo for building IntelliJ IDEA macro plugins.

git clone git@github.com:JetBrains/sbt-idea-example.git

And open Injector.scala:

Injector.scala
package org.jetbrains.example.injector

import org.jetbrains.plugins.scala.lang.psi.api.statements.ScValue
import org.jetbrains.plugins.scala.lang.psi.api.toplevel.typedef.ScTypeDefinition
import org.jetbrains.plugins.scala.lang.psi.impl.toplevel.typedef.SyntheticMembersInjector
import org.jetbrains.plugins.scala.lang.psi.types.result.TypingContext

/**
  * @author Alefas
  * @since  14/10/15
  */
class Injector extends SyntheticMembersInjector {
  override def injectFunctions(source: ScTypeDefinition): Seq[String] = {
    source.members.flatMap {
      case v: ScValue if v.hasAnnotation("example.JavaGetter").isDefined =>
        v.declaredElements.map { td =>
          s"def get${td.name.capitalize} : ${td.getType(TypingContext.empty).getOrAny.canonicalText} = ???"
        }
      case _ => Seq.empty
    }
  }
}

Building a plugin that supports your macro annotations boils down to creating a class that extends SyntheticMembersInjector and implementing one of its methods:

  1. If you want to add methods to any class, object or trait, implement this method:

    
        def injectFunctions(source: ScTypeDefinition): Seq[String]
    
    

    We will implement this method later for our plugin; an example implementation is also included in the sbt-plugin-example project we just cloned.

  2. If you would like to add a custom inner class or object to any class, object or trait, implement this method:

    
        def injectInners(source: ScTypeDefinition): Seq[String]
    
    

    The return type of these two methods is Seq[String]: it is just a collection of code snippets that will appear in the code completion menu when you operate on the corresponding object.

  3. SyntheticMembersInjector has also third method:

    
        def needsCompanionObject(source: ScTypeDefinition): Boolean = false
    
    

    And according to sources: “Use this method to mark class or trait, that it requires companion object.”

Actually, using this API to build macro support reminds me of writing macros themselves. The argument of injectFunctions, for example, is the somewhat unusual ScTypeDefinition, which sounds a bit like one of the c.Tree subclasses. Then you’ve got ScValue, and we can check whether it has certain annotations; so we are operating not on Tree types, of course, but on types provided by IntelliJ, which are essentially code tokens. Let’s see how to use this API for our purposes.

III. Plugin for @TalkingAnimalSpell

This is our implementation of Injector:

Injector.scala
package org.jetbrains.example.injector

import org.jetbrains.plugins.scala.lang.psi.api.toplevel.typedef.{ScClass, ScTypeDefinition}
import org.jetbrains.plugins.scala.lang.psi.impl.toplevel.typedef.SyntheticMembersInjector

class Injector extends SyntheticMembersInjector {
  override def injectFunctions(source: ScTypeDefinition): Seq[String] = {
    source match {
      case c: ScClass if c.hasAnnotation("TalkingAnimalSpell").isDefined =>
        Seq(s"def sayHello: Unit = ???")
      case _ => Seq.empty
    }
  }
}

I hope this code is pretty straightforward: if source is a class and has an annotation called “TalkingAnimalSpell”, return a Seq with the text corresponding to the sayHello method signature. And that’s all :)

Now inside your plugin directory run:

sbt package-plugin

This command will create a .jar with your plugin. Go to IDEA settings, Plugins section, click “Install plugin from disk”, choose this .jar, restart IDEA, open the project from part II, and you should see this effect:

images

IV. Fun with the IntelliJ API

I encourage you to play with this API a little: check what methods are available and what they do. For example, we can slightly change the behavior of this plugin and provide code completion only when the annotated class extends the proper Animal trait. A possible solution would be:

Injector.scala
    //...
      case c: ScClass if c.hasAnnotation("TalkingAnimalSpell").isDefined && c.superTypes.map(_.canonicalText).contains("Animal") =>
        Seq(s"def sayHello: Unit = ???")
      case _ => Seq.empty
    }
  }
}

V. Summary

In this blog series I tried to give a gentle introduction to Scala macros. At Javeo I used them to build a library that removed a huge amount of boilerplate code. You will find the code of Gridzzly (that’s the name of our beast) on Javeo’s GitHub; I will update the repo in a few days. The last missing piece was IntelliJ IDEA support for macros, but now nothing can stop you from writing your own library using Scala macros ;) By the way, Gridzzly is also a macro annotation, so you will probably want a plugin for it ;) It’s here.

Thank you for reading this, feel free to comment and… have a great day!

Scala Macros: Part II- Macro Annotations

I. Introduction. What are macro annotations?

Macro annotations let you annotate any definition with something that Scala recognizes as a macro, which gives you the ability to modify that definition arbitrarily. Personally speaking, it is my favorite type of macro, with many cool use cases.

Code is on my GitHub, branch part-2. Let’s start!

II. Setup

The setup is the same as in part 1.

III. Example no.2: Benchmark

In part 1 we created a pretty much useless macro, just to become familiar with the new quasiquotes API and the macro project structure. Now let’s try to create something more useful. Imagine the following problem: we want to benchmark methods, i.e. check how much time a method takes to execute. This is our method, as simple as it could be. The type parameter is fake, just to make the method look fancier.

object Test{
    def testMethod[String]: Double = {
        val x = 2.0 + 2.0
        Math.pow(x, x)
    }
}

Now, how do we measure the running time of this method’s body? This is one possibility:

object Test{
    def testMethod[String]: Double = {
        val start = System.nanoTime()
        val result = {
            val x = 2.0 + 2.0
            Math.pow(x, x)
        }
        val end = System.nanoTime()
        println("testMethod elapsed time: " + (end - start) + "ns")
        result
    }
}

So we wrap the body of the function with time snapshots, assign the result of testMethod to val result, println the time difference and return result. The problem is that we had to touch testMethod and modify it. A better solution would be not to touch the code of testMethod at all. Moreover, this is boilerplate code, not interesting to the developer at all.
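For intuition, the code we want the macro to generate can be written by hand as an ordinary higher-order function. This is just a plain-Scala sketch, not part of the repo, and the names are made up:

```scala
object BenchmarkDemo {
  // What the generated code does, spelled out by hand: snapshot the clock,
  // run the body, snapshot again, print the difference, return the result.
  def timed[A](name: String)(body: => A): A = {
    val start = System.nanoTime()
    val result = body
    val end = System.nanoTime()
    println(name + " elapsed time: " + (end - start) + "ns")
    result
  }

  def main(args: Array[String]): Unit = {
    val r = timed("testMethod") {
      val x = 2.0 + 2.0
      math.pow(x, x)
    }
    println(r) // prints 256.0
  }
}
```

The macro annotation will let us get this behaviour without threading every call through a wrapper like timed.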

In Test.scala you will find one possible solution. It could, for example, look like this:

Test.scala
object Test{
  @Benchmark
  def testMethod[String]: Double = {
    val x = 2.0 + 2.0
    Math.pow(x, x)
  }

  @Benchmark
  def methodWithArguments(a: Double, b: Double) = {
    val c = Math.pow(a, b)
    c > a+b
  }
}

And we want the result of the above code to be the same as before. We can easily achieve this with macro annotations.

Look at Main.scala; there’s a usage of our methods:

Main.scala
object Main extends App{
  Test.testMethod
  //...
}

Now check the implementation of @Benchmark:

Benchmark.scala
import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

class Benchmark extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro Benchmark.impl
}

object Benchmark {
  def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._

    val result = {
      annottees.map(_.tree).toList match {
        case q"$mods def $methodName[..$tpes](...$args): $returnType = { ..$body }" :: Nil => {
          q"""$mods def $methodName[..$tpes](...$args): $returnType =  {
            val start = System.nanoTime()
            val result = {..$body}
            val end = System.nanoTime()
            println(${methodName.toString} + " elapsed time: " + (end - start) + "ns")
            result
          }"""
        }
        case _ => c.abort(c.enclosingPosition, "Annotation @Benchmark can be used only with methods")
      }
    }
    c.Expr[Any](result)
  }
}

There are a few differences between def macros and macro annotations when looking at their implementations.

  1. First of all, the Benchmark class has to extend StaticAnnotation.

  2. Second, we need to implement the macroTransform method, which takes annottees: c.Expr[Any]* as an argument. You might think of Expr as a wrapper around an AST.

Let’s focus on implementation:

Benchmark.scala
...
    val result = {
      annottees.map(_.tree).toList match {
        case q"$mods def $methodName[..$tpes](...$args): $returnType = { ..$body }" :: Nil => {
          q"""$mods def $methodName[..$tpes](...$args): $returnType =  {
            val start = System.nanoTime()
            val result = {..$body}
            val end = System.nanoTime()
            println(${methodName.toString} + " elapsed time: " + (end - start) + "ns")
            result
          }"""
        }
        case _ => c.abort(c.enclosingPosition, "Annotation @Benchmark can be used only with methods")
      }
    }
    c.Expr[Any](result)
  }
}

What is going on here?

  1. annottees is a Seq of annotated definitions. We want them in AST form, which is why we map over the Seq and return a new Seq of AST nodes.

  2. The next outstanding feature of quasiquotes is their extractor patterns, which is why we can pattern match on a Tree node. In the docs you will find a syntax summary showing which pattern corresponds to which Scala construct. In our case we annotate methods, so we used the syntax for extracting method tokens.

Look at this weird syntax:

Benchmark.scala
$methodName[..$tpes](...$args)

OK, so we are extracting methodName, but what do those “..” and “...” signs mean?

Let’s start with ..$: this pattern expects a List[universe.Tree]. This is nice because our annotated method could take many type parameters.

And what is ...$? This pattern expects a List[List[universe.Tree]]. This is because our method can take multiple parameter lists, so it could look like this one:

private def foo[A, B, C](a:A ,b:B)(c: C): A = {
//body
}

In conclusion, we match each annottee against this method pattern, extract from the annottee (a method, in our case) all potentially useful elements like the method name or parameter lists, and return a new collection of modified ASTs. In this example we return the same method signature with a modified body. Inside the quasiquote we do exactly what we intended: we wrap the code with timestamps and println the time difference between end and start.
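These same quasiquote patterns also work as runtime extractors, so you can experiment with them outside a macro. A sketch, assuming the scala-reflect jar is on the classpath; the QuasiquotePatterns object and its shape helper are made-up names:

```scala
import scala.reflect.runtime.universe._

object QuasiquotePatterns {
  // Deconstruct a method definition with the same pattern the macro uses.
  // ..$tpes binds a List[Tree] (type parameters); ...$args binds a
  // List[List[Tree]] (one inner list per parameter list).
  def shape(tree: Tree): (String, Int, Int) = tree match {
    case q"$mods def $name[..$tpes](...$args): $ret = $body" =>
      (name.toString, tpes.length, args.length)
  }

  def demo: (String, Int, Int) =
    shape(q"private def foo[A, B, C](a: A, b: B)(c: C): A = a")

  def main(args: Array[String]): Unit =
    println(demo) // prints (foo,3,2)
}
```

Three type parameters give tpes.length == 3, and the two parameter lists give args.length == 2.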

If we applied @Benchmark to some other definition, like a class or a val, we would get a match error; to prevent that, I’ve added a default case:

    case _ => c.abort(c.enclosingPosition, "Annotation @Benchmark can be used only with methods")

IV. Example no.3: Talking Animal

Macros can be treated as some kind of magic. The previous examples have shown that they can modify the AST and slightly change the behaviour of your code. But they can do much more than that. Let’s look at Main.scala:

Main.scala
object Main extends App{
  //...

  Dog("Szarik").sayHello
}

Something is wrong here: I personally use IntelliJ IDEA and it doesn’t see the sayHello method on the Dog object. Yet this code compiles properly. To solve this mystery, open Animal.scala:

Animal.scala
trait Animal{
  val name: String
}

@TalkingAnimalSpell
case class Dog(name: String) extends Animal

We have a simple case class Dog which extends the Animal trait. It doesn’t have a sayHello method implemented, but there is a @TalkingAnimalSpell annotation. So let’s look at its implementation:

TalkingAnimalSpell.scala
import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context

class TalkingAnimalSpell extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro TalkingAnimalSpell.impl
}

object TalkingAnimalSpell {
  def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._

    val result = {
      annottees.map(_.tree).toList match {
        case q"$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends Animal with ..$parents { $self => ..$stats }" :: Nil => {
          val animalType = tpname.toString()
          q"""$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends Animal with ..$parents{
            def sayHello: Unit = {
              println("Hello I'm " + $animalType + " and my name is " + name)
            }
          }"""
        }
        case _ => c.abort(c.enclosingPosition, "Annotation @TalkingAnimal can be used only with case classes which extends Animal trait")
      }
    }
    c.Expr[Any](result)
  }
}

The difference between this and the previous benchmark example is in the pattern matching: the case condition is different. In this TalkingAnimalSpell macro we want to annotate classes, not methods as in the benchmark. In the quasiquotes syntax summary you will find the pattern for classes. And because I want this annotation to work only with classes that extend the Animal trait, I specified that explicitly in the pattern case.

What this macro does is return the same class, but with a new sayHello method added, which printlns the name argument. While writing this blog post I noticed a bug. Does it return exactly the same class? Modify the Dog class a little:

Animal.scala
@TalkingAnimalSpell
case class Dog(name: String) extends Animal {
    def apport: Unit = {
        println("Apporting...")
    }
}

And invoke it in Main.scala:

Main.scala
object Main extends App{
  //...

  Dog("Szarik").apport
}

And you get a compilation error :) Everything looks fine, the method is implemented, IDEA can see it, but you’ve got a compilation error. What is wrong? Clearly @TalkingAnimalSpell really is some kind of magic. Look at its implementation again:

We are returning a class with the same signature, but we are dropping the existing body of the annotated class. Let’s add the missing body:

TalkingAnimalSpell.scala
    //...
    val result = {
      annottees.map(_.tree).toList match {
        case q"$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends Animal with ..$parents { $self => ..$stats }" :: Nil => {
          val animalType = tpname.toString()
          q"""$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends Animal with ..$parents{
            $self => ..$stats
            def sayHello: Unit = {
              println("Hello I'm " + $animalType + " and my name is " + name)
            }
          }"""
        }
        case _ => c.abort(c.enclosingPosition, "Annotation @TalkingAnimal can be used only with case classes which extends Animal trait")
      }
    }
    c.Expr[Any](result)
  }
}

Run the code again; everything should compile fine. So, as you can see, you can make arbitrary changes to the annotated code. With great power comes great responsibility. I encourage you to play with this example a little.
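You can also check, outside the macro, what the class pattern actually captures: the ..$stats binding holds exactly the body that the buggy version dropped. A sketch assuming the scala-reflect jar is on the classpath; ClassBodyDemo is a made-up name, and the pattern below is the canonical full-form class pattern from the quasiquotes docs:

```scala
import scala.reflect.runtime.universe._

object ClassBodyDemo {
  // Deconstruct a class and count the statements ($stats) in its body,
  // i.e. the user-written members that the buggy macro was losing.
  def bodySize(tree: Tree): Int = tree match {
    case q"$mods class $tpname[..$tparams] $ctorMods(...$paramss) extends { ..$earlydefns } with ..$parents { $self => ..$stats }" =>
      stats.length
  }

  def demo: Int =
    bodySize(q"""class Dog(name: String) { def apport: Unit = println("Apporting...") }""")

  def main(args: Array[String]): Unit =
    println(demo)
}
```

Splicing ..$stats back into the generated class, as in the fixed version above, is what preserves members like apport.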

V. Homework

Some good homework exercises:

1) Imagine that you are a funny software developer who likes to make jokes. Change the implementation and return type of def apport.

2) Change the implementation of sayHello to println each of the animal’s attributes. For example, given this class with the @TalkingAnimalSpell annotation:

Animal.scala
@TalkingAnimalSpell
case class Cat(name: String, favoriteFood: String, race: String, color: String) extends Animal

And invoking sayHello on this object:

Main.scala
object Main extends App{
  //...
  Cat("Tom", "Whiskas", "persian", "black").sayHello
}

should result in this println:

Hello I'm Cat and my name is Tom my favorite food is whiskas, my race is persian, my color is black.

3) Change the @Benchmark annotation so that it can annotate a Scala object and add the benchmark code to each non-private method inside that object. So this code, after compilation:

object Test{
  @Benchmark
  def testMethod[String]: Double = {
    val x = 2.0 + 2.0
    Math.pow(x, x)
  }

  @Benchmark
  def methodWithArguments(a: Double, b: Double) = {
    val c = Math.pow(a, b)
    c > a+b
  }
}

should work the same as this code:

@Benchmark
object Test{
  def testMethod[String]: Double = {
    val x = 2.0 + 2.0
    Math.pow(x, x)
  }

  def methodWithArguments(a: Double, b: Double) = {
    val c = Math.pow(a, b)
    c > a+b
  }
}

VI. Summary of part II

In this post, we explored another type of macro: macro annotations. But some question marks remain. One of them is highlighted by the last example with @TalkingAnimalSpell and exercise no. 1 of your homework: IntelliJ IDEA coding assistance is based on static code analysis and is not aware of AST changes, so macros are not supported.

You probably feel disappointed now. What if your macro generates methods that are invisible to your IDE? Your code will be highlighted in red, you won’t know the return types of generated methods, and moreover somebody might use macros to change existing code, etc. Without documentation such a macro could be more confusing than useful. Thankfully, IntelliJ IDEA has an API for writing plugins that support macros. In part 3 of this blog series we will create an IntelliJ IDEA plugin to add support for our @TalkingAnimalSpell macro annotation. See you in part 3!

Scala Macros: Part I - Def Macros

I. Introduction. What are scala macros?

Macros in Scala allow you to modify existing code, or even generate new code, before it gets compiled. They are used under the hood of many well-known libraries in the Scala ecosystem, like shapeless, Slick or Play Framework. They come in different flavors; two of them are def macros and macro annotations.

In this blog post I will show you how to easily start writing your own def macro. In part 2 I will show a more practical usage of macro annotations, and in part 3 we will build an IntelliJ IDEA plugin to support our macros.

You will find the code on my GitHub, branch part-1.

Let’s get started!

II. How do macros work?

You might think of Scala macros as a framework/API for generating and modifying Abstract Syntax Trees (ASTs).

A) What is AST?

An abstract syntax tree is a core data structure used in compilers. An early stage of compilation is lexical analysis, which scans the sequence of code elements and assigns them to tokens. The next stage is parsing, which checks the sequence of tokens against the grammar and builds a data structure based on the parsed code. This data structure is called a parse tree, and it includes every token found in the parsed string. An AST is similar, but without redundant information. Let’s look at this simple string:

1 + (2+3)

after lexical analysis, this string could look like this:

int(1) '+' '(' int(2) '+' int(3) ')'

Now the compiler needs a structure that gives it information about the grammatical relationships between those tokens, without redundant information like parentheses. The AST provides exactly that, and it could look like this:

images

Or maybe this ‘tree-like’ structure would be more intuitive:

images

After the AST is created, it is passed further down the compiler pipeline. What a macro does is “replace” the AST written by the developer with the AST produced by the macro itself; then compilation continues. So we do not give up any of the advantages of a statically typed, compiled language.
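You can inspect such a tree yourself with the runtime reflection API. A sketch assuming the scala-reflect jar is on the classpath; the AstDemo object is a made-up name:

```scala
import scala.reflect.runtime.universe._

object AstDemo {
  // Build the AST for "1 + (2 + 3)" with a quasiquote. The parentheses
  // disappear: grouping is encoded in the shape of the tree itself.
  val tree: Tree = q"1 + (2 + 3)"

  def main(args: Array[String]): Unit =
    // Prints the raw nested Apply/Select/Literal node structure.
    println(showRaw(tree))
}
```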

III. Setup

Look at build.sbt:

build.sbt
name := "macros_examples_part_1"

version := "1.0"

scalaVersion := "2.11.7"

lazy val macros_implementations = project

lazy val root = (project in file(".")).aggregate(macros_implementations).dependsOn(macros_implementations)

val paradiseVersion = "2.1.0-M5"

addCompilerPlugin("org.scalamacros" % "paradise" % paradiseVersion cross CrossVersion.full)

Macros must be compiled in a separate compilation phase (in another sbt project, in our case). I’ve created a new project called macros_implementations and made our main project depend on it. By the way, this line:

build.sbt
lazy val macros_implementations = project

is also a Scala macro. According to the sbt docs: “The name of the val is used as the project ID and the name of the base directory of the project.” So we have two projects: the main one and its dependency, macros_implementations. To use macros we also need a compiler plugin called macro paradise.

IV. Example no.1: def macros

Look at Main.scala. I’ve placed an example of macro usage there.

Main.scala
object Main extends App{
  Welcome.isEvenLog(12)
}

From the “outside”, a developer cannot tell whether a method is implemented with Scala macros or not; invoking a def macro is the same as invoking any other method. What this macro does is println a message depending on whether the argument is even or odd. In the main project directory run:

sbt run

to see the result of this method. Now let’s look closer at the implementation.

Welcome.scala
import scala.reflect.macros.blackbox.Context
import scala.language.experimental.macros

object Welcome {
  def isEvenLog(number: Int): Unit = macro isEvenLogImplementation

  def isEvenLogImplementation(c: Context)(number: c.Tree): c.Tree = {
    import c.universe._

    q"""
      if ($number%2==0){
        println($number.toString + " is even")
      }else {
        println($number.toString + " is odd")
      }
    """
  }
}

There are a few interesting lines. First we need to enable macros:

import scala.language.experimental.macros

Next, we need an API to modify the AST; we would also like a typechecker, an error logger, etc.:

import scala.reflect.macros.blackbox.Context

Macros in Scala come in two flavors: blackbox and whitebox. What is important: use blackbox ;) Seriously though, if a macro faithfully follows its type signature, and its implementation does not need to be understood to understand its behaviour, we call it a blackbox macro. Otherwise, for a whitebox macro, the type signature is only an approximation.

Next, the implementation of the macro:

object Welcome {
  def isEvenLog(number: Int): Unit = macro isEvenLogImplementation

  def isEvenLogImplementation(c: Context)(number: c.Tree): c.Tree = {
    import c.universe._

    q"""
      if ($number%2==0){
        println($number.toString + " is even")
      }else {
        println($number.toString + " is odd")
      }
    """
  }
}
  1. To specify the macro implementation we use the keyword macro followed by the implementation method’s name.

  2. The implementation method takes two parameter lists: the first is the Context mentioned earlier, and the second is the parameter as a Tree element. Tree is the node type of the AST. The return type is also Tree.

  3. Next we import the context universe, which provides the API for modifying the AST.

  4. And the main part of our implementation: building a Tree. The q"..." interpolator is called a quasiquote. It gives the developer an easier way to create and modify ASTs. I said ‘easier’, so there must also be a harder way. Let’s modify the body of our macro a little to see what q"..." produces in our case.

Welcome.scala
     ...
     val result = q"""
       if ($number%2==0){
         println($number.toString + " is even")
       }else {
         println($number.toString + " is odd")
       }
     """
     println(showRaw(result))
     result
...

Now, run the project. The printed result looks like the following:

If(Apply(Select(Apply(Select(Literal(Constant(12)), TermName("$percent")), List(Literal(Constant(2)))), TermName("$eq$eq")), List(Literal(Constant(0)))), Apply(Ident(TermName("println")), List(Apply(Select(Apply(Select(Apply(Select(Literal(Constant("12")), TermName("$plus")), List(Literal(Constant("= ")))), TermName("$plus")), List(Select(Literal(Constant(12)), TermName("toString")))), TermName("$plus")), List(Literal(Constant(" and it is even")))))), Apply(Ident(TermName("println")), List(Apply(Select(Apply(Select(Apply(Select(Literal(Constant("12")), TermName("$plus")), List(Literal(Constant("= ")))), TermName("$plus")), List(Select(Literal(Constant(12)), TermName("toString")))), TermName("$plus")), List(Literal(Constant(" and it is odd")))))))

I hope you now see what quasiquotes give you ;) Each of those If, Apply, Select objects is a Tree subclass, and you could build your AST out of them instead of using quasiquotes, but that would be frustrating when you can use q"..." and write human-readable code.
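To make the comparison concrete, here is the same small expression built both ways. A sketch assuming scala-reflect on the classpath; ManualTree is a made-up name:

```scala
import scala.reflect.runtime.universe._

object ManualTree {
  // The expression 1 + 2 built the hard way, node by node...
  val byHand: Tree =
    Apply(Select(Literal(Constant(1)), TermName("$plus")),
          List(Literal(Constant(2))))

  // ...and the easy way, with a quasiquote.
  val byQuote: Tree = q"1 + 2"

  // equalsStructure compares trees node by node.
  def same: Boolean = byHand.equalsStructure(byQuote)

  def main(args: Array[String]): Unit =
    println(same) // prints true
}
```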

To understand better what is going on in this example, let’s modify our code a little. In Main.scala write this code:

Main.scala
object Main extends App{
  val x = 2
  val y = 3
  Welcome.isEvenLog(x + y)
...

And to see the generated code, change showRaw to showCode in Welcome.scala:

Welcome.scala
   ...
   val result = q"""
     if ($number%2==0){
       println($number.toString + " is even")
     }else {
       println($number.toString + " is odd")
   }
   """
   println(showCode(result))
   result
...

Now run the project again:

sbt run

You should see the generated code in the console. Without redundant syntax, it looks like the following:

     if((x+y)%2==0){
        println((x+y).toString + " is even")
     }else{
        println((x+y).toString + " is odd")
     }

So maybe now you see better that the (number: c.Tree) parameter of isEvenLogImplementation is just a tree node. It is not an Int; it is just a token, you might say, which is spliced wherever you place it. Of course this is a kind of hidden bug (multiple evaluation), because you can imagine what happens if you provide a non-deterministic argument, like a random integer:

Main.scala
object Main extends App{
  val x = 2
  Welcome.isEvenLog(x + Random.nextInt())
...

The result will look like the following:

     if((x+Random.nextInt())%2==0){
        println((x+Random.nextInt()).toString + " is even")
     }else{
        println((x+Random.nextInt()).toString + " is odd")
     }

And the resulting print statement will often be as smart as the following:

17 is even
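The double-evaluation hazard is easy to reproduce without macros: a by-name parameter has the same "evaluate the argument on every use" semantics as splicing $number twice into the quasiquote. A plain-Scala analogy with made-up names, not the macro itself:

```scala
object MultipleEval {
  var calls = 0
  // Deterministic stand-in for Random.nextInt(): counts its own invocations.
  def nextNumber(): Int = { calls += 1; calls * 3 }

  // A by-name parameter re-evaluates its argument on every reference,
  // just like the spliced tree is re-run at every splice site.
  def isEvenLogNaive(number: => Int): Unit =
    if (number % 2 == 0) println(number + " is even")
    else println(number + " is odd")

  def main(args: Array[String]): Unit = {
    isEvenLogNaive(nextNumber())
    println(calls) // prints 2: the argument was evaluated twice
  }
}
```

The first reference tests parity, the second builds the message, so the two printed values can disagree, exactly as in the "17 is even" output above.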

The simplest fix for this issue is to explicitly evaluate the input arguments first: assign them to a val, and use those vals when generating the AST:

Welcome.scala
   val result = q"""
      val evaluatedNumber = $number
      if (evaluatedNumber%2==0){
        println(evaluatedNumber.toString + " is even")
      }else {
        println(evaluatedNumber.toString + " is odd")
      }
   """

For a closer look at quasiquotes, see their docs.

V. Summary of part 1

I hope this simple example encourages you to experiment with def macros. If you have used Play Framework, then you have probably parsed JSON too, so look at the source of the Play JSON API and check how macros are used there to reduce boilerplate. There was a post about this JSON API, available here. A few useful links where you can find more extensive and formal answers on the topic:

Def macro in docs

Macro annotations in docs

Context in docs

In part 2 of this blog series I will describe macro annotations and give a few more examples of Scala macro usage.

Akka Http Rest Api

I. Intro

In this post I will walk through a simple REST API built with Akka HTTP, Slick and PostgreSQL. Here is the git repo: Github repo

A) What is Akka HTTP?

Docs: Akka http docs

Zalando presentation: Zalando

A much more advanced Akka HTTP example, which was a great help to me when building mine: ArchDev example

B) What is Slick?

Docs: Slick docs

Fantastic presentation by Stefan Zeiger: Stefan Zeiger presentation

II. Setup

A) Project structure

This is the structure of the sbt project. Check build.sbt to see which dependencies are used in this project. The whole structure should look similar to this:

|-- akkaRestApi
    |-- build.sbt
    |-- project
    |-- src
    |   |-- main
    |   |   |-- java
    |   |   |-- resources
    |   |   |   |-- application.conf.example
    |   |   |   |-- db
    |   |   |   |   |-- migration
    |   |   |   |       |-- V1__Create_users_table.sql
    |   |   |   |       |-- V2__Create_posts_table.sql
    |   |   |   |       |-- V3__Create_comments_table.sql
    |   |   |   |-- public
    |   |   |       |-- index.html
    |   |   |-- scala
    |   |   |   |-- Main.scala
    |   |   |   |-- Routes.scala
    |   |   |   |-- api
    |   |   |   |   |-- ApiErrorHandler.scala
    |   |   |   |   |-- CommentsApi.scala
    |   |   |   |   |-- PostsApi.scala
    |   |   |   |   |-- UsersApi.scala
    |   |   |   |-- dao
    |   |   |   |   |-- BaseDao.scala
    |   |   |   |   |-- CommentsDao.scala
    |   |   |   |   |-- PostsDao.scala
    |   |   |   |   |-- UsersDao.scala
    |   |   |   |-- mappings
    |   |   |   |   |-- JsonMappings.scala
    |   |   |   |-- models
    |   |   |   |   |-- Comment.scala
    |   |   |   |   |-- Post.scala
    |   |   |   |   |-- User.scala
    |   |   |   |   |-- package.scala
    |   |   |   |   |-- definitions
    |   |   |   |       |-- CommentsTable.scala
    |   |   |   |       |-- PostsTable.scala
    |   |   |   |       |-- UsersTable.scala
    |   |   |   |-- utils
    |   |   |       |-- Config.scala
    |   |   |       |-- DatabaseConfig.scala
    |   |   |       |-- MigrationConfig.scala
    |   |   |-- scala-2.11
    |   |-- test
    |       |-- java
    |       |-- resources
    |       |-- scala
    |       |   |-- BaseServiceSpec.scala
    |       |   |-- CommentsApiSpec.scala
    |       |   |-- PostsApiSpec.scala
    |       |   |-- UsersApiSpec.scala
    |       |-- scala-2.11

B) Database

I use PostgreSQL. An example database configuration can be found in src/main/resources/application.conf.example, so create your own application.conf inside src/main/resources/.
The schema is as simple as it could be:

images

I use Flyway for database migrations, so you can preview the .sql files in src/main/resources/db/migration/.

C) Project configuration

Main.scala is the entry point to our application:

object Main extends App with Config with MigrationConfig with Routes{
  private implicit val system = ActorSystem()
  protected implicit val executor: ExecutionContext = system.dispatcher
  protected val log: LoggingAdapter = Logging(system, getClass)
  protected implicit val materializer: ActorMaterializer = ActorMaterializer()

  migrate()

  Http().bindAndHandle(handler = logRequestResult("log")(routes), interface = httpInterface, port = httpPort)
}

As you can see, there is initialization of the required variables: we start the ActorSystem and create an executor and a materializer. There is also the binding of HTTP requests to our routes; we will come back to it later.

The Main object extends many traits: App, of course, but also others like Config.

trait Config {
  private val config = ConfigFactory.load()
  private val httpConfig = config.getConfig("http")
  private val databaseConfig = config.getConfig("database")
  val httpInterface = httpConfig.getString("interface")
  val httpPort = httpConfig.getInt("port")

  val databaseUrl = databaseConfig.getString("url")
  val databaseUser = databaseConfig.getString("user")
  val databasePassword = databaseConfig.getString("password")
}

I will not copy the documentation: Akka configuration docs

Next file is MigrationConfig.scala:

trait MigrationConfig extends Config {

  private val flyway = new Flyway()
  flyway.setDataSource(databaseUrl, databaseUser, databasePassword)

  def migrate() = {
    flyway.migrate()
  }

  def reloadSchema() = {
    flyway.clean()
    flyway.migrate()
  }
}

Here we have the initialization of the Flyway object, configured with the proper database values (they come from Config.scala). There are two methods: one for running migrations and a second for reloading the schema.

And the last ‘extension’ is Routes.scala

trait Routes extends ApiErrorHandler with UsersApi with PostsApi with CommentsApi{
  val routes =
    pathPrefix("v1") {
      usersApi ~
      postsApi ~
      commentsApi
    } ~ path("")(getFromResource("public/index.html"))
}

As you might guess, this is the highest level of our API routes. It specifies a common prefix and a default route. Routes extends several traits, which we will come back to later.

III. Rest Api

OK, so what about that Routes.scala? It is our ‘dispatcher’, which will match HTTP requests to the proper actions. Let’s focus on ApiErrorHandler, which trait Routes extends:

trait ApiErrorHandler {
  implicit def myExceptionHandler: ExceptionHandler = ExceptionHandler {
    case e: NoSuchElementException =>
      extractUri { uri =>
        complete(HttpResponse(NotFound, entity = s"Invalid id: ${e.getMessage}"))
      }
  }
}

I made this implicit because I want every error I expect to happen in my application to be recovered in one place. I can do this because each layer of my application returns a Future, so everything is wrapped up in this monad: our errors will be safely wrapped, and at the end of the world (in our case, ApiErrorHandler) we recover them. NoSuchElementException will be thrown, for example, when you try to GET a user with a non-existing id. And because this is implicit, all I need to do is extend that trait, and the recovery code is in the proper scope.

Exception handling in docs: Akka exception handling docs

Now, lets look at Users api:

trait UsersApi extends JsonMappings{
  val usersApi =
    (path("users") & get ) {
       complete (UsersDao.findAll.map(_.toJson))
    }~
    (path("users"/IntNumber) & get) { id =>
        complete (UsersDao.findById(id).map(_.toJson))
    }~
    (path("users") & post) { entity(as[User]) { user =>
        complete (UsersDao.create(user).map(_.toJson))
      }
    }~
    (path("users"/IntNumber) & put) { id => entity(as[User]) { user =>
        complete (UsersDao.update(user, id).map(_.toJson))
      }
    }~
    (path("users"/IntNumber) & delete) { userId =>
      complete (UsersDao.delete(userId).map(_.toJson))
    }
}

Everything here should be self-explanatory: you’ve got a path, a REST method definition, and the response once the DAO method completes. The UsersDao methods return a Future, so we have to ‘unpack’ the value using map, and we need to transform the result to JSON. You should notice that UsersApi extends JsonMappings:

trait JsonMappings extends DefaultJsonProtocol {
  implicit val userFormat = jsonFormat5(User)
  implicit val postFormat = jsonFormat4(Post)
  implicit val commentFormat = jsonFormat4(Comment)
}

Here I placed all the JSON format objects, which implicitly transform objects to JSON. Without them you will get a compilation error, because if you look at the signature of the toJson() method:

def toJson(implicit writer: JsonWriter[T]): JsValue

It expects implicit object of type JsonWriter and this part:

implicit val userFormat = jsonFormat5(User)

creates a JsonWriter[User] and a JsonReader[User], so our API methods can transform Scala objects to JSON and JSON to Scala objects.

OK, the last layer is the DAO. Let’s look at UsersDao:

object UsersDao extends BaseDao{
  def findAll: Future[Seq[User]] = usersTable.result
  def findById(userId: UserId): Future[User] = usersTable.filter(_.id === userId).result.head
  def create(user: User): Future[UserId] = usersTable returning usersTable.map(_.id) += user
  def update(newUser: User, userId: UserId): Future[Int] = usersTable.filter(_.id === userId)
    .map(user => (user.username, user.password, user.gender, user.age))
    .update((newUser.userName, newUser.password, newUser.gender, newUser.age))

  def delete(userId: UserId): Future[Int] = usersTable.filter(_.id === userId).delete
}

This should look familiar to you even if you have not used Slick: we are building SQL queries and sending them to the database. When you look in the docs, you will see that each Slick statement, like this one for example:

usersTable.filter(_.id === userId).result.head

should be run within the db.run() method (where db is the config variable from DatabaseConfig.scala). So those DAO methods should look like this:

db.run(usersTable.filter(_.id === userId).result.head)

and so on. It would be tedious to write this boilerplate code every time. This is why we have implicits :)

Look at BaseDao, which UsersDao extends:

trait BaseDao extends DatabaseConfig {
  val usersTable = TableQuery[UsersTable]
  val postsTable = TableQuery[PostsTable]
  val commentsTable = TableQuery[CommentsTable]

  protected implicit def executeFromDb[A](action: SqlAction[A, NoStream, _ <: slick.dbio.Effect]): Future[A] = {
    db.run(action)
  }
  protected implicit def executeReadStreamFromDb[A](action: FixedSqlStreamingAction[Seq[A], A, _ <: slick.dbio.Effect]): Future[Seq[A]] = {
    db.run(action)
  }
}

Here we initialize the TableQuery objects, which are representations of the database tables, along with some implicit conversion methods. These methods transform the results of other methods. But which ones? Those that are declared to return Future[A] or Future[Seq[A]] but actually produce a SqlAction or FixedSqlStreamingAction (the results of building a query in Slick; our DAO methods are examples of those). The implicit methods apply db.run to these SqlAction and FixedSqlStreamingAction objects, and this is how we avoid writing db.run(…) every time we want to call a DAO method.

IV. Tests

I tested these API methods using ScalaTest. Check it out ;)

V. Conclusion

Akka HTTP is an additional layer on top of Akka that helps you communicate with the actor system via HTTP requests. With a little effort you can build a web service easily. Akka HTTP is very flexible and lightweight, so it is not some kind of ‘full-stack’ framework, but maybe you should consider using it in your future project.

Thanks for checking out this post. If you have any questions, feel free to comment or contact me ;)

Reactive Autocomplete With Polymer and RxJS

I. Intro

In my first blog post I will try to give a basic introduction to Polymer and RxJS.

A) What is Polymer?

Polymer is a library for creating web components. Web components are reusable elements: containers with their own isolated API, templates, and style. You may think about them like AngularJS directives, but the difference between AngularJS and Polymer is fundamental. Polymer is a library that targets one task, while Angular is a framework for building whole apps. Moreover, Polymer is built on top of web components. You can of course use Polymer inside an Angular project, and you can build an app based only on web components, but these tools target different problems in web development.

B) What is Rx?

Rx is, according to the main Rx page, “an API for asynchronous programming with observable streams”.

C) Assumptions

In the following example we will create a component for searching. As we type, we should get a list of the best suggestions. We would like it to be reusable, with its own encapsulated style and, most importantly, its own API. This is why we chose Polymer. Search elements seem simple, but they have some interesting corner cases. Some of them:

  • User types something; the component sends a request only if the user has provided text long enough to search.

  • User types some phrase very quickly, for example ‘polymer tutorial’. Our component should be intelligent enough to send only one request to the server, with the param ‘polymer tutorial’, rather than a request after each letter the user types. This seems simple; maybe we need some timeout. Probably we don’t need to pull in an external library like Rx to achieve this.

  • User types some phrase, ‘polymer tutorial’, and a request goes to the server. Then (before or after the first request finishes, it doesn’t matter) the user adds more letters, for example the words ‘for beginners’, but after a moment he realises he is a Polymer king and doesn’t need this ‘for beginners’ part, so he quickly deletes it. Our search component should know that people make mistakes, and it should not make another request, since it would carry the same param: ‘polymer tutorial’.

  • User types some phrase, ‘polymer’, and a request goes to the server; then (before the first request finishes) he appends a new phrase to the existing one: ‘tutorial’. How can we guarantee that the second request (with param ‘polymer tutorial’) finishes after the first one (with param ‘polymer’), so that the user gets the expected suggestions?

Especially the last case seems non-trivial. And this is where Rx fits very well: as we will see later, the concept of observables (a maybe more intuitive name is ‘streams’) gives us an abstraction for dealing with such problems.

II. Setup

I assume that you have git, Node, and Bower installed.

A) Clone from github

Before cloning the project, install polyserve; it will run a localhost server for us:

npm install -g polyserve

Then clone the project:

git clone -b initial-setup https://github.com/BBartosz/polymer-rx-tutorial.git

And finally run:

bower install

In your search-component directory, run:

polyserve

You should have the project running on port 8080, so go to http://localhost:8080/components/search-component/. We can start the fun part now.

B) Create project from scratch

Another option is to create a directory named ‘search-component’, then download and run seed-element, which is the default template for building Polymer web components. Follow the instructions in the Polymer docs up to this section: https://www.polymer-project.org/1.0/docs/start/reusableelements.html#develop-and-test I encourage you to preview this template. After running (in the /search-component directory, where you should place the unzipped seed-element files)

bower install

and then:

polyserve

you should have a server started on port 8080. When you go to http://localhost:8080/components/seed-element/ you should see the default docs and demo page for the seed-element component. You can read how it should be used, what its API looks like, and preview it in action.

So after the review, we can now safely delete seed-element.html and create a new file, search-component.html. It should look like this:

search-component.html
<link rel="import" href="../polymer/polymer.html">

<!--
An element for searching.

Example:

    <search-component></search-component>

@demo
-->
<script>
    Polymer({
        is: 'search-component',
        properties: {},

        // Element Lifecycle

        ready: function() {}
    });
</script>

You should also rename all occurrences of seed-element to search-component, then go to http://localhost:8080/components/search-component/ to check that you did it right. Restart the server to be sure.

Run project

In your search-component directory, run:

polyserve

You should have the project running on port 8080. We can start the fun part now.

III. Closer look at polymer component

First of all we have to register our new component.

search-component.html
<link rel="import" href="../polymer/polymer.html">

<script>
    Polymer({
        is: 'search-component',
        properties: {},
        ready: function() {
            alert("Component loaded!");
        }
    });
</script>

You can find this info in the docs, but I will repeat it anyway. Registering an element associates the name of the element with a prototype, so you can add properties and methods to your component. As you see, the Polymer(arg) function takes as its argument an object that defines your element’s prototype. We need to import Polymer, then in a script we can initialize our component. We do this by specifying its name, which must contain a “-”. This is because the HTML5 specification says custom components must contain a “-”, while native HTML components don’t have to.
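The dash rule is mechanical enough to check with a short sketch. The helper below is hypothetical and uses a simplified pattern rather than the full spec grammar (which, among other things, also excludes a few reserved hyphenated names such as font-face):

```javascript
// Simplified validity check for custom element names (a sketch, not the
// full HTML spec grammar): lowercase start, and at least one dash.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

isValidCustomElementName('search-component'); // true: contains a dash
isValidCustomElementName('searchcomponent');  // false: no dash, so it could clash with native elements
```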

Next we see ready, which is a lifecycle callback: when your component is loaded, the function assigned to ready is fired. To see this component in action, go to demo/index.html and look at how it is initialized.

So we have our component, but it is invisible. We can add a local DOM, which will be encapsulated. To do this we have to wrap it inside <dom-module> with an id equal to the name defined inside Polymer(args). Inside dom-module we can have <script> tags as well as <template> tags. In <script> we define logic and behavior, and in <template> we define… the HTML template.

search-component.html
<link rel="import" href="../polymer/polymer.html">
<!--
An element for searching.

Example:

    <search-component></search-component>

@demo
-->
<dom-module id="search-component">
    <template>
        <input type="text" placeholder="Search something">
    </template>

    <script>
        Polymer({
            is: 'search-component',
            properties: {},

            ready: function() {

            }
        });
    </script>
</dom-module>

Before we go any further, let’s review another feature of components: properties. One of the tasks of properties is to make your component generic. They are values you can pass to your component when you initialize it on a page. You can set a property’s type and default value, and you can even observe its changes. Properties can also be used for data binding, as you will see later.

In our example we can already extract a property: we could parameterize the placeholder in the input tag. This is how you define a property:

search-component.html
...
    <script>
        Polymer({
            is: 'search-component',
            properties: {
                inputPlaceholder: {
                    type: String,
                    value: "Default placeholder text"
                }
            },

            ready: function() {

            }
        });
    </script>
</dom-module>

As you can see, we set the type (String) in the declaration of inputPlaceholder. This tells Polymer how to deserialize the passed value; for us it is a hint about how to pass values to the component. The value attribute specifies the default used when nothing is passed to inputPlaceholder.

Now we have to place this property somewhere inside the DOM:

search-component.html
<dom-module id="search-component">
    <template>
        <input type="text"
                       id="searchInput"
                       placeholder="[[inputPlaceholder]]"
                       autofocus>
    ...

We also added autofocus, to have this input focused after a page refresh.

[[somePropertyName]] creates a one-way data binding.

{{somePropertyName}} creates a one-way or two-way data binding, depending on whether the property called somePropertyName is configured for two-way data binding.

More about data binding: https://www.polymer-project.org/1.0/docs/devguide/data-binding.html

Providing an attribute value to the component looks like the following:

demo/index.html
...
    <body>
        <search-component input-placeholder="Reactive search">

        </search-component>
    </body>
...

As you can see, an attribute name with a dash passed to the component is converted to camel case:

input-placeholder => inputPlaceholder

If you pass a camel-case attribute to the component, it will be converted to lowercase:

someOtherAttribute => someotherattribute
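The dash-to-camel-case direction can be sketched in a few lines. dashToCamelCase is a hypothetical helper that mirrors the conversion described above; it is not Polymer’s actual internal function:

```javascript
// Sketch of the attribute-name-to-property-name mapping: each '-x'
// becomes an uppercase 'X'.
function dashToCamelCase(attribute) {
  return attribute.replace(/-([a-z])/g, function (match, letter) {
    return letter.toUpperCase();
  });
}

dashToCamelCase('input-placeholder'); // → 'inputPlaceholder'
```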

More about declaring properties can be found in the docs: https://www.polymer-project.org/1.0/docs/devguide/properties.html

Next we need to declare a property to keep all the elements we may get as a search result. I will name that property searchResults; its type will be an array, and its default value an array with 3 elements, just to simulate some results.

search-component.html
...
    <script>
        Polymer({
            is: 'search-component',
            properties: {
                inputPlaceholder: {
                    type: String,
                    value: "Default placeholder text"
                },
                searchResults: {
                    type: Array,
                    value: ["result1", "result2", "result3"]
                }
            },

            ready: function() {

            }
        });
    </script>
</dom-module>

Then we would like to show those results somehow. To do it, we use a template tag specified as the ‘repeatable’ part of the template. This is the way to iterate over a collection in Polymer: you add a template tag, declare it as a dom-repeat element, pass a collection to the items property, and then you can do with each item what you want.

search-component.html
<dom-module id="search-component">
    <template>
        <input type="text"
               id="searchInput"
               placeholder="[[inputPlaceholder]]"
               autofocus>

        <template id="resultList" is="dom-repeat" items="{{searchResults}}">
            <li>{{item}}</li>
        </template>
    </template>
    ...

So now, previewing our component, you should see a simple input with 3 static, hardcoded options.

It’s time to get some real data. Clear the default value of the searchResults property:

search-component.html
...
    searchResults: {
         type: Array,
         value: []
    }
...

We need to install jQuery and RxJS in our project. To do this:

bower install jquery --save
bower install rxjs --save

This command will install both libraries (the --save flag adds the dependency to bower.json).

Import those two libraries into our component:

search-component.html
<link rel="import" href="../polymer/polymer.html">
<script src="../jquery/dist/jquery.js"></script>
<script src="../rxjs/dist/rx.lite.js"></script>
...

IV. Using RxJs to get search result

The bread and butter of our component: RxJS!

OK, so let’s explain it line by line.

search-component.html
...
ready: function() {
    var self = this;
    var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
...

Here we are creating an observable (or a stream, as you wish). An observable is a mix of two design patterns known from software engineering: iterator and observer. Why do we need this? In the age of big data, ‘big’ doesn’t only mean a huge amount; it also means different sources of data. You might treat a big file as data, a database with tables and rows as data, social media notifications and events created by the user as data. It would be great to treat all of these in a similar way, to have some abstraction that helps us deal with such an amount and variety of data.

We already know something that could be useful. The iterator is a simple design pattern: take a collection (no matter which collection) and give me its next element as long as it has one. This is nice; we would like to use something like an iterator for the kinds of data mentioned above without worrying about the kind of data, and that is a great abstraction. You might think of the iterable collection as the data producer and of yourself, asking for the next element, as the consumer. There is one big problem, though: the iterator has no notion of time. It works only for ‘synchronous’ collections, which is unacceptable since we are dealing with web-related problems. Moreover, the iterator throws an error if something unpredictable or unacceptable happens. This is not a ‘happy path’ where we could forget about tedious problems.

But there is the observer pattern, a pretty similar thing. You give a callback to the data producer and it calls you back, observing changes. And this repeats after… after what? This is one of the missing pieces: there is no automatic way of telling the producer ‘no more data, no more observations!’. Moreover, the observer pattern has a lot of downsides; it breaks good software engineering principles like encapsulation, and so on. These two patterns are about the same thing: sending data to a consumer. And data means whatever data: a list of ints, a collection of events, chunks of some big file. But the two patterns were not connected with each other, and they were not sufficient in their basic forms (the iterator works only for synchronous collections). This is why the observable was created: to mix the observer and the iterator and use the result for asynchronous data. From an observable we can get the next element, and elements may occur asynchronously. The producer can say ‘that’s all I have’, and then the onComplete() method, implemented by us to match our needs, will be fired; the same goes for onError(). We can think of an observable as a timeline carrying at most 3 kinds of marks: events, errors, and completion. In the diagram below, events are letter marbles, the error is marked with an x, and completion with a vertical line.

—-e—e–e—–e-x-|—>

So how does it work? If the user types something, for example ‘polymer’, how would our observable look?

—e—e—e—e—e—e—e—>

where ‘e’ is an event bound to our stream. Now we have an abstraction, the observable, which keeps the main feature of the iterator pattern: give me the next element!

search-component.html
  ...
          var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
          observable.subscribe(
              function (event) {
                  console.log(event);
              },
              function (error) {
                  console.log("Something gone wrong: " + error);
              },
              function (e) {
                  console.log("Finally completed!");
              }
          );
  ...

subscribe() subscribes an observer to an observable; this is the way the iterator is mixed with the observer. As you see, it may take 3 parameters. The first is a function (for this case, see the docs, link below) fired when taking the next element, the second is fired when something goes wrong, and the last is fired on completion. subscribe() has an alias: forEach(). See the docs for a more detailed explanation: https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/subscribe.md For our purposes we can get rid of the onError and onComplete functions.

search-component.html
...
     var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
     observable.subscribe(
         function (event) {
             console.log(event);
         }
     );
...

Now we see in our console that typing on the keyboard produces events, and here we discover a great feature of observables: composability. Before we take the next element of the observable, we would like to prepare it to fit our needs. We need to map our stream:

search-component.html
...
   var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
   var subscription = observable.map(function (e) {
       return e.target.value;
   }).subscribe(function (event) {
        console.log(event);
      }
   );
...

So map takes a function as a parameter and applies it to each element in the observable.

We return e.target.value in the function passed to map because that is the way to get the input’s value after a key press. Now our observable looks like this:

—p—po—pol—poly—polym—polyme—polymer—>

Why is that? The user presses ‘p’, so the observable looks like this:

—p—>

After the transformation it looks the same, because at that moment the value of the input is a single letter, p:

—p—>

The user types another letter, ‘o’, so our transformed observable looks like this:

—p—po–>

because when the user typed ‘o’, the value of the input was ‘po’, and so on.
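The synchronous analogue of this step is Array.prototype.map. Assuming keyup events shaped like DOM events (an object carrying target.value), the same transformation over a plain array looks like this:

```javascript
// Plain-array analogue of the observable map step: extract the input's
// current value from each (modelled) keyup event.
var events = [
  { target: { value: 'p' } },
  { target: { value: 'po' } },
  { target: { value: 'pol' } }
];

var values = events.map(function (e) { return e.target.value; });
// values is now ['p', 'po', 'pol']
```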

Now let’s add a feature: our search should not send requests when the text is 2 characters long or shorter.

search-component.html
...
  var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
  var subscription = observable.map(function (e) {
      return e.target.value;
  }).filter(function (inputText) { return inputText.length > 2 })
    .subscribe(function (event) {
       console.log(event);
     }
  );
...

If you are familiar with any functional programming API, this is probably obvious for you; if you are not, it is probably obvious too ;). filter takes a function as a parameter. This function receives each element of our stream and must return a Boolean. If the result is true, the element passes into the new, filtered observable; otherwise the element is dropped from the observable. So we have handled the first problem mentioned at the beginning of this text.
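Again there is a plain-array analogue, Array.prototype.filter, with exactly the predicate used above:

```javascript
// Plain-array analogue of the filter step: only texts longer than 2
// characters survive, so no request fires for 'p' or 'po'.
var typed = ['p', 'po', 'pol', 'poly', 'polymer'];

var searchable = typed.filter(function (t) { return t.length > 2; });
// searchable is ['pol', 'poly', 'polymer']
```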

Now let’s solve one of our initial problems with search. The user types ‘polymer’ very quickly; we don’t want to call the server for each keystroke, as we would in the existing solution. While typing, you can see in the console that each typed letter reaches subscribe().

Let’s use the debounce() method:

search-component.html
...
 ready: function() {
    var self = this;
    var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
    var subscription = observable.map(function (e) {
       return e.target.value;
    }).filter(function (inputText) { return inputText.length > 2 })
    .debounce(300)
    .subscribe(function (event) {
       console.log(event);
    }
);
...

How does it work? After map() and filter(), when the user types ‘polymer’ very quickly, we have this observable (a dash ‘-’ means 100ms on our timeline):

–po-pol-poly-polym-polyme-polymer- - ->

debounce(ms) says: ‘I will pass a value along only once the time specified in my parameter has elapsed after an event occurs in the observable.’ After ‘po’ there were 100ms, after ‘pol’ the same, and so on, so debounce says: ‘Hold your horses! Once the specified time has passed, I will take the current value from the observable.’

Only after typing ‘polymer’ do 300ms pass, so the resulting observable after debounce looks like this:

—-polymer—>

This is nice, working as expected; check it out in the console.
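If you want to convince yourself of the timing rule without timers, debounce can be modelled purely. This is my own simplified model, not RxJS code: events are timestamped values, and a value survives only if no newer event arrives within the wait window:

```javascript
// Pure model of debounce: events are [timestampMs, value] pairs; a value is
// emitted only when no newer event arrives within `wait` ms after it.
function debounceModel(events, wait) {
  var emitted = [];
  for (var i = 0; i < events.length; i++) {
    var next = events[i + 1];
    if (!next || next[0] - events[i][0] >= wait) {
      emitted.push(events[i][1]);
    }
  }
  return emitted;
}

// Typing quickly (100ms apart) emits only the final value:
debounceModel([[0, 'po'], [100, 'pol'], [200, 'polyme'], [300, 'polymer']], 300);
// → ['polymer']
```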

Now look at how we specified our observable:

search-component.html
...
 ready: function() {
    var self = this;
    var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
    ...
);
...

It is bound to the ‘keyup’ event, so when the user presses an arrow key, an event is added to the resulting observable. That is useless for our purposes: we are interested only in actual changes to the text in the input field. Moreover, think about the situation described above in paragraph C). The user types ‘polymer’ and a request goes to the server; then he adds ‘for beginners’, but before the request is made (so before the time specified in debounce passes), he deletes the ‘for beginners’ phrase and is left with ‘polymer’, as before. He already made a request with the term ‘polymer’, so we don’t want to make it one more time; it would be unnecessary. RxJS gives us the function distinctUntilChanged(), which filters out an element of an observable if it is the same as the previous element. It solves both problems described in this paragraph.

search-component.html
...
    ready: function() {
        var self = this;
        var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
        var subscription = observable.map(function (e) {
            return e.target.value;
        }).filter(function (inputText) { return inputText.length > 2 })
          .debounce(500)
          .distinctUntilChanged()
          .subscribe(function(e) {
            console.log(e)
          });
...
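The dedup step also has a simple pure model (again my own sketch, not RxJS internals): drop any element equal to its immediate predecessor:

```javascript
// Pure sketch of distinctUntilChanged over an array: an element passes
// only if it differs from the one right before it.
function distinctUntilChangedModel(values) {
  return values.filter(function (v, i) {
    return i === 0 || v !== values[i - 1];
  });
}

// The 'typed and deleted back' scenario: two consecutive 'polymer' values
// collapse into one, so no duplicate request is made.
distinctUntilChangedModel(['polymer', 'polymer', 'polymer tutorial']);
// → ['polymer', 'polymer tutorial']
```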

It’s time to write a function that calls the Wikipedia server, and to invoke it when needed.

search-component.html
...
 searchWikipedia: function searchWikipedia(term) {
     return $.ajax({
         url: 'http://en.wikipedia.org/w/api.php',
         dataType: 'jsonp',
         data: {
             action: 'opensearch',
             format: 'json',
             search: term
         }
     }).promise();
 },

 ready: function() {
    var self = this;
    var observable = Rx.Observable.fromEvent(this.$.searchInput, 'keyup');
    var subscription = observable.map(function (e) {
       return e.target.value;
    }).filter(function (inputText) { return inputText.length > 2 })
    .debounce(500)
    .distinctUntilChanged()
    .flatMapLatest(self.searchWikipedia)
    .subscribe(function (event) {
       console.log(event);
    }
);
...

How does flatMap() work? Similarly to map(): it applies a function to each element in the observable. This function returns a new observable or a Promise (our case); flatMap() then runs it and merges/flattens the resulting elements into the final observable. To see a simple result, try this code in a simple script file or even in JSBin. We will use the interval() method:

https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/core/operators/interval.md

someScriptJs.js
var observable = Rx.Observable.interval(200).take(10);
observable.subscribe(function (x) {
   return console.log(x);
});

Every 200ms the observable produces the next number, up to 9. We can now map this observable:

someScriptJs.js
var observable = Rx.Observable.interval(200).take(10).map(function (x) {return x+1;});
observable.subscribe(function (x) {
   return console.log(x.toString());
});

Now we transform the original observable, adding 1 to each number. In diagrams:

0–1–2–3–4—5–6–7–8–9|>

After mapping: 1–2–3–4—5–6–7–8–9–10|>

Nothing new. But what if we wanted to map using an asynchronous function? Let’s try it:

someScriptJs.js
var source = Rx.Observable.interval(100).take(10).map(function (x) {
   return Rx.Observable.interval(10)
});

source.subscribe(function (x) {
   return console.log(x.toString());
});

And the result in the console is “[object Object]”. We can now use the mergeAll() method to ‘unpack’ this nested object:

someScriptJs.js
var source = Rx.Observable.interval(100).take(10).map(function (x) {
   return Rx.Observable.interval(10).take(1)
}).mergeAll();

source.subscribe(function (x) {
   return console.log(x.toString());
});

You might consider flatMap() a shorthand for this mix of map() and mergeAll(), used mostly when many asynchronous objects are nested in your observable. The solution with flatMap() looks like this:

someScriptJs.js
var source = Rx.Observable.interval(100).take(10).flatMap(function (x) {
   return Rx.Observable.interval(10).take(1)
});

source.subscribe(function (x) {
   return console.log(x.toString());
});

Back to our autocomplete. We used flatMapLatest() here. Why? Because one of the biggest problems of our autocomplete is that we have no guarantee that the result of the first request will return before the result of a request sent a little later. flatMapLatest() works much like flatMap(), except that when a new item is emitted by the original (source) observable, the observable generated from the previously emitted item is discarded: flatMapLatest() unsubscribes from it and begins mirroring only the current, latest one. As you see, a non-trivial problem solved with a one-liner. Try commenting out debounce() and distinctUntilChanged() and play with flatMap() and flatMapLatest(); you will get the difference for sure.
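The ‘latest wins’ behaviour can be modelled without RxJS as well. This is a deliberately simplified, synchronous model of flatMapLatest’s semantics, not its real implementation: each issued search term gets an increasing id, and a response is delivered only if no newer term was issued before it arrived:

```javascript
// Pure model of flatMapLatest: replay a log of issued terms and arriving
// responses; only the response for the most recently issued term survives.
function flatMapLatestModel(log) {
  var latestIssued = 0;
  var delivered = [];
  log.forEach(function (entry) {
    if (entry.type === 'issue') {
      latestIssued = entry.id;        // a new term supersedes older requests
    } else if (entry.id === latestIssued) {
      delivered.push(entry.value);    // only the latest request's response gets through
    }
  });
  return delivered;
}

// 'polymer' (id 1) is issued, then 'polymer tutorial' (id 2); id 1's
// response arrives late and is dropped:
flatMapLatestModel([
  { type: 'issue', id: 1 },
  { type: 'issue', id: 2 },
  { type: 'response', id: 2, value: 'results for polymer tutorial' },
  { type: 'response', id: 1, value: 'results for polymer' }
]);
// → ['results for polymer tutorial']
```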

This is mostly the end of using RxJS in our example. I encourage you to look at the docs; they are a great source of knowledge, written in an easy-to-understand way. There are also many other examples that will help you understand the concept of observables.

V. Back to Polymer again

The main RxJS part is finished; now let’s polish this component a little bit and make it useful.

In the property definitions we declared ‘searchResults’, which is an array. We should populate it with results after the user types new phrases. To do this we need to use Polymer’s ‘set’ method. By the way, Polymer has its own API for updating properties that are arrays, and you can find those methods in the docs:

https://www.polymer-project.org/1.0/docs/devguide/properties.html#array-mutation

search-component.html
...
.flatMapLatest(self.searchWikipedia)
.subscribe(function(e) {
    self.set('searchResults', e[1]);
});
...

this.set(‘nameOfProperty’, value) sets the value of a property. Here we set searchResults to the second element of the resulting observable item. Why the second element? Because Wikipedia sends its response in a form where the element at index 1 is the array of matching titles. Now you should see how our search input works.
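To make the `e[1]` indexing concrete, here is a hypothetical sample of what an opensearch response looks like (shape only; real results and the elided fields vary):

```javascript
// Hypothetical sample of a Wikipedia opensearch response (shape only).
// The payload is an array where index 1 holds the matching titles --
// that is why we read e[1] above.
var response = [
    'polym',                                      // 0: the search term
    ['Polymer', 'Polymerase', 'Polymerization'],  // 1: matching titles
    ['...', '...', '...'],                        // 2: short descriptions
    ['...', '...', '...']                         // 3: article URLs
];

var titles = response[1];
console.log(titles);  // ['Polymer', 'Polymerase', 'Polymerization']
```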

Now we should parameterize it a little. For example, the debounce method can take a parameter from outside, so the debounce value can be provided by the user of our component. We will call it ‘timeout’ because ‘debounce’ is a reserved name in Polymer; incidentally, it is the name of a Polymer method that does a similar thing.

search-component.html
...
<script>
Polymer({
    is: 'search-component',
    properties: {
        inputPlaceholder: {
            type: String,
            value: "Default placeholder text"
        },
        searchResults: {
            type: Array,
            // use a function so each element instance gets its own array
            value: function() { return []; }
        },
        timeout: {
            type: Number,
            value: 500
        }
    },
...
        .filter(function (inputText) { return inputText.length > 2 })
        .debounce(self.timeout)
        .distinctUntilChanged()

The default value is set to 500 ms. Let’s use it:

demo/index.html
...
<body>
    <search-component input-placeholder="Reactive search" timeout="100">
...

Now we can add some Polymer sugar to our component. Let’s make it more ‘material design’. Go to https://elements.polymer-project.org/ and browse the available components. We will use paper-input. Download it:

bower install --save PolymerElements/paper-input#^1.0.0

And import it into our component:

search-component.html
<link rel="import" href="../polymer/polymer.html">
<link rel="import" href="../paper-input/paper-input.html">
...

And use it much like the plain input. Everything you need to know is on the docs page from which you downloaded the component. I changed placeholder to label to get a fancy animation.

search-component.html
<dom-module id="search-component">
    <template>
        <paper-input type="text"
               id="searchInput"
               value="{{searchTerm}}"
               label="[[inputPlaceholder]]"
               autofocus>
        </paper-input>

Now it is time to add a better dropdown. We will build it using the following components: paper-button, paper-material, iron-collapse, paper-item. Check them out in the elements catalog.

bower install --save PolymerElements/iron-collapse#^1.0.0 PolymerElements/paper-button#^1.0.0 PolymerElements/paper-item#^1.0.0 PolymerElements/paper-material#^1.0.0

Now add this dropdown to our component:

search-component.html
        ...
        </paper-input>
        <iron-collapse id="collapse">
            <paper-material>
                <div class="collapse-content">
                    <template id="resultList" is="dom-repeat" items="{{searchResults}}">
                        <paper-item>
                            <paper-button value="{{item}}">{{item}}</paper-button>
                        </paper-item>
                    </template>
                </div>
            </paper-material>
        </iron-collapse>
        ...

Set an observer function on the searchResults property:

search-component.html
        ...
        searchResults: {
            type: Array,
            // use a function so each element instance gets its own array
            value: function() { return []; },
            observer: "_resultsChanged"
        },
        ...

and define this function, which will toggle the dropdown when needed:

search-component.html
        ...
             timeout: {
                type: Number,
                value: 500
            }
        },
        _resultsChanged: function(results) {
            var collapse = this.$.collapse;
            if (results.length > 0 && !collapse.opened) {
                this.$.resultList.render();
                collapse.toggle()
            } else if (results.length == 0 && collapse.opened) {
                collapse.toggle()
            }
        },
        searchWikipedia: function searchWikipedia(term) {
        ...

We can add one more condition to our observable, because we want the collection to update when the input length drops back to 0:

search-component.html
        ...
            var subscription = observable.map(function (e) {
               return e.target.value;
            }).filter(function (inputText) { return inputText.length > 2 || inputText.length == 0;})

        ...

This should be pretty straightforward. Let’s define a function that does something when we choose a result from the dropdown.

First of all, call _chooseItem() on tap:

search-component.html
        ...
            <paper-item>
                <paper-button on-tap="_chooseItem" value="{{item}}">{{item}}</paper-button>
            </paper-item>
        ...

And now define _chooseItem():

search-component.html
        ...
            _chooseItem: function(event, sender) {
                var clickedButtonValue = event.path[1].value;
                this.set('searchTerm', clickedButtonValue);
                this.$.collapse.toggle();
            },
        ...

You can parameterize this component further; add minLength:

search-component.html
...
    timeout: {
        type: Number,
        value: 500
    },
    minLength: {
        type: Number,
        value: 2
    }
},
_resultsChanged: function(results) {
...

Use it in our observable filter():

search-component.html
...
    var subscription = observable.map(function (e) {
        return e.target.value;
    }).filter(function (inputText) { return inputText.length > self.minLength || inputText.length == 0;})
...

And parameterize this ajax call:

search-component.html
...
    minLength: {
        type: Number,
        value: 2
    },
    getRemoteSuggestions: {
        type: Object
    }
...

It will be parsed as an Object. Use it in our observable transformations:

search-component.html
...
    var subscription = observable.map(function (e) {
        return e.target.value;
    }).filter(function (inputText) { return inputText.length > self.minLength || inputText.length == 0;})
    .debounce(self.timeout)
    .distinctUntilChanged()
    .flatMapLatest(self.getRemoteSuggestions)
...

To pass a function in from outside, you need to wrap your component inside an auto-binding <template is="dom-bind"> tag.

demo/index.html
...
    <body>
        <template id="wrapperTemplate" is="dom-bind">
          <search-component
                  input-placeholder="Wikipedia search"
                  get-remote-suggestions='[[searchWikipedia]]'>
          </search-component>
        </template>
      </body>

      <script>
        document.querySelector('#wrapperTemplate').searchWikipedia = function searchWikipedia(term) {
          return $.ajax({
            url: 'http://en.wikipedia.org/w/api.php',
            dataType: 'jsonp',
            data: {
              action: 'opensearch',
              format: 'json',
              search: term
            }
          }).promise();
        };
      </script>
...

You can delete searchWikipedia() from search-component.html now.
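With searchWikipedia() gone from the component, any function with the same contract can be plugged in: it takes a term and returns a promise resolving to the opensearch-like shape. As a hypothetical example (the names here are illustrative, not part of the tutorial's repo), an offline stub handy for demos or tests could look like this:

```javascript
// Hypothetical stand-in for searchWikipedia(): same contract
// (term in, promise of [term, titles] out), no network needed.
function localSuggestions(term) {
    var all = ['Polymer', 'Polymerase', 'RxJs', 'Reactive programming'];
    var titles = all.filter(function (t) {
        return t.toLowerCase().indexOf(term.toLowerCase()) === 0;
    });
    // Mirror the opensearch shape so `e[1]` in the subscriber keeps working.
    return Promise.resolve([term, titles]);
}

localSuggestions('poly').then(function (e) {
    console.log(e[1]);  // ['Polymer', 'Polymerase']
});
```

Assigning this function to the wrapper template instead of searchWikipedia would make the component search the local list, with no other changes.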

Now let’s add more API documentation to our component. To achieve this, simply add comments above each property:

search-component.html
...
Polymer({
    is: 'search-component',
    properties: {
        /**
         * `inputPlaceholder` indicates the placeholder of search input
         */
        inputPlaceholder: {
            type: String,
            value: "Default placeholder text"
        },
        /**
         * `searchResults` are the results of searching
         */
        searchResults: {
            type: Array,
            // use a function so each element instance gets its own array
            value: function() { return []; },
            observer: "_resultsChanged"
        },
        /**
         * `timeout` is the time in ms after which the search request will be sent to the server
         */
        timeout: {
            type: Number,
            value: 500
        },
        /**
         * `minLength` is the minimal length of the input value needed to make a request to the server
         */
        minLength: {
            type: Number,
            value: 2
        },
        /**
         * `getRemoteSuggestions` is a function returning a promise. Here you should specify the ajax call to the
         * server. The function should take the search term as a parameter.
         */
        getRemoteSuggestions: {
            type: Object
        }
    },
...

Check the results: you can build your component’s documentation without trouble by putting comments in the appropriate places, and the same goes for sample usage of your component.

This component could be extended even further, to work not only with remote collections, but I will finish here because this post has grown a little too much. You can find the repo with this project on my github (branch: final-version):

https://github.com/BBartosz/polymer-rx-tutorial/tree/final-version

Thanks for reading. Feel free to give me a reply ;)