
  • Exploring some Rust basics with actix-web

    Actix web describes itself as

    a small, pragmatic, and extremely fast rust web framework.

    The README has an example to start with so let’s create a new Rust project.

    $ cargo new web_app 
        Created binary (application) `web_app` package 
    $ cd web_app

    First we need these dependencies in our Cargo.toml:

    [dependencies] 
    actix-web = "2" 
    actix-rt = "1"

    (did you know there’s a utility called “cargo-edit” which lets you add dependencies from the command line? It lets you do cargo add serde, for example, which will add the serde crate to your dependencies in Cargo.toml. Check it out here https://github.com/killercup/cargo-edit)

    In main.rs let’s paste the snippet from the example in the README.

    use actix_web::{get, web, App, HttpServer, Responder};
    
    #[get("/{id}/{name}/index.html")]
    async fn index(info: web::Path<(u32, String)>) -> impl Responder {
        let (id, name) = info.into_inner();
        format!("Hello {}! id:{}", name, id)
    }
    #[actix_rt::main]
    async fn main() -> std::io::Result<()> {
        HttpServer::new(|| App::new().service(index))
            .bind("127.0.0.1:8080")?
            .run()
            .await
    }
    

    Cool. We can run it with cargo run and curl our index endpoint:

    $ curl http://localhost:8080/1/foo/index.html 
    Hello foo! id:1%

    Looks good.

    Let’s look at main. We declare the function with the async keyword to create an asynchronous function. The value returned by an async function is a Future which represents an asynchronous value.

    Nothing “happens” when we create a Future unless we run it. To run a Future we use an executor. For our purposes think of an executor as a “runner”. Notice the line above the function declaration #[actix_rt::main]? Rust lets you specify which runtime you’d like to use for executing Futures. In this example, we’re using the actix_rt::main macro to tell Rust we’d like our async function to be run on the actix system. There are lots of different runtimes available and you can even define your own. The actix_rt one is a specific implementation which runs everything on the current thread.
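
    To make that concrete, here’s a tiny example of my own (it’s not from the actix README) using the block_on executor from the futures crate. I’m assuming futures = "0.3" has been added to Cargo.toml for this sketch.

    async fn say_hello() -> String {
        "hello".to_string()
    }

    fn main() {
        // Creating the Future does nothing by itself...
        let fut = say_hello();
        // ...it only runs when an executor drives it. block_on is a minimal
        // executor that runs the Future to completion on the current thread.
        let greeting = futures::executor::block_on(fut);
        println!("{}", greeting);
    }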

    Ok so our main function is asynchronous and it’ll run on the actix_rt runtime. Our main returns a std::io::Result<()>. The Result type in general represents either success (Ok) or failure (Err). There’s actually a std::result::Result type, but we’re using a version specialized for I/O operations. We’re not going to go into the details of that here.
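
    (The short version, if you’re curious: std::io::Result<T> is just a type alias for Result<T, std::io::Error>. Here’s a quick sketch of mine, with a made-up file name, showing what that looks like in a signature.)

    use std::fs::File;
    use std::io;

    // io::Result<T> is shorthand for Result<T, io::Error>, so this function
    // hands back either a File or an I/O error.
    fn open_config() -> io::Result<File> {
        File::open("config.toml")
    }

    fn main() {
        match open_config() {
            Ok(_) => println!("opened it"),
            Err(e) => println!("nope: {}", e),
        }
    }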

    Moving on to the body of our function, we call the HttpServer::new function. The type signature of this new function is pub fn new(factory: F) -> Self. In Rust, there’s a clear distinction between pure data types (structs) and the implementation or behavior of those types. In other words, unlike in, say, Java, we don’t have some class with a bunch of fields and then methods defined on that class. We have a struct and then an impl. Consider this example:

    struct Greeter {
        name: String,
        salutation: String,
    }
    impl Greeter {
        fn greet(&self) {
            println!("{}, {}!", self.salutation, self.name);
        }
    }
    

    We can use that like this:

    fn main() {
        let greeter = Greeter {
            salutation: "hello".to_string(),
            name: "levi".to_string(),
        };
        Greeter::greet(&greeter);
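        // method-call syntax works too: greeter.greet();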
    }
    

    Why do I bring this up? Because the HttpServer::new that we used is a function defined in the HttpServer impl. The HttpServer struct is parameterized on four types like so:

    pub struct HttpServer<F, I, S, B>
    where
        F: Fn() -> I + Send + Clone + 'static,
        I: IntoServiceFactory<S, Request>,
        S: ServiceFactory<Request, Config = AppConfig>,
        S::Error: Into<Error>,
        S::InitError: fmt::Debug,
        S::Response: Into<Response<B>>,
        B: MessageBody,
    {
        pub(super) factory: F,
        config: Arc<Mutex<Config>>,
        backlog: u32,
        sockets: Vec<Socket>,
        builder: ServerBuilder,
        #[allow(clippy::type_complexity)]
        on_connect_fn: Option<Arc<dyn Fn(&dyn Any, &mut Extensions) + Send + Sync>>,
        _phantom: PhantomData<(S, B)>,
    }

    Think of the <F, I, S, B> part like generics in Java. We use those type parameters to make a struct or function generic over some type(s).

    Rust lets us add constraints to the type parameters. We use + if we need multiple bounds. It’s like <T extends B1 & B2 & B3> in Java-land. One way to add these bounds is with the where keyword. So this whole thing:

    where
        F: Fn() -> I + Send + Clone + 'static,
        I: IntoServiceFactory<S, Request>,
        S: ServiceFactory<Request, Config = AppConfig>,
        S::Error: Into<Error>,
        S::InitError: fmt::Debug,
        S::Response: Into<Response<B>>,
        B: MessageBody,

    are the constraints on the type params F, I, S and B. Let’s look at the first one, for F:

    F: Fn() -> I + Send + Clone + 'static

    Basically, this says F must be an instance of Fn (there’s a trait called Fn; for more on Fn, check out this answer on Stack Overflow) which takes no args and returns an I. Let’s ignore the last constraint (the 'static). So roughly speaking, whatever F is, it must be a function which takes no arguments and returns an I, and F itself must also be an instance of the Send and Clone traits.
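
    If that where-clause soup is hard to parse all at once, here’s a toy function of my own (nothing to do with actix) with the same flavor of bounds, plus something that satisfies them:

    use std::fmt::Debug;

    // The factory must be a no-argument function that can be cloned, sent to
    // another thread, and returns something we can debug-print.
    fn call_twice<F, I>(factory: F)
    where
        F: Fn() -> I + Send + Clone + 'static,
        I: Debug,
    {
        let factory_clone = factory.clone();
        println!("{:?}", factory());
        println!("{:?}", factory_clone());
    }

    fn make_nums() -> Vec<u32> {
        vec![1, 2, 3]
    }

    fn main() {
        // A plain function with no arguments satisfies these bounds
        // (and so will the closure we pass to HttpServer::new below).
        call_twice(make_nums);
    }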

    Did we satisfy these constraints by using || App::new().service(index)?

    Well, what the hell does that line mean? This is the syntax for a closure in Rust. Here’s an excerpt from the Rust book which shows the various valid syntaxes for closures.

    fn  add_one_v1   (x: u32) -> u32 { x + 1 }
    let add_one_v2 = |x: u32| -> u32 { x + 1 };
    let add_one_v3 = |x|             { x + 1 };
    let add_one_v4 = |x|               x + 1  ;

    Actually, that first line is a regular function definition, included so you can compare the closure syntaxes to a usual function definition.

    Back to that F we wanted to supply to HttpServer::new. Again, we passed

    || App::new().service(index)

    The || clues us in that this is closure syntax. The empty pipes mean it takes no arguments, which makes sense because the type constraint on F said Fn() -> I, namely that it’s a function which takes no arguments and returns an I, etc.

    What’s the I in our case? It’s whatever the type of App::new().service(index) is. In this case it’s an instance of actix-web’s App struct. This actually satisfies another one of the type constraints, namely that I must implement IntoServiceFactory

    I: IntoServiceFactory<S>,

    which is a “trait for types that can be converted to a ServiceFactory“. If we look around the actix-web source, we’ll see that App does indeed implement IntoServiceFactory

    impl<T, B> IntoServiceFactory<AppInit<T, B>> for App<T, B>
    // etc, etc ....

    Point is, the types can get kind of crazy, but I recommend taking it slow: start by just getting the gist of it and build a more rigorous understanding of the types and the constraints later, as needed.

    So we’ve created an HttpServer using the new function. Now we can call bind, which sets up our server to listen on the provided address and port. actix-web is using something along the lines of the builder pattern here, so the chained method calls keep returning the HttpServer. More accurately, bind returns the server wrapped in an io::Result (either Ok or Err).

    That’s where that ? question mark comes in. We can’t call run on an io::Result. We want a server, but we might not have one because one of our chained calls might’ve returned an Err. So we need to deal with the fact that we’re actually working with a Result type here and not merely a server. For more on this ? operator, read all three sections on Error Handling in the Rust Book https://doc.rust-lang.org/book/ch09-00-error-handling.html

    The question mark basically says: “ok, I acknowledge this could be Ok or an Err. If it was an Ok, unwrap it and gimme the inner value (the server). If it was an Err, just propagate and return the error.” (This will bring to mind monads for Haskellers and Scalaites (?), such as when using Maybe or Option.)

    It’s the same exact thing as explicitly pattern matching on the success or failure. In fact, we can rewrite our server startup with pattern matching:

    #[actix_rt::main]
    async fn main() -> std::io::Result<()> {
        let server = HttpServer::new(|| App::new().service(index));
        let hopefully_bind = server.bind("127.0.0.1:8080");
        match hopefully_bind {
            Ok(bound) => bound.run().await,
            Err(e) => Err(e),
        }
    }
    

    There’s a lot to unpack here; I just wanted to give a sense of how I try to make sense of things in Rust in a way that at least lets me get some stuff done.

  • References and & in Rust

    In Rust, there are two types of references: a shared reference and a mutable reference. A shared reference is denoted by &. A mutable reference is denoted by &mut.

    The docs tell us that a ‘reference lets you refer to a value without taking ownership of it.’ What does that mean? What is ownership?

    Let’s look at a really dumb, simple example.

    fn main() {
        let x = "cool";
    }

    We declared a variable x which refers to the string literal "cool". Plain and simple variable assignment. "cool" is a value and the Rust docs tell us that every value in Rust has a variable called its owner. In our example, the value "cool" is owned by x. Said another way, x is the owner.

    Actually, that is the first of three ownership rules which are:

    1. Each value in Rust has a variable that’s called its owner.
    2. There can only be one owner at a time.
    3. When the owner goes out of scope, the value will be dropped.

    So x is the owner ("cool" is owned by x). What this means is that "cool" lives as long as x does. So when x is dropped "cool" is dropped.

    Is there a way to see this concretely, some way to see this in action? Let’s try.

    fn main() {
        let y = "hi";
        let x = "cool";
        println!("{}", y);
        println!("{}", x);
    }

    We added another variable y and then printed x and y. So far so good, no problems. If we add braces around the declaration of x

        let y = "hi";
        {
            let x = "cool";
        }
        println!("{}", y);
        println!("{}", x);

    our code no longer compiles and we get an error pointing to println!("{}", x);. The error says "cannot find value x in this scope." A scope or "scope block" is the region in which a variable binding is valid. Our program fails because the curly braces we added create a new scope; x lives in that scope and not outside of it, so we can’t use it in that println!. The body of main is itself a block, and everything inside its braces lives in its scope.

    What’s this have to do with ownership and dropping? Well, let’s skip to rule #3. It says that when an owner goes out of scope, its value will be dropped. In our example x goes out of scope at that closing brace, so it is dropped and, since it owns the value "cool", that gets dropped too.

    There’s actually a trait called Drop which has a function drop that gets called when something goes out of scope. Let’s define a simple struct and implement Drop for it so we can see when that happens.

    #[derive(Debug)]
    struct Foo(u32);
    
    impl Drop for Foo {
        fn drop(&mut self) {
            println!("Dropping {:?}", self);
        }
    }
    
    fn main() {
        let f = Foo(1);
        {
            let g = Foo(2);
        }
    
    }

    When we run this, it prints

    Dropping Foo(2)
    Dropping Foo(1)

    which shows that g is dropped before f which makes sense given the rules stated above.

    Now let’s make things more interesting by defining a function that takes a Foo.

    fn gimme_a_foo(f: Foo) {
        println!("{:?} is a nice lookin' Foo.", f);
    }

    and then let’s use it:

    fn main() {
        let f = Foo(1);
        gimme_a_foo(f);
    }
    $ cargo run
    ...
    Foo(1) is a nice lookin' Foo.

    gimme_a_foo is such a fun function, let’s call it twice!

    fn main() {
        let f = Foo(1);
        gimme_a_foo(f);
        gimme_a_foo(f);
    }

    and run it

    $ cargo run
    ...
    error[E0382]: use of moved value: `f`
      --> src/main.rs:19:17
       |
    17 |     let f = Foo(1);
       |         - move occurs because `f` has type `Foo`, which does not implement the `Copy` trait
    18 |     gimme_a_foo(f);
       |                 - value moved here
    19 |     gimme_a_foo(f);
       |                 ^ value used here after move
    
    error: aborting due to previous error

    Wait… what?

    The problem is that rule #2 tells us that resources can only have one owner. But, ownership can be transferred by assignment or by passing an argument by value.

    When we start out, the value Foo(1) is owned by f. Then, when we call gimme_a_foo(f);, ownership of the value is "moved" into gimme_a_foo. Once a resource has been moved, the previous owner can no longer be used. This is a very good thing because by enforcing this, Rust ensures that we never have dangling pointers (pointers to a memory location that’s already been deleted/freed).

    One quick fix is to return ownership of the resource after using it. We can modify our function to do this:

    fn gimme_a_foo(f: Foo) -> Foo {
        println!("{:?} is a nice lookin' Foo.", f);
        f
    }
    
    fn main() {
        let f = Foo(1);
        let ff = gimme_a_foo(f);
        gimme_a_foo(ff);
    }

    That works, but it’s obviously a giant pain-in-the-ass. This is where references come in! We said at the beginning of this post that references let you refer to a value without taking ownership which sounds like exactly what we want! We use & to denote a reference.

    fn gimme_a_foo(f: &Foo) {
        println!("{:?} is a nice lookin' Foo.", f);
    }
    
    fn main() {
        let f = Foo(1);
        gimme_a_foo(&f);
        gimme_a_foo(&f);
    }

    By using &f instead of f we’ve created a reference to the value of f without taking ownership of it. For this to work, the signature of our function had to change to reflect that the type of the f parameter is a reference.

    If we didn’t change gimme_a_foo in this way, it wouldn’t work. The problem would have nothing to do with calling gimme_a_foo twice, it’d be an issue even calling it once because the types just don’t match up:

    fn gimme_a_foo(f: Foo) { // expects a Foo
        println!("{:?} is a nice lookin' Foo.", f);
    }
    
    fn main() {
        let f = Foo(1);
        gimme_a_foo(&f); // but we passed an &Foo
    }

    So we’d get the following compiler error:

    note: expected type `Foo`
                  found type `&Foo`

    So by changing our function to take a reference to a Foo, we never take ownership in the first place. Another way to think about this is in terms of the blocks or scopes which we discussed earlier. Our original function was:

    fn gimme_a_foo(f: Foo) {
        println!("{:?} is a nice lookin' Foo.", f);
    }

    the f parameter has a scope which is the body of the function (the block is the area enclosed by { and }). So gimme_a_foo takes ownership of f, and f goes out of scope at the end of that body, which is why we can’t do the double call to gimme_a_foo. But when we changed the function parameter to &Foo, we’re saying that we are NOT taking ownership.
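
    We can actually watch the by-value version drop its argument, by reusing the Foo type with its noisy Drop impl from earlier (repeated here so the snippet stands on its own):

    #[derive(Debug)]
    struct Foo(u32);

    impl Drop for Foo {
        fn drop(&mut self) {
            println!("Dropping {:?}", self);
        }
    }

    fn gimme_a_foo(f: Foo) {
        println!("{:?} is a nice lookin' Foo.", f);
    } // f goes out of scope here, so the Foo it owns is dropped here

    fn main() {
        let f = Foo(1);
        gimme_a_foo(f);
        println!("back in main");
    }

    Running this prints "Dropping Foo(1)" before "back in main", which shows the Foo is dropped at the end of gimme_a_foo, not at the end of main.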

    When a function parameter takes a reference to something, Rust calls this "borrowing".
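
    We mentioned at the top that there’s also a mutable reference, &mut. It’s the same borrowing idea, except the function is allowed to change the value it borrows. A minimal sketch of my own (redefining Foo without the Drop impl to keep the output quiet):

    #[derive(Debug)]
    struct Foo(u32);

    fn bump_a_foo(f: &mut Foo) {
        // we can mutate through the &mut borrow
        f.0 += 1;
    }

    fn main() {
        // the binding itself has to be mut before we can lend it out as &mut
        let mut f = Foo(1);
        bump_a_foo(&mut f);
        bump_a_foo(&mut f);
        println!("{:?}", f); // prints Foo(3)
    }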

  • Database migrations with Rust and Diesel

    Diesel describes itself as

    the most productive way to interact with databases in Rust because of its safe and composable abstractions over queries.

    http://diesel.rs/

    To try it out, create a new project using Cargo

    cargo new --lib diesel_demo
    cd diesel_demo

    Then edit the Cargo.toml file to add the diesel (with its postgres feature enabled) and dotenv dependencies

    Next you’ll need to install the standalone diesel CLI. In my case, I only want support for postgres so I specify this using the features flag.

    cargo install diesel_cli --no-default-features --features postgres

    Connect to Postgres and create a database called “diesel_demo” using psql.

    psql -p5432 "levinotik"
    levinotik=# create database diesel_demo;                        
    CREATE DATABASE

    Create a .env file in your project root and add a variable for your database url (I don’t have a password in this example)

    DATABASE_URL=postgres://levinotik:@localhost/diesel_demo
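
    The diesel CLI reads .env on its own; the dotenv crate we added to Cargo.toml is what lets our Rust code pick up DATABASE_URL the same way at runtime. Here’s roughly what that looks like (this is the shape diesel’s own getting-started guide uses; we won’t actually need it for the migrations below):

    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    use dotenv::dotenv;
    use std::env;

    fn establish_connection() -> PgConnection {
        // Load the variables from .env into the process environment.
        dotenv().ok();
        let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
        PgConnection::establish(&database_url)
            .unwrap_or_else(|_| panic!("Error connecting to {}", database_url))
    }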

    If all of that is right, we can use the diesel CLI to set everything up using diesel setup

    ~/dev/rust/diesel_demo  master ✗
    » diesel setup
    Creating migrations directory at: /Users/levinotik/dev/rust/diesel_demo/migrations

    That will create a migrations folder in your project and also generate an initial setup migration (a pair of up.sql/down.sql files) with helper functions that diesel uses to do its work. Your project should now have a migrations directory, with that initial migration inside it, alongside src and Cargo.toml.

    We’re ready to create our own migration now. Once again, we’ll use the diesel CLI. For our example, we want to create a movie database. To do that, we run diesel migration generate create_movies, which creates a new timestamped directory under migrations/ with up.sql and down.sql files inside.

    Great, we’ve got migration files now for creating and reverting our migration. These files are empty. Let’s write some SQL for our migration. In up.sql add this:

    CREATE TABLE movies (
      id SERIAL PRIMARY KEY,
      title VARCHAR NOT NULL,
      director VARCHAR NOT NULL,
      release_year INTEGER NOT NULL 
    );

    And in down.sql add the sql for reverting:

    DROP TABLE movies

    Now we apply our migration with diesel migration run.

    Looks like that worked. Let’s connect to our diesel_demo database and check it out.

    Great, the movies table is there, with the columns we defined in up.sql.

    diesel has a redo command, diesel migration redo, for checking that the down.sql works correctly.

    Looks good. Diesel ran the down.sql migration and then the up.sql again. Back in psql, we can see everything is there as it was before.

    That’s it. This is how you manage database migrations in Rust using diesel. Pretty straightforward really and much as you would expect if you’ve used any other migration management tool.

    Future posts will go over how we can write and read to/from the database using the diesel package in Rust.

  • Static Types and their Impact on Testing

    In this post (series?), I’d like to explore how to write a program in a type-safe and testable way. I’m using Scala because it’s the language I’m most proficient in; this would be prettier and less boilerplatey in Haskell.

    The basic point that I hope to get across in this post (and the potential follow-ups) is that by encoding our domain into static types and avoiding side effects we can accomplish a few things:

    1. We can greatly limit the scope of our tests
    2. For the tests that still make sense to write, we can write very strong tests
    3. By using meaningful values and separating side-effecting functions from pure ones, we can more easily reason about our program

    We’re going to write a simple service that takes a form filled out by a user, sends an email based on that form, and then records that event to a database.

    Here’s a possible data model for our email and a form.

    package com.notik
    
    case class EmailAddress(address: String)
    case class FromAddress(address: EmailAddress)
    case class ToAddress(address: EmailAddress)
    case class EmailBody(body: String)
    case class Recipient(name: String)
    case class Email(from: FromAddress, to: ToAddress, body: EmailBody, recipient: Recipient)
    case class WebForm(from: FromAddress, to: ToAddress, body: EmailBody, recipient: Recipient)

    Right away, we’ve gained something. Not a ton, but something.

    If we had encoded the idea of an Email as Email(fromAddress: String, toAddress: String, body: String, recipient: String) then we could easily shoot ourselves in the foot. We might mistakenly write Email("hello", "jason", "jason@gmail.com", "body of email"), for example. We’ve mixed up all of the parameters and we only find out things have gone wrong once we run our program and it blows up with an InvalidEmailAddressException or whatever. Worse, it might not blow up at all and instead our program is just wrong.

    What we should do is encode our domain as strongly as possible and let the type system/compiler do as much work as possible.

    Ok, so with our case classes we have separate types representing the different parts of an email. But, we’re probably creating this Email by populating it from the fields of some web form and there’s still nothing preventing us from mixing up the strings and getting incorrect data, right?

    This is always a possibility and users might input bad data altogether. Ok, so let’s encode this into our types. Firstly, we’re going to create a function which takes in a WebForm and produces an Email. But instead of producing an Email directly, we’re going to produce “possibly” an email. We’re not going to assume we have an email and wait for runtime exceptions nor are we going to do validation checks and simply halt in the case of invalid input. Instead, we’re going to encode the notion that something could always go wrong when submitting the form. See EmailService.mkEmail. It takes a WebForm and returns a ValidationNel[String, Email]. If the form is incorrect (as defined by our domain logic) then we get back a list of all the errors. If nothing is wrong, we get an Email.

    import scalaz._
    import Scalaz._
    import argonaut.Parse

    object EmailService {

      /*
        Validation represents the idea of success or failure. NEL stands for non-empty list. We can actually require
        that the list of errors we get back in the case of failure is non-empty.

        This is the first step towards a more strongly typed way of handling web form submissions. We encode the notion
        of failure in our type. We no longer have simply an Email as a result of a WebForm. Instead, we are forced, at
        the level of the type system, to deal with the possibility of failure.
       */

      val emailRegex = "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$".r

      def mkEmail(form: WebForm): scalaz.ValidationNel[String, Email] =
        (validateFrom(form.from).toValidationNel |@| validateTo(form.to).toValidationNel |@| validateBody(form.body).toValidationNel |@| validateRecipient(form.recipient).toValidationNel)(Email.apply)

      def validateFrom(from: FromAddress): Validation[String, FromAddress] =
        emailRegex.findFirstIn(from.address.address)
          .map(_ => from.success)
          .getOrElse(s"$from is not a valid email address".fail[FromAddress])

      def validateTo(to: ToAddress): Validation[String, ToAddress] =
        emailRegex.findFirstIn(to.address.address)
          .map(_ => to.success)
          .getOrElse(s"$to is not a valid email address".fail[ToAddress])

      def validateBody(emailBody: EmailBody): Validation[String, EmailBody] =
        if (emailBody.body.nonEmpty) emailBody.success
        else "body of the email cannot be empty".fail[EmailBody]

      def validateRecipient(recipient: Recipient): Validation[String, Recipient] =
        if (recipient.name.nonEmpty) recipient.success
        else "recipient name cannot be empty".fail[Recipient]

      def emailFromJsonForm(json: String): Validation[NonEmptyList[String], Email] = for {
        form  <- Parse.decodeValidation[WebForm](json).toValidationNel
        email <- mkEmail(form)
      } yield email
    }

    With this in place, if we tried creating an email out of a form with something like from = "gmail.com", to = "@gmail.com", body = "", recipient = "Levi", we’d get a list of errors like this:

    NonEmptyList(FromAddress(EmailAddress(gmail.com)) is not a valid email address, 
    ToAddress(EmailAddress(@gmail.com)) is not a valid email address, body of the email cannot be empty)

    Our ultimate goal is to send an email and log a record of that. But we’re still dealing with just our data/model at this point.

    Ok, so how do we get this WebForm? For our example, we’ll assume a user is filling out some input fields which will then be posted as JSON. If we were doing it through query params or some other way, the same general principles would apply.

    On to JSON. We’ll use the Argonaut library to deserialize some JSON data into our WebForm type.

    /*
    We need a codec that defines, in a *type safe* way, how to decode our JSON into our WebForm class. We'll put our codecs in the respective companion objects so the implicits can be found without additional imports.
    */
    
    object WebForm {
      implicit def WebFormCodecJson: CodecJson[WebForm] =
        casecodec4(WebForm.apply, WebForm.unapply)("from", "to", "body", "recipient")
    }
    
    object FromAddress {
      implicit def FromAddressCodecJson: CodecJson[FromAddress] =
        casecodec1(FromAddress.apply, FromAddress.unapply)("address")
    }
    
    object ToAddress {
      implicit def ToAddressCodecJson: CodecJson[ToAddress] =
        casecodec1(ToAddress.apply, ToAddress.unapply)("address")
    }
    
    object EmailBody {
      implicit def EmailBodyCodecJson: CodecJson[EmailBody] =
        casecodec1(EmailBody.apply, EmailBody.unapply)("body")
    }
    
    object Recipient {
      implicit def RecipientCodecJson: CodecJson[Recipient] =
        casecodec1(Recipient.apply, Recipient.unapply)("recipient")
    }

    Parsing/decoding may fail for a number of reasons. As with the validation examples above, the possibility of failure is encoded into our types. Specifically, when we decode, the value produced is an Either[String, WebForm] where the left side contains an error message in the case of failure and the right side contains the WebForm in the case of success. Again, the basic idea is simple but powerful: instead of pretending that we have the values we ultimately want even though we know that things may very well blow up, we simply encode the possibility of failure into our type.

    Using functional constructs we can stay on the “happy path” and deal with errors at the very end, instead of sprinkling error handling all over our code. We do this by decoding the form from JSON and then calling mkEmail. The function emailFromJsonForm returns a Validation[NonEmptyList[String], Email] with the list of any errors on the left and the Email on the right side.

    At this point, we have no side-effecting functions. This may seem like a lot of boilerplate, but we’ve gained a lot from it. And the benefits only increase as our program grows larger.

    To see the benefits of this approach, consider an alternative program where functions don’t return meaningful values and are only called for their side effects.

    def sendEmail(webForm: String): Unit = {
        val form = deserialize[WebForm](parse(webForm))
        val email = Email(form.from, form.to, form.body, form.recipient)
        EmailService.sendEmail(email) //sends the email
        logEvent(email) //writes record to the DB
    }

    This monolithic function mixes the data transformations with the side effect of sending an email. The Unit return type is completely opaque. It’s not a meaningful value that we can reason about. By returning Unit, we’re saying “Nothing to look at here... move along”. By definition, the only way to test our program is to test the entire thing at once. Since Unit has no meaning, we can only test that our program does what we want by inspecting whether the side effects seem to meet our requirements.

    This kind of program becomes impossible to maintain, especially as it grows larger and we add more functionality. Reasoning about this program requires an ever-increasing amount of mental energy. With no meaningful types anywhere, we can’t add functionality without keeping everything in mind at once. This function also ignores the real possibility of errors occurring when deserializing to our WebForm type and leaves out any validations on our Email.

    Contrast this to the program we’ve written where we have clearly defined types and transformations that return clear values which indicate the possibility of failure. We can now write tests using something like ScalaCheck to confirm that malformed email addresses, for example, are rejected and return failures. Furthermore, the scope of our tests is greatly diminished. Instead of writing tests that exhaustively check that the side effects of our program occur as we expect (since we can’t inspect anything directly), we write small functions that return meaningful values for which we can write strong tests that confirm the correctness of our code directly. With the monolithic and side-effecting approach, if the tests don’t pass, we’re not necessarily sure why. We’re left to fish through our code, trying to find the bug that caused the problem.

    Even with all the problems in my monolithic sendEmail example, I’m still using more clearly defined types than a lot of programs that I often see. For example, a lot of people seem to use hashes as their data type for everything. Someone showed me an example where they deleted two entries in some hash when they had intended to remove only one. Their tests blew up and eventually they tracked down the unintended deletion that caused the issue. The problem here is that a hash is the wrong type for almost anything in your program’s domain. All we know about a hash is that it contains keys and values. That’s it. There’s no information there and no type (beyond simply the most general notion of a structure that contains keys/values). By designing a very specific model, encoding it into strong types, and using a compiler you can catch errors before things blow up at runtime.

    I’ll either edit this post or add a part two that completes the picture, illustrating how to perform the effect of sending an email, how to do type-safe database queries and insertions, etc.

  • An illustrative solution in Scala

    I’ve been messing around with Scala a lot recently and I think the language hits such a sweet spot. Scala is a multi-paradigm language which means if you want to use an exclusively procedural/imperative/functional style or a mix of the three then you can. As Martin Odersky (the creator of the language) mentions in his fantastic book Programming in Scala, Scala doesn’t force you to do anything, but it definitely encourages you to use a functional style where it makes sense. So, in this respect Scala is very welcoming to beginners; you can use it initially just to write more concise Java, but you are also led to discover the beauty of functional programming.

    Functional programming takes some getting used to when you’re coming from an imperative language like Java. You have to learn to think functionally. The essence of functional programming is the application of functions to some input which produces a well-defined output. Functional programming, therefore, in contrast to imperative programming, shuns changing state and mutating data. As such, whereas in imperative languages it is common to use iteration/loops to continuously update the value of some variable to solve a given problem, in functional programming there is only f(x) – the function applied to x always yields a particular, well-defined value. As a consequence, in functional programming, many problems can be solved using recursive approaches. Scala supports tail recursion which makes recursive solutions practicable (in Java, the lack of tail call optimization frequently results in stack overflows).

    I just wanted to share a small bit of code that I thought showcases a lot of the neat things about Scala (and other functional languages as well).

    The problem is a simple one: for any number, calculate the sum of its digits. For example, the sum of the digits of 123 is 6, and the sum of the digits of 375 is 15. It’s easy to see that this problem lends itself to a recursive solution, since the sum of the digits of 123 can be defined as 3 + the sum of the digits of 12. Here’s a solution in Scala:

    def sumOfDigits(n: Int) = {
      @annotation.tailrec
      def sumOfDigitsAcc(n: Int, acc: Int): Int = {
        n match {
          case 0 => n + acc
          case _ => sumOfDigitsAcc(n / 10, acc + (n % 10))
        }
      }
      sumOfDigitsAcc(n, 0)
    }
    

    The basis for the solution is the fact that when dealing with ints, n/10 will always drop the last digit of a number and n%10 will always yield the rightmost digit. So, for example, 45/10 will give you 4 and 45%10 yields 5. In our solution, once we arrive at a single digit number, we can simply add that number to our accumulated value. The accumulated value is also needed in order to allow the Scala compiler to optimize the tail call here. Why? Because a solution is only tail recursive if the final statement in the function is a call to the function itself. Without the ability to pass the stored up sum into our nested function, we wouldn’t be able to do this and we wouldn’t have tail call optimization. The beauty of being able to have nested functions is that since we know our accumulator must start at 0, we can avoid leaking the fact that we use an accumulator into our public API. Instead, we simply require an Int to be passed and we then define the solution as our nested function. Another awesome thing is that Scala has an annotation @tailrec which will raise a compiler error if the function cannot be optimized into a loop.

    There are a lot of points here, but I thought this small example was actually pretty informative. It demonstrates the use of recursion, the ability to have nested functions, the bonus @tailrec annotation and how nicely this all comes together in the language.

    I’m pretty new to Scala and functional programming so if I’m wrong about something, I’d love feedback.

    EDIT –

    As Nicolas B. kindly pointed out in the comments, I had a mistake in the above code. For case 0, I was returning n + acc when there’s no need to add n, since n is 0 by that point. Nicolas also pointed out that I can clean things up a bit more by using default arguments. Scala lets you specify default values for function parameters. The argument for such a parameter can optionally be omitted from a function call, in which case the corresponding argument will be filled in with the default. For us, this means we don’t need to supply the initial value of our accumulator when we call sumOfDigitsAcc. Combining Nicolas’s two points results in the following:

    def sumOfDigits(n: Int) = {
      @annotation.tailrec
      def sumOfDigitsAcc(n: Int, acc: Int = 0): Int = {
        n match {
          case 0 => acc
          case _ => sumOfDigitsAcc(n / 10, acc + (n % 10))
        }
      }
      sumOfDigitsAcc(n)
    }
    
  • Using textured/repeating patterns as backgrounds

    In order to use textured pattern images as backgrounds for Layouts and Views in Android, it’s not enough to simply crop out a part of the image and run it through the Draw 9-Patch tool. Similarly, if you simply set the background resource/drawable to your image, you’ll find that it won’t look right. You’ll get the image repeating many times over in a way that doesn’t fill the background with the pattern you’re expecting.

    Here’s how you should do it: define a Bitmap resource in your res/drawable and set the android:tileMode to “repeat” (tiling works similarly in HTML/CSS), which will repeat the bitmap in both directions.

    res/drawable/mybackground.xml

    <?xml version="1.0" encoding="utf-8"?>
    <bitmap xmlns:android="http://schemas.android.com/apk/res/android"
            android:src="@drawable/background"
            android:tileMode="repeat" />

    then you’re free to use that as your backgroundResource in any View or Layout.

    Head on over to subtlepatterns.com for some great, free textured patterns.

  • Android Loopers, Handlers, RuntimeExceptions explained…

    What’s a Looper, what’s a Handler and what’s up with “Can’t create handler inside thread that has not called Looper.prepare()”

    NOTE: as always, consider everything I say as prefaced with “my understanding is that…”

    A Looper is a simple message loop for a thread. Once loop() is called, an unending loop (literally while (true)) waits for messages to appear in a MessageQueue and processes each one as it shows up.

    Some basic facts: a Looper is associated with a particular thread. When Looper.prepare() is called, it simply checks to see if the calling Thread already has a Looper associated with it. If it does, it throws a RuntimeException because you can only have one Looper per thread.

    Now, many have experienced issues when using the parameterless constructor to create a Handler. Let’s back up. What’s a Handler? Handler is an object that simply lets you send/process messages and runnables associated with a thread’s MessageQueue. Internally, the MessageQueue that a Handler uses in order to do this is the MessageQueue that the Thread’s Looper “waits” for messages in. In other words, a Handler is the interface to the MessageQueue that the Looper is constantly “scanning”. Now you can understand the RuntimeException of “Can’t create handler inside thread that has not called Looper.prepare()”. This happens because the Handler can’t find a Looper associated with the Thread in which you’re trying to create the Handler (and therefore has no queue for it to use). It’s like you’re telling the system that you want to start pumping messages into a non-existent Queue. Calling Looper.prepare is what initially associates a new Looper (and the MessageQueue) with the “current thread”.

    So, you can of course use the Handler constructor that takes a Looper (new Handler(Looper looper)) if you already have a reference to one. And, you can always get a Looper by calling the static method Looper.getMainLooper(), which “Returns the application’s main looper, which lives in the main thread of the application.”

    How does this work? Seems like magic, but the reason it works is that there’s a static method inside of Looper called prepareMainLooper which the system calls inside of main(). Yes, the main which developers never encounter. main() is in the class ActivityThread and, sure enough, it calls Looper.prepareMainLooper. This is how/why you can do something like Handler handler = new Handler(Looper.getMainLooper()). It’s because the system already created a “main” looper for you.


    Feedback I’m interested in:

    1. “You were wrong about X”
    2. “You could be clearer about Y”
    3. “Here are some other helpful resources for learning about Loopers/Threads/Handler internals”

  • Demystifying Context in Android

    The topic of Context in Android seems to be confusing too many people. People just know that Context is needed quite often to do basic things in Android. People sometimes panic because they try to perform some operation that requires the Context and they don’t know how to “get” the right Context.

    I’m going to try to demystify the idea of Context in Android. A full treatment of the issue is beyond the scope of this post, but I’ll try to give a general overview so that you have a sense of what Context is and how to use it.

    To understand what Context is, let’s take a look at the source code:

    http://codesearch.google.com/codesearch#search&q=package:android.git.kernel.org+file:android/content/Context.java

    What exactly is Context? Well, the documentation itself provides a rather straightforward explanation: The Context class is an “Interface to global information about an application environment.”
    The Context class itself is declared as an abstract class, whose implementation is provided by the Android OS. The documentation further provides that Context “…allows access to application-specific resources and classes, as well as up-calls for application-level operations such as launching activities, broadcasting and receiving intents, etc.” You can understand very well, now, why the name is Context. It’s because it’s just that. The Context provides the link or hook, if you will, for an Activity, Service, or any other component, thereby linking it to the system and enabling access to the global application environment. In other words: the Context provides the answer to the component’s question of “where the hell am I in relation to the app generally and how do I access/communicate with the rest of the app?”

    If this all seems a bit confusing, a quick look at the methods exposed by the Context class provides some further clues about its true nature. Here’s a random sampling of those methods:

    1. getAssets()
    2. getResources()
    3. getPackageManager()
    4. getString()
    5. getSharedPrefsFile()

    What do all these methods have in common? They all enable whoever has access to the Context to access application-wide resources. Context, in other words, hooks the component that has a reference to it into the rest of the application environment. The assets (think ‘/assets’ folder in your project), for example, are available across the application, provided that an Activity, Service or whatever knows how to access those resources. Same goes for “getResources()”, which allows you to do things like “getResources().getColor()”, which will hook you into the colors.xml resource (never mind that aapt enables access to resources via Java code; that’s a separate issue).

    The upshot is that Context is what enables access to system resources and it’s what hooks components into the “greater app.”

    Let’s look at the subclasses of Context, the classes that provide the implementation of the abstract Context class. The most obvious class is the Activity class. Activity inherits from ContextThemeWrapper, which inherits from ContextWrapper, which inherits from Context itself. Those classes are useful to look at to understand things at a deeper level, but for now it’s sufficient to know that ContextThemeWrapper and ContextWrapper are pretty much what they sound like. They implement the abstract elements of the Context class itself by “wrapping” a context (the actual context) and delegating those functions to that context. An example is helpful – in the ContextWrapper class, the abstract method “getAssets” from the Context class is implemented as follows:

    @Override
    public AssetManager getAssets() {
        return mBase.getAssets();
    }

    mBase is simply a field set by the constructor to a specific context. So a context is wrapped and the ContextWrapper delegates its implementation of the getAssets method to that context. Let’s get back to examining the Activity class which ultimately inherits from Context to see how this all works.

    You probably know what an Activity is, but to review – it’s basically ‘a single thing the user can do. It takes care of providing a window in which to place the UI that the user interacts with.’ Developers familiar with other APIs and even non-developers might think of it vernacularly as a “screen.” That’s technically inaccurate, but it doesn’t matter for our purposes.

    So how do Activity and Context interact and what exactly is going in their inheritance relationship?

    Again, it’s helpful to look at specific examples. We all know how to launch Activities. Provided you have “the context” from which you are starting the Activity, you simply call startActivity(intent), where the Intent describes the context from which you are starting the Activity and the Activity you’d like to start. This is the familiar startActivity(new Intent(this, SomeOtherActivity.class)). And what is “this”? “this” is your Activity because the Activity class inherits from Context. The full scoop is like this:

    When you call startActivity, ultimately the Activity class executes something like this:

    Instrumentation.ActivityResult ar =
                    mInstrumentation.execStartActivity(
                        this, mMainThread.getApplicationThread(), mToken, this,
                        intent, requestCode);

    Ok, so it utilizes the execStartActivity method from the Instrumentation class (the result comes back as an inner class of Instrumentation called ActivityResult). At this point we are beginning to get a peek at the system internals. This is where the OS actually handles everything. So how does Instrumentation start the Activity exactly? Well, the param “this” in the execStartActivity method above is your Activity, i.e. the Context, and execStartActivity makes use of this context. A 30,000-foot overview is this: the Instrumentation class keeps track of a list of Activities that it’s monitoring in order to do its work. This list is used to coordinate all of the activities and make sure everything runs smoothly in managing the flow of activities. There are some operations which I haven’t fully looked into which coordinate thread and process issues. Ultimately, the ActivityResult uses a native operation – ActivityManagerNative.getDefault().startActivity(), which uses the Context that you passed in when you called startActivity. The context you passed in is used to assist in “intent resolution” if needed. Intent resolution is the process by which the system can determine the target of the intent if it is not supplied. (Check out the guide here for more details.) And in order for Android to do this, it needs access to information that is supplied by Context. Specifically, the system needs access to a ContentResolver so it can “determine the MIME type of the intent’s data.”

    This whole bit about how startActivity makes use of context was a bit complicated and I don’t fully understand the internals myself. My main point was just to illustrate how application-wide resources need to be accessed in order to perform many of the operations that are essential to an app. Context is what provides access to these resources.

    A simpler example might be Views. We all know that when you create a custom View by extending RelativeLayout or some other View class, you must provide a constructor that takes a Context as an argument. When you instantiate your custom View you pass in the context. Why? Because the View needs to have access to themes, resources, and other View configuration details. View configuration is actually a great example. Each Context has various parameters (fields in Context’s implementations) that are set by the OS itself for things like the dimension or density of the display. It’s easy to see why this information is important for setting up Views, etc.

    One final word: for some reason people new to Android (and even people not so new) seem to completely forget about object-oriented programming when it comes to Android. For some reason, people try to bend their Android development to pre-conceived paradigms or learned behaviors. Android has its own paradigm and a certain pattern that is actually quite consistent if you let go of your pre-conceived notions and simply read the documentation and dev guide. My real point, however, is that while “getting the right context” can sometimes be tricky, people unjustifiably panic because they run into a situation where they need the context and think they don’t have it. Once again, Java is an object-oriented language with an inheritance design. You only “have” the context inside of your Activity because your activity itself inherits from Context. There’s no magic to it (except for all the stuff the OS does by itself to set various parameters and to correctly “configure” your context). So, putting memory/performance issues aside (e.g. holding references to context when you don’t need to or doing it in a way that has negative consequences on memory, etc.), Context is an object like any other and it can be passed around just like any POJO. Sometimes you might need to do clever things to retrieve that context, but any regular Java class that extends nothing other than Object itself can be written in a way that has access to context; simply expose a public method that takes a context and then use it in that class as needed.

    This was not intended as an exhaustive treatment on Context or Android internals, but I hope it’s helpful in demystifying Context a little bit.


    Feedback I’m interested in:

    1. “You were wrong about X”
    2. “You could be clearer about Y”
    3. “Here are some other helpful resources for learning about Context/Android internals”

  • Android custom fonts and memory issues: a quick fix

    Android allows you to import custom fonts into your project (just copy a .ttf file into your assets folder and you’re good to go). Typically, you’ll grab the custom font, like this

    Typeface myTypeface = Typeface.createFromAsset(getResources().getAssets(), 
        "fonts/DroidSerif-Bold.ttf");

    If you’re using custom fonts a lot (constantly grabbing the font inside of your Views or Activities), it can create a major strain on memory. In my app, I noticed that as I switched between activities, Logcat was spitting out something like the following: DEBUG/skia(1510): purging 197K from font cache [20 entries]. Ok, so apparently there’s some caching mechanism and it’s getting purged. Sounds good. The problem was that this was happening way too often. Eventually, memory was so strained that Android started killing processes on the device until my app was killed as well.

    Here’s how to fix this: if you need to grab a custom font often, use a singleton which holds on to the Typeface and returns it when you request it. For good measure, you can even hold onto the Typeface as a static field inside of the classes in which you use it.

    All you need is something like this:

    public class TypefaceSingleton {

        private static TypefaceSingleton instance = new TypefaceSingleton();

        // Hold on to the Typeface so we only hit the AssetManager once.
        private Typeface droidSerif;

        private TypefaceSingleton() {}

        public static TypefaceSingleton getInstance() {
            return instance;
        }

        public Typeface getDroidSerif() {
            if (droidSerif == null) {
                droidSerif = Typeface.createFromAsset(
                        MyApp.getContext().getResources().getAssets(), "fonts/DroidSerif-Bold.ttf");
            }
            return droidSerif;
        }
    }

    Notice, I’m using “eager”, as opposed to “lazy” instantiation (where getInstance() checks if the instance is null), but either way should work. After I switched to using the Singleton implementation, the memory issues disappeared.

    Hope this helps.