
Solidity contract essentials

Understanding that everything on Ethereum is public and permanent, caution needs to be exercised when deploying code in the form of smart contracts. There is of course an exception to this permanence, which comes at the cost of a community-initiated hard fork. In the history of Ethereum, this has happened once, with ~$70 million at stake due to a buggy contract. Considering what is at stake, here are a few considerations before deploying your smart contract.

Routing contracts

Excluding some very exceptional cases, your contracts will most likely need an upgrade. While you cannot update the code of an already deployed contract, you can very well deploy a newer version of it. If it is not just your Dapp that interacts with the contracts, you need a way to ensure that your contract stays accessible at the same address to avoid confusion. This is where routing contracts help. You can design a contract like the one below to simply act as a pointer to your final contract, using delegatecall to delegate the operation to the target contract while allowing state modifications. When upgrading the contract code, you then simply update the routing to point to the new address.

contract Router {
    address public owner;
    address public greeter;

    // Set the deployer as owner
    function Router () {
        owner = msg.sender;
    }

    modifier isOwner () {
        if (msg.sender != owner) throw;
        _;
    }

    function SetOwner (address _owner) isOwner() {
        owner = _owner;
    }

    function UpdateGreeter (address _greeter) isOwner() {
        greeter = _greeter;
    }

    // Fallback: forward all calls to the current greeter
    function () {
        if (!greeter.delegatecall(msg.data)) throw;
    }
}

Overrides

It is a good idea to maintain override flags that are checked before executing high-stake transactions. In case a vulnerability is discovered post-deployment, these override flags can be set to block any further transactions from being executed.

contract Vote {
    address public owner;
    bool private override;

    ...

    modifier isOwner () {
        if (msg.sender != owner) throw;
        _;
    }

    modifier isOverride () {
        if (override) throw;
        _;
    }

    // Override can be set only by owner
    function setOverride (bool flag) isOwner() returns (bool) {
        override = flag;
        return true;
    }

    // Block voting if override is set
    function castVote () isOverride() {
        ...
    }
}

Block lock

A single contract is bound to receive concurrent transactions, and as a result, frequent state changes. If states like ownership or override flags change frequently, users can end up issuing unintended transactions against stale state. In such cases a block lock can be implemented to freeze changes to these states for a preset number of blocks.

contract Donation {
    address public owner;
    uint public blocklock;

    // Minimum number of blocks to lock state changes for
    uint constant BLOCKLOCK_HEIGHT = 10;

    // Check if modification is older than set block lock height
    modifier checkBlocklock() {
        if (blocklock + BLOCKLOCK_HEIGHT > block.number) {
            throw;
        }
        _;
    }

    // Set blocknumber when modification was carried out
    modifier setBlocklock () {
        blocklock = block.number;
        _;
    }

    // Change owner, and start a block lock for set block height
    function changeOwner (address _owner) checkBlocklock() setBlocklock() returns (bool) {
        owner = _owner;
        return true;
    }
}

Users can then ensure that the state they are referring to is at least X blocks old, and be sure that no concurrent transaction will sneak in and lead to unintended results.

Suicide

For contracts that are intended to be temporary, Ethereum provides an OPCODE that acts as a kill switch (suicide) to destroy the contract and free up space on the blockchain. All Ether owned by the contract is transferred to the address passed to this OPCODE. A Solidity example is below.

function killContract () isOwner() {
    suicide(owner);
}

Note that suicide carries a negative gas cost, as an incentive for freeing up space on the blockchain, so any cleanup operations carried out in a transaction that calls suicide are discounted. Suicide is also helpful when a high-stake contract runs into an unintended state and you wish to prevent further damage that any incorporated overrides can't.

These are some less discussed operational considerations at the top of my list; you can find an exhaustive list of security considerations to follow here: https://github.com/ConsenSys/smart-contract-best-practices. Think I'm missing something on my list? Feel free to add your comments below 🙂

 

Cryptography, Entropy and the art of not winning a lottery ticket

With the new kids in town called “Blockchains”, I have ended up spending weeks if not months really getting a grip on how things function under the hood; and of the many, many, many things I learnt, I found one aspect of asymmetric cryptography the most interesting.

This wasn’t my first introduction to asymmetric-key encryption, but it was the first time I really worried about how reliable it is. Algorithms like RSA and the Elliptic Curve Digital Signature Algorithm (ECDSA), the latter used by Bitcoin and Ethereum, generate a private key. This key acts as the root from which a public key is derived. In the case of Ethereum, a hash of this public key is used to derive the (public) address of the account. I’m leaving the explanation of how and why it works to the experts, and jumping to my point.

If you noticed, the private key (password) and its address on the blockchain are generated by a computer out of thin air, just like a GUID is generated in most applications. This is all based on the principle that the odds of generating the same GUID are really low, putting all faith in probability. To be very honest, not having a 100% guarantee on the security of these possibly high-stake accounts scared me a bit. That brought me to the question:

“What are the odds that someone ends up generating the same private key as me, and gains complete access to my Bitcoin address?”

A quick Google search led me to interesting answers all over the web. And of course, the probability is really low for a 160-bit address.

2^160 ≈ 1.46 × 10^48, i.e. roughly 1,460,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
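To put that number to work, here is a back-of-the-envelope birthday bound, sketched in Python (my own sketch; the 10 billion keys figure is an arbitrary assumption):

from decimal import Decimal

keyspace = Decimal(2) ** 160        # possible 160-bit addresses
keys_generated = Decimal(10) ** 10  # assume 10 billion keys exist worldwide

# Birthday bound: P(at least one collision) <= k^2 / (2 * N)
p_collision = keys_generated ** 2 / (2 * keyspace)
print(p_collision)  # ~3.4E-29 -- effectively zero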

Though this is a large number, without a solid background in mathematics it is still difficult to remain convinced that no one on the face of the earth will ever come across one of my keys simply by a stroke of luck. What did bring this number into solid perspective is the definition of entropy (physics, not information theory).

Apparently, matter all around us acts the way it does because entropy dictates this behavior, also via probability. Entropy, in its simplest form, is a measure of disorder. I’ll let the TED-Ed video by Jeff Philips explain it in detail.


Now think of the probability of stumbling into the same key twice: it is close to the probability of a steady glass of water heating up while sitting next to ice. Or, as one Bitcoin Stack Exchange user explains, “the odds of your computer burning down and turning into a million-dollar lottery ticket are better than you stumbling into a duplicate address”. Quite literally.

If that doesn’t assure you how close to impossible it is to stumble into a key for a valid account, nothing will. May the mathematical odds be in your favor.

Splitting long hangfire jobs into smaller ones

I recently came across a scenario where I needed to implement a long-running job that could take more than 8 hours to finish the desired task. After a ton of unpleasant experiences with Web Jobs, I made Hangfire my go-to library for long-running jobs. But this isn’t about Hangfire, or any specific library. This approach works just as well for any alternative that does not readily provide batch continuations (Hangfire Pro does, but it comes at a price $$$).

Here’s the scenario. I am consuming a web service that provides a list of notifications that my app has missed. Each notification object contains a URL of the content that needs to be retrieved and analyzed.

Solution I: One job to rule them all

I think the diagram below is self-explanatory.

Problem

Executing a long-running job in a single thread like this can be very time consuming. A long-running job is bound to run into problems, and even with re-entrant code a failure could waste hours of CPU time.

Solution II: Divide and conquer

We can improve our original solution by passing each notification object to a new job (Job B) that fetches the content object and analyzes it, thus splitting Job A into multiple jobs. Here we have the benefit of utilizing multiple threads, speeding up the execution of Job A.

The Problem

Hangfire jobs have to be re-entrant, or idempotent depending on your level of expertise in programming theory. If Job A fails to retrieve the notifications due to a network or service-unavailable error, Hangfire’s transient-fault-tolerance mechanism will kick in and trigger the job again after a pre-configured duration. This behavior puts us at risk of creating duplicate jobs. A simple solution would be to ensure Job B is re-entrant such that duplicates do not interfere with business logic, but this comes at the cost of CPU, accompanied in most cases by bandwidth and storage. Imagine this happening for a million incoming notification objects. Back to the whiteboard.

Solution III: New world order

We were pretty close to solving the problem in the previous solution. We will simply modify it a bit.

  1. In Job A, we create a Unique ID (GUID) on every invocation, and use this as a Batch ID to refer to.
  2. Instead of triggering Job B directly, we store the Content URL (Job description) in a database along with the Batch ID.
  3. We create a new Job definition (Job X) that accepts a Batch ID and fetches all Job descriptions with that Batch ID.
  4. Job X iterates through the Job descriptions and enqueues an instance of Job B for each Content URL.
  5. After each successful enqueue, we mark the Job description as done.

If Job A fails while iterating over notifications, duplicate Job B instances will not be created, as we have only stored their Job descriptions; on the next successful respawn of Job A, a new Batch ID is created. If Job X fails while iterating through Job descriptions, the descriptions already processed have been marked as done, so on the next respawn only the Job descriptions that were not yet added to the queue are picked up. Order restored.
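Here is a minimal sketch of the pattern in Python (the original setup is Hangfire and C#; the in-memory store and queue below are hypothetical stand-ins for the database and job queue):

import uuid

# Hypothetical stand-ins for the database and the job queue.
job_descriptions = []   # rows: {"batch_id", "content_url", "done"}
queue = []              # payloads enqueued for Job B

def job_a(notifications):
    """Steps 1-3: create a Batch ID, persist descriptions, hand off to Job X."""
    batch_id = str(uuid.uuid4())
    for url in notifications:
        job_descriptions.append(
            {"batch_id": batch_id, "content_url": url, "done": False})
    job_x(batch_id)  # with Hangfire this would be an enqueue, not a direct call

def job_x(batch_id):
    """Steps 4-5: enqueue Job B per pending description, marking each as done."""
    for desc in job_descriptions:
        if desc["batch_id"] == batch_id and not desc["done"]:
            queue.append(desc["content_url"])  # enqueue an instance of Job B
            desc["done"] = True                # skipped on any respawn of Job X

job_a(["https://example.com/n/1", "https://example.com/n/2"])
print(queue)  # each notification enqueued exactly once, even across retries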

There is certainly room for improvement, such as making Job A more efficient at handling its own state when dealing with a powerful API such as OData. Still, this approach is fruitful for all scenarios, irrespective of the flexibility provided by the service Job A depends on. Got suggestions or improvements? Feel free to buzz me or leave a comment below.

Implementing Multi-tenant SSO in ASP.net MVC5 using Owin

Decorating your SaaS offering with multi-tenant Single-Sign-On (SSO) can be a huge benefit, allowing your enterprise users to switch between their Office 365 home and your web app seamlessly.

While you can find a few resources that help you build just that, most of them are in the form of a project template or a complete solution on Github, like this one (TodoListWebApp). For my projects I prefer to keep things clean and write as much code as possible myself, avoiding templates that carry more capabilities than I need, or that carry out operations I don’t completely understand.

You can find my complete tutorial here https://blog.puneusergroup.org/building-a-multi-tenant-active-directory-web-app/

DIY – BLE Controlled Light using Intel Edison

A while ago I built a DIY for the Raspberry Pi that could control lights over Wifi. While Intel’s Edison can pull off the same feat, it does one more thing: it talks BLE. This is a hands-on tutorial on turning your Intel Edison board into a BLE peripheral that you can control from your smartphone. As always, Node.js will be my weapon of choice. You can find the github link and a demo video at the end of this blog.

The Setup

Before getting started, make sure you have your Intel Edison running the Yocto build. If you haven’t set up your Edison, follow this guide. Once the Yocto build is installed, SSH into your board. Node.js comes preinstalled.

For the development setup, I prefer to use Putty for SSH/Serial and WinSCP to move code from my machine to Edison.

Enable Bluetooth

This is an on-boot ritual that you may have to repeat to enable BLE on the Edison; you can automate it with a shell script. Use the following 4 commands to enable BLE.

bluetooth_rfkill_event &
rfkill unblock bluetooth
hciconfig dev
hciconfig hci0 up

Hardware setup

We will be using the Arduino breakout board with the Intel Edison development board to interface with an LED (my setup has a relay). This will be our BLE peripheral.

BLE light hardware setup

Step 1: Node.js package config

Create a directory for your Node.js app and configure the package.json to install the necessary modules. Your package.json should have the following dependencies – async, noble, bleno, util and mraa.

{  "name": "ble-light",
  "description": "",
  "version": "0.0.1",
  "main": "app.js",
  "engines": {
    "node": ">=0.10.0"
  },
  "dependencies": {
      "async": "latest",
      "noble": "latest",
      "bleno": "latest",
      "util": "latest",
      "mraa": "latest"
  }
}

Step 2: BLE app and your services

For a BLE peripheral, you need to register services that you will be exposing for other hosts. Using the bleno module, you can register your service and start advertising capabilities.

var bleno = require('bleno');
var BlenoPrimaryService = bleno.PrimaryService;
var FirstCharacteristic = require('./characteristic');
bleno.on('stateChange', function(state) {
  console.log('BLE State: ' + state);
  if (state === 'poweredOn') {
    bleno.startAdvertising('BLE Light', ['fc00']);
  }
  else {
    if (state === 'unsupported') {
      console.error("BLE error. Check board configuration.");
    }
    bleno.stopAdvertising();
  }
});

bleno.on('advertisingStart', function(error) {
  console.log('Advertising: ' + (error ? 'error ' + error : 'success'));
  if (!error) {
    bleno.setServices([
      new BlenoPrimaryService({
        uuid: 'fc00', // Custom BLE Service
        characteristics: [] // TODO: Add characteristic
      })
    ]);
  }
});

console.log("BLE app initiated...");

This is the basic skeleton that will form your app.js. We are yet to add Characteristics to our service, which we will do in the next section.

Step 3: Create BLE Characteristic

Every BLE peripheral exposes services that can be consumed by host devices. You can find a comprehensive list of standard services here. Each service has a range of characteristics that are used to interact with the service. We will be creating one such characteristic to operate our BLE light through a custom service.

Use the following code for your characteristic.js. The Characteristic below is a plain vanilla definition carrying out read, write and notify operations on a variable in memory. We will be modifying this Characteristic in the next steps.

var util = require('util');
var bleno = require('bleno');

var BlenoCharacteristic = bleno.Characteristic;

// Initialize BLE Characteristic
var FirstCharacteristic = function() {
  FirstCharacteristic.super_.call(this, {
    uuid: 'fc0f',
    properties: ['read', 'write', 'notify'],
    value: null
  });
  this._value = new Buffer("OFF", "utf-8");
  console.log("Characterisitic's value: "+this._value);
  this._updateValueCallback = null;
};

// Inherit the BlenoCharacteristic
util.inherits(FirstCharacteristic, BlenoCharacteristic);

// BLE Read request
FirstCharacteristic.prototype.onReadRequest = function(offset, callback) {
  console.log('FirstCharacteristic - onReadRequest: value = ' + this._value.toString("utf-8"), offset);
  callback(this.RESULT_SUCCESS, this._value);
};

// BLE write request
FirstCharacteristic.prototype.onWriteRequest = function(data, offset, withoutResponse, callback) {
  this._value = data;
  console.log('FirstCharacteristic - onWriteRequest: value = ' + this._value.toString("utf-8"));
  if (this._updateValueCallback) {
    console.log('FirstCharacteristic - onWriteRequest: notifying');
    this._updateValueCallback(this._value);
  }
  callback(this.RESULT_SUCCESS);
};

// BLE subscribe
FirstCharacteristic.prototype.onSubscribe = function(maxValueSize, updateValueCallback) {
  console.log('FirstCharacteristic - onSubscribe');
  this._updateValueCallback = updateValueCallback;
};

// BLE unsubscribe
FirstCharacteristic.prototype.onUnsubscribe = function() {
  console.log('FirstCharacteristic - onUnsubscribe');
  this._updateValueCallback = null;
};

module.exports = FirstCharacteristic;
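With the characteristic defined, the TODO left in the Step 2 skeleton resolves to a one-line change in app.js (shown here for reference):

bleno.setServices([
  new BlenoPrimaryService({
    uuid: 'fc00', // Custom BLE Service
    characteristics: [new FirstCharacteristic()] // from characteristic.js
  })
]);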

Step 4: Modify Characteristic for Light control

Modify characteristic.js to perform GPIO operation using the mraa module.

The characteristic initialization needs to define and initialize the GPIO pins. I have a relay connected to my setup on the Arduino digital pin 3.

// Initialize BLE Characteristic
var FirstCharacteristic = function() {
  FirstCharacteristic.super_.call(this, {
    uuid: 'fc0f',
    properties: ['read', 'write', 'notify'],
    value: null
  });
  this._value = new Buffer("0", "utf-8");
  console.log("Characterisitic's value: "+this._value);
  this._light = new mraa.Gpio(3);
  this._light.dir(mraa.DIR_OUT);
  this._light.write(0);
  this._updateValueCallback = null;
};

util.inherits(FirstCharacteristic, BlenoCharacteristic);

The BLE write request reads the incoming BLE data as a string. If the string equals “1” we turn the light on; else we switch it off. Quick and dirty.

// BLE write request
FirstCharacteristic.prototype.onWriteRequest = function(data, offset, withoutResponse, callback) {
  this._value = data;
  if (data == "1") {
    this._light.write(1);
  }
  else {
    this._light.write(0);
  }
  console.log('FirstCharacteristic - onWriteRequest: value = ' + this._value.toString("utf-8"));
  if (this._updateValueCallback) {
    console.log('FirstCharacteristic - onWriteRequest: notifying');
    this._updateValueCallback(this._value);
  }
  callback(this.RESULT_SUCCESS);
};

Do not forget to require the mraa module in the JS, if you missed reading between the lines. 😉
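Spelling it out, the top of characteristic.js now also needs:

var mraa = require('mraa'); // GPIO access on the Edison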

Step 5: Try it out

Run the node app.

node app.js

Download BLE Scanner or an equivalent app for your smartphone. Connect to “BLE Light” and use the “Custom Service”. You can now read, write and subscribe to notifications from your peripheral. Send the string “1” to turn the light on, and “0” to turn it off.

The source code of this DIY is available to fork on github here.

For any questions or suggestions, feel free to get back to me on twitter.

Machine Learning : Data Science essentials

Machine Learning is one of the top-rated industry trends, playing a key role in nearly every vertical, with Big Data and Cloud services as key enablers. This post is aimed at making it easy to explore the abilities of Machine Learning without going into too much detail. What I will be covering is merely the tip of the iceberg. Follow through, and feel free to post your feedback in the comments, or get back to me on twitter.

The Big Data pipeline

There is a simple proposed pipeline through which data is processed and churned to produce meaningful, conclusive results.

  1. Acquire Data: Gather any information that is being generated from all available sources. These could be log files, SQL databases, Document storage, Excel sheets.
  2. Extract, Clean, Annotate: Extract all relevant data from the pool, clean out erroneous or anomalous entries, and annotate or label the data appropriately.
  3. Integrate, Aggregate, Represent: Carry out the necessary correlations and present the data in a shape that suits the architecture. E.g., flatten the data into a CSV or SQL table.
  4. Analysis / Modelling: Run the purified data through a modelling algorithm.
  5. Evaluate: Compare the results with real world information to evaluate the accuracy of the model.

This pipeline is pretty standard. Anyone with a Computer Science background and enough exposure would intuitively follow a pipeline of this fashion.

The principle of Machine Learning

Traditionally, we have been using computers or machines to produce an output (O) based on an input (I), where the relation between O and I is defined by f:

O = f(I)

Machine Learning is about using computers to understand the relation between I and O, and produce f. This is what modelling is about. There are a few modelling algorithms that help produce f, which we’ll take a look at next.

Classification

When the data is being used to create a model that predicts a category for an observation, Classification algorithms are used. Each observation is a vector of parameters, and each parameter is weighted individually by the algorithm. A sample scenario would be identifying a Chair, Cat or Car based on a picture; the sample data would include pictures labelled Chair, Cat or Car.

  • Minimizing classification errors can be a hard task.
  • Classification algorithms are susceptible to imbalanced data. A few observations for a Chair, and many observations for a Car, in the training data will rarely predict a Chair.
  • Yes/No or Boolean (two-category) classifications are done using Decision trees or Binomial classification algorithms.
  • Datasets with more than 2 categories are popularly modelled using Multi-class classification.

The quality of a classification model can be plotted using the True Positive Rate (TPR), i.e. the number of positives rated by the algorithm as positives divided by the total number of positives, against the False Positive Rate (FPR), i.e. the total number of negatives classified by the algorithm as positives divided by the total number of negatives. This creates the Receiver Operating Characteristic (ROC) curve. The area under this curve is used to denote the accuracy of the model: the diagonal represents 50% accuracy (random guessing), and more area under the curve denotes higher accuracy.

[Figure: ROC curve]
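As a concrete illustration, here is how the ROC curve and the area under it can be computed with scikit-learn (my choice of toolkit; the post doesn’t prescribe one):

from sklearn.metrics import roc_auc_score, roc_curve

# Toy example: true labels and a classifier's predicted scores.
y_true = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.5]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the ROC curve
auc = roc_auc_score(y_true, y_score)               # area under that curve
print("AUC:", auc)  # 1.0 = perfect, 0.5 = the diagonal (random guessing)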

Regression

Regression operates on observations to predict numerical values; for example, temperature based on observations of city, date, time and humidity. It is necessary to handle over-fitting and under-fitting of the model against the supplied observations.

  • When evaluating the model against test data, it is important to note the difference between f(I) and the actual O. So, O − f(I) should be close to zero.
  • Computer calculations work better with smooth functions than with absolute values. Hence, [O − f(I)]² should be close to zero.
  • Applied across the training and test data, the summation of all errors Σ[O − f(I)]², also known as the sum of squares error (SSE), should be close to zero.
  • Regression algorithms minimize the SSE by adjusting the values of baseline variables in the function (see the sketch below).
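A minimal sketch of that idea, assuming scikit-learn and toy data:

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: O depends linearly on I, plus some noise.
I = np.arange(20, dtype=float).reshape(-1, 1)
O = 3.0 * I.ravel() + 2.0 + np.random.normal(scale=0.5, size=20)

model = LinearRegression().fit(I, O)       # picks coefficients that minimize SSE
sse = np.sum((O - model.predict(I)) ** 2)  # sum of squares error
print("SSE:", sse)                         # small if the model fits well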

The choice of Regression algorithm is very critical. The choice depends not only on the nature of O, but also on the relation and distribution of O. Some popular regression algorithms:

  1. Simple Linear Regression: A simple linear regression model has a single-factor I. Usually used when finding a relation between two continuous datasets.
  2. Ridge Regression: Ridge Regression is used when dealing with multiple I’s. This method is susceptible to over-fitting, especially when dealing with a large number of parameters.
  3. SVM Regression: Support Vector Machine Regression uses threshold ranges to give a zero error. The error grows in a linear fashion beyond the threshold.

There are two types of validation for Regression algorithms (both sketched below).

  1. Cross-validation: With cross-validation, the dataset is split into n folds and a model is trained on n − 1 of them. The model is then tested on the one remaining fold, and the process is repeated so that every fold is tested against once.
  2. Nested cross-validation: Popular for tuning parameters, especially tricky ones. This process simply repeats cross-validation for every candidate value of a parameter K.
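Both validations, sketched with scikit-learn on toy data (GridSearchCV here plays the inner, parameter-tuning loop of the nested idea):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, cross_val_score

X = np.arange(30, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + np.random.normal(scale=0.5, size=30)

# Plain cross-validation: train on n-1 folds, test on the held-out fold, repeat.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5)
print(scores.mean())

# Nested idea in miniature: repeat the cross-validation per candidate parameter.
search = GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]}, cv=5).fit(X, y)
print(search.best_params_)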

To sum it up, a good regression model is neither under-fitted nor over-fitted to the training data. A good model is one that is simple and suits the data in a reasonable manner, with some tolerance for error accounted for.

[Figure: an ideal model]

Clustering

As the name suggests, clustering is used to group similar observations together; for example, grouping customers with similar buying behaviours. Most clustering scenarios lack ground truth, making models very difficult to validate in application; the only way to track ground truth is through test data.

  1. K-means clustering: The K-means algorithm accepts the number of clusters to be created, and simulates clustering behaviour on the observations starting from randomly selected centers. As the simulation progresses, the centers move towards the actual centers of the newly forming clusters. This is the most popular clustering algorithm (see the sketch below).
  2. Hierarchical Agglomerative Clustering: This algorithm starts with each point in its own cluster, and the simulation grows clusters based on the distance between the two closest points.

In all clustering algorithms, distance metrics play a very important role and have a huge impact on the result. Using adaptive distance metrics that consider the local density of the data is important.
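A quick K-means sketch on toy data, again assuming scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

# Toy observations: two well-separated blobs in 2-D.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.5, (50, 2)),
                    rng.normal(5, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10).fit(points)  # we supply the cluster count
print(kmeans.cluster_centers_)  # converge towards the actual blob centers
print(kmeans.labels_[:5])       # cluster assignment per observation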

Recommendation System

Recommender systems commonly use matrix factorization. They are primarily used to recommend items to a user based on the user’s own behaviour and the behaviour of users falling in a similar category. We won’t delve into the details of recommender algorithms, as there is a wide variety of approaches, each suiting its own application.

This ends a very brief overview of Machine Learning algorithms. Feel free to post your feedback or discussions in the comments, or get in touch on twitter.


Doing it right: Your first open source project

I do frequent rounds on Github to contribute to small projects or libraries I find useful. Generally my contributions come from issues I have hit in a project and identified a fix for. Last year I made my first attempt at writing my own jQuery library. Not meaning to make anything big, this was just something I would like to use for myself, and I thought about hosting it on github to feed my self-worth. It played out pretty decently, and I noticed a few people using it as well. But something didn’t feel quite right.

Recently, I made another attempt. This time I wanted to keep things organized. I had realized that there is a difference between an Open Source project, and a well maintained Open Source project. As a developer, I would hate to rely on a project that the original author cares least about; unless I’m using it for a garage project. So what is it that makes a project well maintained?

I started working on Jello a few weeks ago, and here are the few things I made sure I followed. I might be stating the obvious here, in which case you are free to fly over to some more interesting stuff on the internet.

Here is a quick checklist:

  1. **Add every feature you wish to implement as an enhancement in the issue list.**
  This helps you and your audience keep track of your progress, and you don’t miss out on crucial items. Remember, people looking at your half-baked project can easily check the reported enhancements to know where the project is heading.
  2. **Create project milestones, and add issues to milestones.**
  Milestones can be scheduled. This gives your audience a clear idea about what to expect, and when. Once milestones are closed, they can easily be referred to when creating change logs.
  3. **Close or fix issues with commits.**
  This leaves less ambiguity in your commits. Your audience now knows what just got fixed and how.
  4. **Assign yourself to any open issues you are planning to address.**
  This develops a sense of responsibility among contributors. Unallocated issues are bound to remain on the backburner, especially when there is a volume of issues coming up. This also prevents any parallel planning during development.
  5. **Isolate the master branch from your development branch.**
  Agree or not, there can be a lot of half-baked commits and reverts that you would not want on your master. It would just confuse your audience. A dev branch makes things easy for contributors; they can remain carefree with their commits.
  6. **Mark releases.**
  Your audience might prefer the luxury of choosing an older build, as opposed to the latest one. Marking releases makes your project more reliable to depend on.
  7. **Go through some pain to write a contributor guideline.**
  A good project will demand more than one contributor, and to make on-boarding easy, make sure you put up a readme for your prospective contributors. Contributors should be instructed to describe their commits in detail for pull requests to be merged.

 

I will be trying to update this checklist as I proceed. Feel free to get back to me on twitter @omtalk, with any feedback, suggestions or rants.

 

1 week on Android, after 4 years on Windows Phone

I have been using Windows Phone as my primary phone since its inception. I started with a Windows Phone 7 device that got upgraded to Windows Phone 7.1 but never to Windows Phone 8. I got a Windows Phone 8 device that is now upgraded to Windows Phone 8.1 and will also get the Windows 10 Mobile upgrade. I develop apps for Windows Phone, and I have enjoyed the OS a lot over the last 4 years. After 4 long years, I switched my primary device to an Android device. I still use Windows Phone as a secondary device. So what triggered the change?

As a Developer

It was a gradual process. As a Windows Phone developer, life isn’t always easy. While developing apps for Windows Phone is a cakewalk, implementing competitive features in apps is still a massive challenge; especially when you have to rush to Stackoverflow for issues as small as Accelerometer shake events http://stackoverflow.com/questions/24596915/windows-phone-8-1-accelerometer-detecting-a-shake. If you are a Windows Phone developer, you’d be kidding if you were never left disappointed by the absence of APIs. Low market share is a rant that follows.

I started working on some hardware projects recently, and APIs for hardware integration are still a distant dream. While I hope things will get better with Windows 10 Mobile, there isn’t a specific release date announced yet. I had to get started soon, and Android as a platform has a lot to offer on the API front. I jumped.

As an End-user

I had no major complaints as an end-user on Windows Phone. I could pretty much manage all my emails and calendars. Any app that you can’t live without is on Windows Phone – WhatsApp, Telegram, Facebook, Twitter, you name it. But I missed the chrome in the UI. I remember a Microsoft rep justifying the dual-tone Modern UI as easy to use, with no distracting chrome or gradients around. I agreed. But somehow I missed the chrome. The UI feels like a controlled diet, as opposed to a calorie-loaded barbeque of gradient colors and shades. I know it is good for me, but I need an option to customize my way out of it. Again, the lack of APIs stopped developers from doing much about the customization part of it.

Developers can only go as far as customizing the lock screen image. Some geniuses, like the devs at Facebook, have created apps that add notifications inside your lock screen “image”. But it still isn’t the finished experience that you would expect. Custom lock screens are coming, but so far the APIs are exclusive. Android, on the other hand, has something called custom launchers. I jumped. Again.

After 1 week on Android

I bought the Mi 4i, which got delivered last week. I was impressed when I saw the launch event in Delhi, and the attractive price point the device was launched at. I like MIUI better than stock Android. This might not go down well with most Android fans, but to each his own. One week after using the device, here is what I have to say.

User experience

I am enjoying the UX a lot. There are a few, but tolerable, micro-lags in the UI, especially when scrolling through images. One thing I loved about Windows Phone was the smooth scroll, which Android wasn’t able to deliver well on mid-range devices. That seems to have changed since ART. I love the chrome. I went bananas over launchers and widgets. I loved the fact that I could choose a keyboard I am comfortable with. I’m not a fan of swiping through keys, in which case Fleksy on Android did the job for me. Do note that the stock Windows Phone keyboard is a 100 times better than the stock Android one.

Apps

So many apps to go crazy about. Ironically, the best apps I found on Android are ones acquired by Microsoft. Next lock screen, Outlook and Sunrise calendar are apps that I now cannot live without. I missed Google’s apps on Windows Phone, especially the taste of Google Now. I find it smarter than Cortana, but less friendly. Google definitely knows more about me than Cortana, while Cortana gives a wonderful human touch to the interaction. It seems Cortana will be coming to Android too, so see ya there.

Performance

The Mi 4i is a mid-range device in the Android family. It has 2GB of RAM, which is as much as my Lumia 1520, a high-end device in the Windows Phone family. This says a lot: Android eats up RAM like anything. On the performance front, I would rate Windows Phone far better than Android. Somehow Android still does not prioritize the UI over background tasks, so the UI can go unresponsive very easily. This rarely happens on Windows Phone, and close to never on a high-end device with 2GB RAM. Booo!

Battery

I have seen people around me amused by the battery life the Mi 4i provides. It has a 3120mAh battery that lasts a little more than a day for me when used a little conservatively. Screen-on time eats the battery like anything. My Lumia 625 with a 2000mAh battery fares far better with the same usage. While the 4i’s 441ppi screen can drain a good amount of juice from the battery, I feel Windows Phone, because of its wise RAM usage, does well on the battery front.

All in all

So far I feel Android takes the cake, because I am ready to compromise on performance and battery with hardware becoming dirt cheap. Windows Phone could do wonders if only Microsoft allowed its developers to innovate on the platform with strong, competitive APIs. The apps vs. market share deadlock comes in a bit later. Personally, I will still be using and developing on Windows Phone, but Android has now split my attention.

On the other hand, the build quality and hardware Mi is providing at attractive price points will soon start hurting Lumia devices where they fared well. The Lumia devices in India were placed very well between 7k and 18k with good build quality. I hate to say this, but Mi has managed to bring in a far superior build at the same price points. In the long run, Microsoft cannot rely on its devices being sold purely on attractive price points. They now have competition here.

Did I miss something? Tweet to me @omtalk, I would like to hear from you.

 

Controlling lights connected to a Raspberry Pi from a Windows Phone app

This might work just right

Control your lights with Cortana – watch this space for a DIY

I worked on a quick demo for Pune User Group’s DevCon 2014 in September last year. I’ve managed to turn it into a DIY for enthusiasts who want to use their Raspberry Pis to control lights with Cortana. Here is a quick preview; watch this space for the DIY coming later this week.