Help I'm trapped in a code factory! (http://gusclass.com/blog)
Short programming tutorials and random geek stuff.

This blog is medium continued… (Thu, 29 Mar 2018) http://gusclass.com/blog/2018/03/29/this-blog-is-medium-continued/

Not sure it will stick, but you can find my newest personal post on Medium.

Accessing the People API from C# / .NET (Sun, 14 Feb 2016) http://gusclass.com/blog/2016/02/14/1912/

The Google People API has launched; this post describes how to access the API from .NET projects using C#.

 

Code overview

The following code shows how to access the API:

using Google.Apis.Auth.OAuth2;
using Google.Apis.People.v1;
using Google.Apis.People.v1.Data;
using Google.Apis.Services;
using Google.Apis.Util.Store;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace PeopleQuickstart
{
    class Program
    {
        // If modifying these scopes, delete your previously saved credentials
        // at ~/.credentials/people-dotnet-quickstart.json
        static string[] Scopes = { PeopleService.Scope.ContactsReadonly };
        static string ApplicationName = "People API .NET Quickstart";

        static ClientSecrets secrets = new ClientSecrets()
        {
            ClientId = "YOUR_CLIENT_ID",
            ClientSecret = "YOUR_CLIENT_SECRET"
        };

        static void Main(string[] args)
        {
            UserCredential credential;


            string credPath = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.Personal);
            credPath = Path.Combine(credPath, ".credentials/people-dotnet-quickstart");

            credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                secrets,
                Scopes,
                "user",
                CancellationToken.None,
                new FileDataStore(credPath, true)).Result;
            Console.WriteLine("Credential file saved to: " + credPath);

            // Create People API service.
            var service = new PeopleService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = ApplicationName,
            });

            // List People.               
            Console.WriteLine("People:");
            GetPeople(service, null);

            Console.WriteLine("Done!");
            Console.Read();
        }

        static void GetPeople(PeopleService service, string pageToken)
        {
            // Define parameters of request.
            PeopleResource.ConnectionsResource.ListRequest peopleRequest =
                    service.People.Connections.List("people/me");

            if (pageToken != null)
            {
                peopleRequest.PageToken = pageToken;
            }            

            ListConnectionsResponse people = peopleRequest.Execute();

            if (people != null && people.Connections != null && people.Connections.Count > 0)
            {
                foreach (var person in people.Connections)
                {
                    Console.WriteLine(person.Names != null ? (person.Names[0].DisplayName ?? "n/a") : "n/a");
                }

                if (people.NextPageToken != null)
                {
                    GetPeople(service, people.NextPageToken);
                }
            }
            else
            {
                Console.WriteLine("No people found / end of list");
                return;
            }
        }
    }    
}

 

Quickstart

1. Create a project in Visual Studio and install the People API NuGet package.

2. Create a project in the Google Developer Console, enable the People API, and create an OAuth client ID of type Other.

3. From your project Credentials, copy the client ID and secret into your Visual Studio project, replacing YOUR_CLIENT_ID and YOUR_CLIENT_SECRET in the provided code.

4. Build and run. After you authenticate, you will see the contacts available to the app.

Pimp my Windows Box (Thu, 26 Mar 2015) http://gusclass.com/blog/2015/03/26/pimp-my-windows-box/

When I first started at Google, I wrote a short article on the essential utilities I add to a brand-new OSX machine. A friend at work recently asked me what I do when I set up a Windows machine, and thus this post began 🙂

He was interested primarily in development so this recipe is for a typical development machine.

For developers, Windows is a very productive environment. This is not just for Windows development on the Microsoft stack but for development targeting most typical web / mobile clients and server backends.

I’ll just break it down into two quick sections: things to install and developer productivity within Windows.

General Windows Productivity tips

I’m most productive in Windows because of the combination of tools that I choose for working on Windows and because I know the keyboard shortcuts and built-in tools extremely well relative to other OSes. I have written blog posts on this before but a few basics for being fast at Windows:

  • If you purchased a computer that wasn't "Signature Edition", wipe and reinstall Windows using your included key and media for the appropriate version. OEM bloat is a mess, and WHQL drivers are usually trustworthy and stable enough, the exception being graphics drivers.
  • Pin your most used apps to the start menu and use the Win+# keyboard shortcut to launch
  • Use Winsnap for core window management
  • Use alt+drag for adjusted window management – e.g. mouse 3 + window edge dragging to resize
  • Use Win+R and search anywhere gratuitously to launch apps quickly with known paths
    • Win+R mspaint is my favorite example. There is no faster tool for cropping and rotating screenshots
    • Win+R notepad is my extra clipboard
    • Win+R cmd is my immediate terminal
  • Add the Cygwin path to your system path by pressing Win+pause/break => Advanced System Settings => Environment Variables and then editing PATH to contain C:\cygwin\bin. Example path with more dev tools added:
    C:\Program Files (x86)\GNU\GnuPG;c:\users\gus\bin;c:\mingw\bin;c:\python27;C:\Progra~1\Java\jdk1.7.0_21\bin;C:\Users\Gus\apache-ant-1.9.4\bin;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit;C:\Program Files\Microsoft SQL Server\110\Tools\Binn;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\QuickTime\QTSystem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\CineForm\Tools;C:\Program Files (x86)\GoPro\Tools;C:\ProgramData\Razer\SwitchBlade\SDK;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32;C:\Users\Gus\AppData\Roaming\npm;C:\Program Files (x86)\Google\google_appengine;C:\Program Files (x86)\Arduino
  • Know your IDE and the keyboard shortcuts that it has specific to Windows. The Home/End/Page Up/Page Down keys are your friend.

The Apps

For development, you need at a minimum a decent compiler, editor, and command prompt. Beyond that, it's nice to have good window management, too.

Install Chrome

I work for Google and am very familiar now with their apps workflow. All my stuff is in Google assets linked to a Google account that is connected to my Chrome profile. For cloud living on Google’s ecosystem, Chrome’s essential. Bonus, you can add the Chrome app launcher to your start menu and get a “Start menu” like experience for Chrome apps.

[image: chrome_launcher]

I also make sure to install the usual extensions if they're not cloud sync'd.

Install Cygwin

Ever since reading Unix Power Tools I have insisted on having your typical suite of *nix awesome on all my boxes. There are many alternatives to Cygwin but for me, this suite of tools is the best way to get everything in one fell swoop. At a minimum, I install all of the core development tools I use:

  • Git
  • Make (and related g* compilers)
  • Python
  • Ruby
  • SSH
  • Unix utils (*grep)
  • Unix dev utils (sed / awk / etc.)
  • Vim

I usually also add rxvt because, unlike the default command window, its terminal window is horizontally resizable. Sometimes the git client from Cygwin is broken, so I end up with a random install of MinGW and Git.

After installing cygwin, I “move in” by copying my bashrc to the usual places.

Install Sublime

Sublime is not my favorite editor, but it is a great, reliable default text editor.

Install Divvy

I love Windows snap but because I’m now using Linux and OSX as well, I have become accustomed to more advanced window management. As such, I install Divvy everywhere and map my keyboard shortcuts to be consistent from machine to machine for window management.

Install Alt+Drag

Alt-drag window behavior is correct; this cannot be debated.

Install Console2

I cribbed this from Scott Hanselman's post on Console2, but in short, I dig transparent command prompts, so I run Console2. Protip: set up Console2 to run with bash by replacing the shell in settings, e.g.:

C:\cygwin\bin\bash.exe

[image: console2]

 

Install OSS Maker Developer tools

The core tools I’ve used (either as compilers, IDEs of convenience, etc) consist of:

Install Visual Studio Express (Or better)

I work on various Netduino and .NET projects, and Visual Studio is tip-top as far as IDEs go. For most at-home projects, VS Express is enough.

When using Visual Studio, there are a number of useful plugins, but one I always add is CodeMaid.

Install VirtualBox

I loves me some cygwin but VirtualBox gets me anything that I can’t get with it. For example, I built Android recently and this is a task made much simpler with package managers.

I typically run Ubuntu in VMs because it’s well supported.

Install Android Studio

I write Android apps so Android Studio is the cat’s meow. The Windows installer is pretty straightforward.

Install F.lux

I code late; f.lux is helpful in keeping me from getting eye strain and helps me sleep after a late night.

Home Automation Armageddon! (Tue, 09 Dec 2014) http://gusclass.com/blog/2014/12/09/home-automation-armageddon/

I have a friend who is new to programming and has asked me how I do it. It made me reflect for a minute on the patterns I follow when I take something new, or even completely broken, and get it working. Curious myself now, I decided to document a simple project from start to finish.

I found some relatively inexpensive "WiFi" LED light bulbs online that have an open developer community and a documented network protocol. I grabbed a set of bulbs and the WiFi controller off of the internet and then set up the whole getup using the web instructions. After playing with the provided app, I started coming up with harebrained ideas about tricks I could pull using the API. So I started hacking.

Some background on the Mi-Light WiFi

I read the documentation on the product and found some code that used the protocol.

The whole system works through a combination of (maybe?) a cloud service, the WiFi connector, and radio control of the various lights in your house. Pairing is easiest through the app, and flipping lights on and off (with the switch) seems to help when pairing. Once the lights are paired with the WiFi connector, it uses radio to control them.

Network messages to the router are extremely simple: UDP packets sent to the listener on the router are handled, and the commands are transmitted over radio to the lights. By the way, I like this protocol; it's obviously simple and does specifically what you say. It's no surprise to me that so much code exists out there working against the system.

Here's the rub. As far as I could tell, the protocol changed between the time most of the demo apps were made and when I received my lights. When I tried the code I could find, I encountered issues and needed to diagnose them, so I started coding my own tests and reverse engineering.

Testing the Network Protocol for the Mi-Light

You need a way to monitor UDP packets and to send UDP packets. My home machine happens to be Windows, so I got things rocking with a little WinPcap, Packet Sender, and SmartSniff. The plan is to use the sender to test the network API and the sniffer to monitor the responses. This way, I can rule out any potential network / protocol bugs.

To kick things off, I tested the WiFi discovery query, which is a broadcast (255.255.255.255) UDP send to port 48899 over the network that the WiFi connector is paired with.  I tracked the broadcast packets using SmartSniff and the output looks something like this:

 

[image: disco]

The packet of interest is the response from the Mi-Light: a tuple containing the IP and MAC address of the WiFi connector. At this point I gave myself a little high-five, because I now had a relatively simple discovery protocol to work with and had verified the light system was operational on my network.
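As a sketch, the same discovery pass can be written in a few lines of Python. The broadcast address and port come from the post; the query payload string is firmware-specific and not given here, so it is left as a placeholder, and the function names are mine.

```python
import socket

DISCOVERY_PORT = 48899  # the bridge's discovery listener
# The exact query payload is firmware-specific and isn't given in the post;
# replace this placeholder with the string your bridge expects.
DISCOVERY_PAYLOAD = b"..."

def parse_reply(data):
    """The reply is a comma-separated tuple that includes the connector's
    IP and MAC address; split it into fields."""
    return data.decode("ascii", errors="replace").split(",")

def discover_bridges(timeout=2.0):
    """Broadcast the discovery query and collect (sender_ip, fields) replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    try:
        sock.sendto(DISCOVERY_PAYLOAD, ("255.255.255.255", DISCOVERY_PORT))
        replies = []
        while True:
            try:
                data, (ip, _port) = sock.recvfrom(1024)
            except socket.timeout:
                break
            replies.append((ip, parse_reply(data)))
        return replies
    finally:
        sock.close()
```

Running `discover_bridges()` on the network the connector is paired with should yield one entry per bridge, mirroring what SmartSniff showed above.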

Time to try some more fun stuff. A short UDP blast to my WiFi connector on port 8899 was definitely in order. Packets out were:

00000000  42 00 55                                           B.U

And my living room lights turned on, along with all my other lights. Soo awesome. Time to try turning them off; I sent the following packets to the server at 192.168.1.174, port 8899:

00000000  41 00 55                                           A.U

Some decoding of the commands:

The first byte (41 or 42) is the command from the documentation; the rest is a packet suffix. For any of the commands, you just need to check the op code table:

Command                          Hexadecimal (byte)   Decimal (integer)
RGBW COLOR LED ALL OFF           0x41                 65
RGBW COLOR LED ALL ON            0x42                 66
DISCO SPEED SLOWER               0x43                 67
DISCO SPEED FASTER               0x44                 68
GROUP 1 ALL ON                   0x45                 69   (SYNC/PAIR RGB+W bulb within 2 seconds of wall switch power being turned ON)
GROUP 1 ALL OFF                  0x46                 70
GROUP 2 ALL ON                   0x47                 71   (SYNC/PAIR, as above)
GROUP 2 ALL OFF                  0x48                 72
GROUP 3 ALL ON                   0x49                 73   (SYNC/PAIR, as above)
GROUP 3 ALL OFF                  0x4A                 74
GROUP 4 ALL ON                   0x4B                 75   (SYNC/PAIR, as above)
GROUP 4 ALL OFF                  0x4C                 76
DISCO MODE                       0x4D                 77
SET COLOR TO WHITE (GROUP ALL)   0x42, then 0xC2 100 ms later
SET COLOR TO WHITE (GROUP 1)     0x45, then 0xC5 100 ms later
SET COLOR TO WHITE (GROUP 2)     0x47, then 0xC7 100 ms later
SET COLOR TO WHITE (GROUP 3)     0x49, then 0xC9 100 ms later
SET COLOR TO WHITE (GROUP 4)     0x4B, then 0xCB 100 ms later

Darn those set color to white commands!!! Sending two packets 100 ms apart in a network tool won't suffice, but wifi light developer don't care; this is the right problem to have. We've confirmed the network API works and are ready to code.
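A minimal sketch of what that code looks like, in Python for brevity (the op codes, the 0x00 0x55 suffix, and port 8899 come from the table and captures above; the function names are mine, and the actual project is a C# Visual Studio solution):

```python
import socket
import time

BRIDGE_PORT = 8899  # the UDP port the WiFi connector listens on

# Op codes from the table above
ALL_OFF = 0x41
ALL_ON = 0x42

def build_packet(op):
    """Every command is a single op code byte followed by the 0x00 0x55 suffix."""
    return bytes([op, 0x00, 0x55])

def send_command(bridge_ip, op):
    """Fire-and-forget UDP send; the bridge never replies to commands."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(build_packet(op), (bridge_ip, BRIDGE_PORT))
    finally:
        sock.close()

def set_white_all(bridge_ip):
    """'Set color to white' is two packets: the group's ON command (0x42 for
    all groups), then the matching 0xC2 roughly 100 ms later."""
    send_command(bridge_ip, ALL_ON)
    time.sleep(0.1)
    send_command(bridge_ip, 0xC2)
```

`send_command("192.168.1.174", ALL_ON)` reproduces the `42 00 55` packet captured above.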

The code at this point is a super simple Visual Studio project that is definitely worth keeping around for testing lights during initial setup. I might add a setup command (send group [1..4] n times at y frequency) because pairing lights during this phase is tricky; or you can fork it and do that for me.

Crawl, walk, run…

After I have a minimal test with as few software variables as possible (sigh, crawl), I tend to start coding with a reusable template (walk). In this case, sending the messages and receiving broadcast responses is the clear point of interest. Once the raw API calls work, it makes sense to create meaningful wrappers around them; that way, you can refactor the API calls independently of the code that calls the API.

To accomplish this design, I created a second project to use while building out my utility class / library, which will eventually take care of message queuing and performing API calls: all the bits relevant for integrating the lights with software.

The intermediate project used for testing the API wrapper looks something like this:

[image: intermediate_program]

It's a boring project, I know. But this is how it's supposed to be. The intermediate library is, most importantly, a bootstrap for a more exciting project that will take full advantage of the functioning library. While developing in an environment where I can't be sure anything even works, I try to minimize the variables that interfere with determining whether the issue is the client application code or the state of the system (a WiFi router and radio rig, in this case).

I filled out the methods on the Utility class. These are just bootstrapped from the original starter project that I created. The API calls are abstracted out, the constants for the various supported commands are made internal, and refactoring happens if the class gets bloated.

A first pass over the API yields the following structure:

[image: diagram_methods]
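The same layering can be sketched in Python (illustrative names only; the real library is C#): the op codes live as internal constants, the raw UDP send is isolated in one private method, and callers see only intent-level methods, so the transport can be refactored independently of the API surface.

```python
import socket

class LightController:
    """Illustrative wrapper, not the actual library: op codes are internal,
    the raw UDP send lives in one place, and callers see intent-level
    methods."""

    _ALL_ON, _ALL_OFF, _DISCO = 0x42, 0x41, 0x4D
    _SUFFIX = b"\x00\x55"

    def __init__(self, bridge_ip, port=8899):
        self._addr = (bridge_ip, port)
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _send(self, op):
        """Single choke point for the wire format: op code plus suffix."""
        packet = bytes([op]) + self._SUFFIX
        self._sock.sendto(packet, self._addr)
        return packet  # returned to make the wrapper easy to test

    def all_on(self):
        return self._send(self._ALL_ON)

    def all_off(self):
        return self._send(self._ALL_OFF)

    def disco(self):
        return self._send(self._DISCO)
```

If the protocol changes again, only `_send` and the constants need to move; callers keep saying `all_on()`.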

I also whipped together a command prompt and added console colors for flair:

[image: lightShell]

At this point I have a pretty good understanding of how well my library works. Here is the GitHub commit for this change that also has a few fun tests for checking the potential of the API. The following video shows it off:

 

Making it more robust

Using a RESTful API is an easy way to connect more services to the existing UDP service, and it should make it easier for me to extend to mobile apps or shell scripts. I haven't started it yet, but the final phase (run) is to create a Web API project with endpoints corresponding to the API calls; the controllers will make use of the library I created.

But what about testing? Where is your Test Driven Development hat?

These projects are still throwaway prototypes at this point, so I haven't yet integrated a test framework such as NUnit. Additionally, behavioral testing becomes pretty tough when you're flipping lights on and off in the house using a fire-and-forget protocol. Finally, the intermediate project which exercises the library functions as a full system test. However, having physically tested all the commands makes it much less of a chore to put proper tests together, so while things are working it's time to take a step back and protect your code.

If the prototype needs to be iterated, the first thing I do is refactor based on what I've learned while using the library. Before or during the refactor, tests are extremely helpful; if you skip them, things get out of control again in terms of the variables to debug (are the lights broken, or is it my code?).

Testing starts with the utility library that is moved to a separate project and tested as a separate component. All further development on the library becomes TDD. Subsequent projects that use the library become TDD. Test all the things.
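Because the protocol is fire-and-forget, one practical way to test without flipping real lights is to assert on the bytes handed to a fake socket. A minimal sketch (the helper names here are hypothetical stand-ins, not the library's actual API; the real project would use NUnit in C#):

```python
def build_packet(op):
    """Hypothetical stand-in for the library's packet builder:
    op code plus the 0x00 0x55 suffix."""
    return bytes([op, 0x00, 0x55])

class FakeSocket:
    """Records what would have been sent, so no lights actually flip."""
    def __init__(self):
        self.sent = []

    def sendto(self, data, addr):
        self.sent.append((data, addr))

def flip_all_off(sock, bridge_ip):
    """Hypothetical library call under test."""
    sock.sendto(build_packet(0x41), (bridge_ip, 8899))

# The "test": assert on the bytes the fake socket captured.
sock = FakeSocket()
flip_all_off(sock, "192.168.1.174")
assert sock.sent == [(b"\x41\x00\x55", ("192.168.1.174", 8899))]
```

Injecting the socket this way lets the whole command surface be covered deterministically; only a thin send layer still needs the physical rig.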

Thoughts on my prototype coding process

I can think of a number of projects that I have worked on where I followed exactly this pattern and I’m wondering at this point if everybody does this when working on similar projects. When I was working on the early Google+ samples, the process was:

  1. Create Quickstart app that tests a single API call – leave as test to confirm configuration
  2. Explore the potential of the API – what are the limits? (JavaScript deferral, fully testing APIs in client)
  3. Write code that takes full advantage of the API (create graphs using connections, use app activities, integrate best practices, and add tests)

Note that this process has worked for me on small projects but on larger projects and even more experimental projects, the process is entirely different. Perhaps worth another post in the future. Testing similar projects could also be worth a few words.

Building and running native Android apps on Windows (Tue, 11 Nov 2014) http://gusclass.com/blog/2014/11/11/building-and-running-native-android-apps-on-windows/

There are a number of well-documented approaches to developing native Android apps, but few cover developing from the Windows command line; most focus on IDEs such as Eclipse. This article covers getting going quickly on the command line using just the Android developer tools and no IDEs.

The high level steps are as follows:

  • Install the prerequisites
  • Find and build a sample’s native components
  • Package the sample as an Android app
  • Deploy the sample to a device

After I go over doing this “the hard way” I’ll introduce you to using the Fun Propulsion Labs native app utilities for simplifying the build process. Let’s get going!

Prerequisites

You will need the following components:

  1. The Android Native Development Kit (NDK) – Provides compiler tools and sample code
  2. Apache Ant – Works with the Android build tools to create Android APKs
  3. The Java Development Kit JDK

You must configure the components correctly. This means opening up your environment variables and adding a user variable, JAVA_HOME, that points to the root folder of the JDK. For compatibility, you should use a Windows short-style (8.3) path, because Ant may not work with long-style paths on Windows, e.g.:

C:\Progra~1\Java\jdk1.7.0_21

Build a sample

Navigate to the sample folder you want to build, e.g. samples\native-plasma. After navigating to the folder, build using the NDK. For example:

..\..\ndk-build

When you invoke the command, you will get output indicating the build was successful.

[arm64-v8a] Gdbserver      : [aarch64-linux-android-4.9] libs/arm64-v8a/gdbserver
[arm64-v8a] Gdbsetup       : libs/arm64-v8a/gdb.setup
[x86_64] Gdbserver      : [x86_64-4.9] libs/x86_64/gdbserver
[x86_64] Gdbsetup       : libs/x86_64/gdb.setup
[mips64] Gdbserver      : [mips64el-linux-android-4.9] libs/mips64/gdbserver
[mips64] Gdbsetup       : libs/mips64/gdb.setup
[armeabi-v7a] Gdbserver      : [arm-linux-androideabi-4.6] libs/armeabi-v7a/gdbserver
[armeabi-v7a] Gdbsetup       : libs/armeabi-v7a/gdb.setup
[armeabi] Gdbserver      : [arm-linux-androideabi-4.6] libs/armeabi/gdbserver
[armeabi] Gdbsetup       : libs/armeabi/gdb.setup
[x86] Gdbserver      : [x86-4.6] libs/x86/gdbserver
[x86] Gdbsetup       : libs/x86/gdb.setup
[mips] Gdbserver      : [mipsel-linux-android-4.6] libs/mips/gdbserver
[mips] Gdbsetup       : libs/mips/gdb.setup
[arm64-v8a] Compile        : native-plasma <= plasma.c
[arm64-v8a] Compile        : android_native_app_glue <= android_native_app_glue.c
[arm64-v8a] StaticLibrary  : libandroid_native_app_glue.a
[arm64-v8a] SharedLibrary  : libnative-plasma.so
[arm64-v8a] Install        : libnative-plasma.so => libs/arm64-v8a/libnative-plasma.so
[x86_64] Compile        : native-plasma <= plasma.c
[x86_64] Compile        : android_native_app_glue <= android_native_app_glue.c
[x86_64] StaticLibrary  : libandroid_native_app_glue.a
[x86_64] SharedLibrary  : libnative-plasma.so
[x86_64] Install        : libnative-plasma.so => libs/x86_64/libnative-plasma.so
[mips64] Compile        : native-plasma <= plasma.c
[mips64] Compile        : android_native_app_glue <= android_native_app_glue.c
[mips64] StaticLibrary  : libandroid_native_app_glue.a
[mips64] SharedLibrary  : libnative-plasma.so
[mips64] Install        : libnative-plasma.so => libs/mips64/libnative-plasma.so
[armeabi-v7a] Compile thumb  : native-plasma <= plasma.c
[armeabi-v7a] Compile thumb  : android_native_app_glue <= android_native_app_glue.c
[armeabi-v7a] StaticLibrary  : libandroid_native_app_glue.a
[armeabi-v7a] SharedLibrary  : libnative-plasma.so
[armeabi-v7a] Install        : libnative-plasma.so => libs/armeabi-v7a/libnative-plasma.so
[armeabi] Compile thumb  : native-plasma <= plasma.c
[armeabi] Compile thumb  : android_native_app_glue <= android_native_app_glue.c
[armeabi] StaticLibrary  : libandroid_native_app_glue.a
[armeabi] SharedLibrary  : libnative-plasma.so
[armeabi] Install        : libnative-plasma.so => libs/armeabi/libnative-plasma.so
[x86] Compile        : native-plasma <= plasma.c
[x86] Compile        : android_native_app_glue <= android_native_app_glue.c
[x86] StaticLibrary  : libandroid_native_app_glue.a
[x86] SharedLibrary  : libnative-plasma.so
[x86] Install        : libnative-plasma.so => libs/x86/libnative-plasma.so
[mips] Compile        : native-plasma <= plasma.c
[mips] Compile        : android_native_app_glue <= android_native_app_glue.c
[mips] StaticLibrary  : libandroid_native_app_glue.a
[mips] SharedLibrary  : libnative-plasma.so
[mips] Install        : libnative-plasma.so => libs/mips/libnative-plasma.so

Next, create the android project using the Android build tools. For example:

android update project --path . --name native-plasma --target 20

This will generate the Android project files, such as build.xml. Finally, you can build the APK using Ant. For example:

ant debug

This will generate the debug package for the application in the current folder.

Deploy and run the app

Now that you have built the APK, you are ready to install it to your Android device. The following command uses the Android developer tools again to install the app:

adb install bin\native-plasma.apk

With the app deployed to your device, you can now run it! The following image shows the native-plasma app running on my MotoX.

[image: Screenshot_2014-11-11-08-26-32]

Congratulations, you’ve built and installed a purely native Android app!

Building using the Fun Propulsion Labs utilities

The Fun Propulsion Labs (FPL) team has made a collection of utilities for native developers. These tools simplify the native development process, enable build automation, and also provide performance insights into your code. For the FPL utilities, you must first install Python. After installation finishes, add Python to your path (it was in C:\python27 for me).

With Python on your system path, it’s time to clone the FPL git repository.

git clone https://github.com/google/fplutil

Now you can try building one of the example projects. I started with buildutil_example/android. From this folder, call:

build.py -n c:\users\Gus\android-ndk-r10c

This will kick off the build and will compile the native-plasma project. After building, you can install the APK from the project sources folder just as you did before:

adb install native-plasma/bin/native-plasma-debug.apk

Building with the FPL utilities is much cleaner!

Note: if you're using Cygwin, you must go to the folder containing the Windows Android tools and add an alias from the android.bat file to just android:

pushd /cygdrive/c/Users/Gus/android-sdks/tools

ln -s android.bat android

Then you would run:

./build.py -n ~/android-ndk-r10c/ -s ~/android-sdks/

from the Cygwin command line. Note that the build can fail with OSError: [Errno 11]; if this happens, just retry once or twice and the build will get past this issue.

Running Built Samples on the new Visual Studio 2015 Plugin

There was a recent announcement from Microsoft that the next version of Visual Studio will support Android development. So, naturally, I tried it out. The sample structure used in the Visual Studio template doesn't use Java and is built around NativeActivity, so I couldn't easily port any of the existing samples. However, the emulator worked great!

To run in the emulator:

  • Install Visual Studio 2015 Preview; be sure to install the Android emulator component
  • Start the emulator from Visual Studio
  • From here, you can just drag and drop your APK onto the emulator

Using ADB with the emulator

  • Determine the device IP address by opening up the network tab of the emulator settings. In the following example, it’s 192.168.1.134
  • [image: network]
  • Connect over TCP to the emulator:
    adb connect 192.168.1.134
  • Run your install command (I was using Cygwin paths; if you use cmd.exe, these are backslashes):
    adb install ~/fplutil/examples/buildutil_example/android/native-plasma/bin/native-plasma-debug.apk
  • The app will install on the device and can be launched
    [image: native_plasma]

 

Conclusions

Building native Android apps isn't too bad, and when you're developing them, the FPL utilities can save you a number of steps. Because the tools are new, I haven't had enough time to fully explore what they are capable of, but for starters, being able to create Android apps using the conventional C/C++ pattern of beginning execution in main() is interesting! I'll look deeper into the library in a future article. For now, you can discuss fplutil with other developers and users on the fplutil Google Group. File issues on the fplutil Issues Tracker, or post your questions to stackoverflow.com with a mention of fplutil.

In celebration of getting native apps building, I produced an accurate Androidified version of me doing the happy dance:

[image: android]

Until next time!

Building a Hexacopter (Mon, 03 Nov 2014) http://gusclass.com/blog/2014/11/03/building-a-hexacopter/

It's been a while since I last wrote about multirotors, and I have learned a lot since then. The DJI Phantom I started with had a few upgrades done to it:

  • Carbon fiber blades were added
  • Body was painted black
  • The mainboard was replaced with the H3-2D upgrade kit
  • Smoother video and camera control were added with an H3-3D gimbal
  • Taller landing gear was added
  • The frame was modified to accommodate the gimbal
  • FPV was added using a FatShark Attitude v2
  • Added a video transmitter frame plane (carbon fiber as well)

 

[image: phantom]

 

The DJI that I had, the v1, had a number of glitches that periodically had to be fixed; most notably, the receiver died on me twice. After the H3-2D upgrade, the receiver issue went away and the Phantom was capable of using the H3-3D, with gimbal control. It actually is not even supposed to be able to do this. Go Phantom!

[image: phantom_2]

FPV on the Phantom is super fun and I enjoyed it for many hours of flight. Being able to see what you are recording (via a GoPro in my case) is fun and can also result in better shots. To set up the GoPro for FPV, I wired the packaged transmitter to a break-out mini-USB cable that gives you video and ground. Once connected, the video just streams to the FatShark.

I had some CAN-BUS enhancements ready to go but needed to modify the connector to support the gimbal, on-screen display (OSD), and Bluetooth add-ons. Unfortunately, before the CAN-BUS could be added, I lost the drone.

The Hex!

I actually was sorta excited to lose the Phantom – it started to feel cramped! I couldn’t decide whether I wanted to build an inexpensive drone or go with the good stuff, but I knew I wanted an open frame and potentially wanted a hexacopter. I ended up going cheap 🙂 I found an ARF (almost-ready-to-fly) kit hexacopter that had pre-chosen parts:

  • F550 clone body
  • 30A Electronic Speed Controllers (ESCs)
  • 1000 kv motors
  • KK multicopter computer
  • ???? transmitter/receiver package
  • battery and charger

The whole thing was sold at under $250 shipped, which was about a third of what the brand-name ones run, so I gave it a chance. The box came with no instructions and just a jumble of parts:

 

jumble

 

After eyeballing the pieces I decided to just hack it. My procedure was as follows:

  • Solder parts to the mainboard
  • Build out the frame
  • Attach all the electronic parts
  • Pair radio
  • Calibrate and fly

Solder and attach parts to the mainboard

First, you solder the ESCs to the mainboard. Be extra careful that the + and – match: black is negative (ground), red is positive. Next, solder power to the mainboard, double-checking that the + and – match the ESCs. The following picture shows a few soldered ESCs and the power cable around the edges of the mainboard:

ESCs_mainboard

 

Build out the arms

Attach motors to arms using Phillips head screws. This is relatively easy, just be careful to get the motors even with the arms where they attach. You might want to orient the arms in a way that works well for you, e.g. white in the front, yellow on the sides, and red in the back.

Slide ESC cables under the arms and attach the arms using the mainboard screws. To determine which screws were the mainboard screws, I just chose whichever screws were the most numerous. Attach and tighten the screws once you have your cables tidy. Run motor wires under the arms so that they can tuck under and meet the ESC wires. Finally, you solder pin connectors to the motors and ESCs and then heat-shrink wrap them. The following picture shows how I soldered the wires into the bullet pin connectors; it was a pain to do, but it worked.

pin_connector_soldering

With the motors and ESCs wired, you can attach the motors to ESCs and can celebrate because there’s no more soldering!

Connect the mainboard

Attach the pilot to the middle of the mainboard using double sided tape / velcro and orienting the arrow in the direction that you want to be forwards. Wire the pilot to the ESCs – pin 1 on the pilot goes to the arm at 1:30, pin 2 goes to the arm at 3:00, pin 3 to 5:00, and so on. Now attach the radio to the mainboard next to the pilot using double sided tape / velcro.

Now you will wire the radio to the pilot. You should use the KK blackboard pinout for the pilot pins and the T6EHP-E manual for the radio pinouts. When attaching the two, match the pin-1 positions on the first cables to orient the connectors correctly; if done wrong, it will throw you off later when binding your radio.

Attach the USB programmer to the pilot. To orient it, I cheated because mine only went on one way with all the pins touching.

Program the pilot

Flash the KK firmware using KK-flasher. On Windows, this requires an unsigned driver, USBasp, which requires you to do bad things with Windows 8. Flash with the correct controller selected – mine was the KK blackboard 168/PA (16kB flash). Choose a firmware for your configuration – I used X6 V2.9 XXControl KR by Minsoo Kim. When you flash the firmware, you’ll hear beeps; if it doesn’t work, make sure you selected the right controller in KK-flasher.

Connect the radio

This is the part where you bind the radio transmitter (TX) to the receiver (RX). On my radio, I needed to hold a pin down on the receiver while the radio was on with the throttle off. After successfully binding, I could control the motors; it scared me the first time it worked. Note: if the receiver is not binding, check the wiring on the receiver you attached to the mainboard.

Calibrate

Next, you will need to make sure all the motors are working and are spinning in the right direction. You will start by slowly powering on your motors. To do this, you first must arm them using the firmware’s trigger that you may accidentally have activated while connecting the radio.

The firmware I used had a trigger done by moving the left stick to the bottom right. When the motors spin up, observe the rotation of the motors. I printed arrows to mark the observed direction of motor spin with left corresponding to a counterclockwise turn of the motor and right corresponding to a clockwise motor spin. This was done in case I got mixed up while flipping over the hexacopter. Why are you flipping over the hexacopter?

You do this because you must rewire the motors to ensure their rotation corresponds to the directions indicated on this chart. Reversing the polarity of the motors (switching red and black) reverses the motor’s direction. So, you just change the motors based on the observed directions until all the motors are spinning correctly. It’s relatively easy because of the pin connectors.

Final assembly

With the motors set right, attach the top mechanism using the same body screws as used before on the bottom, and the frame will become much more stable. After that, fasten down the ESCs and motor cables so that they are snug against the frame. I used the included zip ties to accomplish this, and it was relatively easy to find a spot to attach given the large frame.

Next, you attach the propellers to the motors. The plastic bushings included with the propellers go into the propellers to match the screw size on the clamps (aka prop adapters – collet-type prop adapters in my kit) that attach to the motor pins. Speaking of which, the motor pins were just the bolt, nut, and crimp pin, not the included gaskets and metal connectors. The crimping end of the bolt goes down onto the motor with the tightening cone on top and the propeller pushing down on the crimp bolt.

Note that multirotor blades spin with the rolling side facing the way the blade is turning. Match the blade direction to motor direction. If you look at the assembled picture below, you will notice the curving or scooping edge of the blade pointing in the direction of the arrows showing the motor direction.

Finally, attach the battery straps and strap in the battery and aha, you have a hexacopter.

big_bertha

Hover test

After putting everything together I took it into the alley and did a quick hover test. It hovered. I brought the craft back inside and then had a good look over it. The build quality was kinda questionable, but it actually flew and seemed relatively stable and controllable. All things considered, this was a bargain.

There were a few things, though – the fit of the frame screws, the weird pinch bolts that held on the blades, and the integrity of the blade/bushing setup all feel a little rickety. It was cheap, though, and for all I could tell resembled other similarly sized hexacopters. I can’t wait to actually fly it and see what happens!

Update:

Here is the Hexa flying in the park. I was just testing it, so for safety I didn’t bring it high enough to actually fly around. Because I’m used to the Naza with altitude and position hold, this was a difficult flying experience. I’m going to try replacing the flight controller with a Naze32 equipped with a GPS antenna and will see if I fare better.

 

As you can see, lots of crashes and difficulty controlling the craft’s direction. I’m guessing there are a load of tweaks I still need to make on the FC before it is as stable and controllable as I want.

]]>
http://gusclass.com/blog/2014/11/03/building-a-hexacopter/feed/ 1
Programming an Adafruit LED matrix http://gusclass.com/blog/2014/10/30/programming-an-adafruit-led-matrix/ http://gusclass.com/blog/2014/10/30/programming-an-adafruit-led-matrix/#respond Thu, 30 Oct 2014 15:25:38 +0000 http://gusclass.com/blog/?p=1685 The plan

 

For Halloween, I am making disco-like costumes, and for them I wanted to create a light setup that would flash colored lights off of the costume. I wanted to support various modes, like a beating heart:

heart-o-ween

a strobe:

flashy

and various geometric animations:

geoms

 

I decided to build a system using an Arduino and an Adafruit Neopixel matrix. I wrote some software and built a simple button-based system/circuit that lets me control the lights and puts on a fun show. The following video shows me playing with the prototypes:

I’ll run through my overall process of hacking the system together – hope it’s helpful if you’re building your own.

Getting to where I can code

Since I’m not a hardware guy and can’t solder to save my life, I started with software. I did the bare minimum and wired jumper cables for prototyping. Because I’m lazy, clumsy, and wanted to see if the hardware even worked, I took an Arduino Uno, powered my lights directly from the 5V power pins, and connected a 1A power supply. I started with the Adafruit basic project from GitHub (license) and forked it for building out my sample grid app.

I upgraded the Arduino IDE and compiled the software. After connecting the data and power pins and running the example apps, I realized that I would need auxiliary power: the LEDs would dim when I powered them up. This indicated that I didn’t have enough power going to the Neopixel – most likely because I had been lazy and directly connected my microcontroller to the power lines on the Neopixel.

I took some scissors and cut apart a small 5V power supply I was no longer using. With the wires separated, I soldered on a JST connector, and on the Neopixel matrix I soldered the paired JST to power. Now that the lights were powered independently, I assumed that I had a working testbed for coding. This was confirmed after powering up the power supply and re-running my code from earlier.

Programming the software

Now that I could reliably control the LEDs, I was ready to start coding. I started with a simple method that added a mask function to the existing color wipe function for the Adafruit library:

//  wait - Delay after drawing (ms).
//  color - Color used for lit (1) mask bits.
//  brightness - Strip brightness passed to setBrightness (0-255).
//  shift - Horizontal offset into the mask, used for scrolling.
//  mask - The pattern to draw: 1 = lit, 0 = off.
void maskedColorWipe(uint8_t wait, uint32_t color, uint8_t brightness, uint8_t shift, byte mask[][ROW_SIZE]) {
  uint16_t i, j;

  for(i=0; i<strip.numPixels(); i++) {
    if (drawGivenMask(i / ROW_SIZE, i % ROW_SIZE, shift, mask)){
      strip.setBrightness(brightness);
      strip.setPixelColor(i, color);
    }else{
      strip.setPixelColor(i, 0);
    }
  }
  
  strip.show();
  delay(wait);
}

This function was the start of being able to properly render data to the matrix. The plan is that the on (1) bits of the mask render in the given color and the off (0) bits render as blank. The following examples show matrices specified in the program for various designs:

byte doubleheart[][ROW_SIZE] = {
  {0,0,0,0,0,0,0,0},
  {0,1,1,0,1,1,0,0},
  {1,1,1,1,1,1,1,0},
  {1,1,1,1,1,1,1,0},
  {0,1,1,1,1,1,0,0},
  {0,0,1,1,1,0,0,0},
  {0,0,0,1,0,0,0,0},
  {0,0,0,0,0,0,0,0}
};


byte doubleArrow[][ROW_SIZE] = {
  {0,1,0,0,0,0,0,0},
  {0,0,1,0,0,0,0,0},
  {0,0,0,1,0,0,0,0},
  {0,0,0,0,1,0,0,0},
  {0,0,0,0,1,0,0,0},
  {0,0,0,1,0,0,0,0},
  {0,0,1,0,0,0,0,0},
  {0,1,0,0,0,0,0,0}
};

byte flatline[][ROW_SIZE] = {
  {0,0,0,0,0,0,0,0},
  {0,0,0,0,0,0,0,0},
  {0,0,0,1,0,0,0,0},
  {0,0,1,0,1,0,0,0},
  {1,1,0,0,0,1,1,0},
  {0,0,0,0,0,0,0,0},
  {0,0,0,0,0,0,0,0},
  {0,0,0,0,0,0,0,0}
};

If it’s not apparent from the variable names, there is a heart, an arrow, and a little spike pattern in each of those grids. I chose this bit-mask layout because it makes the masks easy to produce visually in this early prototype stage.
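Before flashing the Arduino, I find it handy to sanity-check masks off-device. A quick sketch (plain Python, not part of the Arduino project) that renders one of these 8×8 masks as ASCII art:

```python
ROW_SIZE = 8  # matches the Arduino sketch's row size

doubleheart = [
    [0,0,0,0,0,0,0,0],
    [0,1,1,0,1,1,0,0],
    [1,1,1,1,1,1,1,0],
    [1,1,1,1,1,1,1,0],
    [0,1,1,1,1,1,0,0],
    [0,0,1,1,1,0,0,0],
    [0,0,0,1,0,0,0,0],
    [0,0,0,0,0,0,0,0],
]

def preview(mask):
    """Render a bit mask as text: '#' for lit pixels, '.' for dark ones."""
    return "\n".join("".join("#" if bit else "." for bit in row) for row in mask)

print(preview(doubleheart))
```

Running it prints a little heart made of `#` characters, so you can eyeball new patterns before a compile/upload cycle.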

The function needed a helper, drawGivenMask, that determines whether to render a given bit of the matrix:

boolean drawGivenMask(int row, int col, int shift, byte mask[][ROW_SIZE]){
  #ifdef FLIPMODE
  if (row & 1) {
    col = ROW_SIZE - (col + 1);
  }  
  #endif
  col = (col + shift) % ROW_SIZE;
  
  if (mask[row][col] & 1){
    return true;
  }
  return false;
}

Between these two functions, I could take a mask and color and draw pictures to the matrix. The next step was to be able to rotate the pattern around, because still images are way too Lite-Brite.

uint8_t shiftty = 1;
int scrollDir = 1;
//  speed - Speed to draw (lower is faster).
//  scrollLimit - # times to show before reversing pattern scrolling.
//  pattern - The displayed pattern, defined by 0's (off) and 1's (on).
void patternRotate(int speed, int scrollLimit, byte pattern[][ROW_SIZE]) {
  maskedColorWipe(speed, Wheel(shiftty*5), 50, shiftty, pattern);
  // Alternate effect: maskedRainbowCycle(speed, 50, shiftty, pattern);

  // Ping-pong the shift between the ends of the scroll range.
  shiftty += scrollDir;
  if (shiftty > scrollLimit){
    scrollDir = -1;
  }
  if (shiftty <= 0){
    scrollDir = 1;
  }
}
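The shiftty/scrollDir bookkeeping produces a ping-pong scroll: the shift walks up past the limit, then reverses and walks back down. A small Python simulation of that same logic (illustrative only, not part of the sketch):

```python
def pingpong_shifts(scroll_limit, steps):
    """Simulate the shift value over `steps` frames, reversing direction
    after passing scroll_limit and again after reaching zero."""
    shift, direction = 1, 1
    out = []
    for _ in range(steps):
        out.append(shift)
        shift += direction
        if shift > scroll_limit:
            direction = -1
        if shift <= 0:
            direction = 1
    return out

print(pingpong_shifts(3, 8))  # → [1, 2, 3, 4, 3, 2, 1, 0]
```

Note that the value overshoots the limit by one frame before reversing, exactly as the Arduino version does; for a scrolling mask the extra frame is harmless because the shift is taken modulo ROW_SIZE.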

Note that I use the HSV hack from one of the other example apps to render a nice rainbow while the colors rotate. At this point I had spinning designs like the following gratuitous gif:

demo_anim_neopixel

Next, I added code for switching patterns when a button is pressed:

void checkButton () {
  // val is the latest reading of the button pin (taken elsewhere, e.g. via digitalRead in loop()).
  if (val == HIGH) {
    if (bounceCount < bounceLimit){
      bounceCount++;
    }else{
      pattern++;
      bounceCount = 0;
    }
    if (pattern > patternLimit){
      pattern = 0;
    }
  }
}
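The counter here is doing cheap software debouncing: the pattern only advances after the pin has read HIGH for bounceLimit consecutive polls, so electrical noise on a single poll doesn't skip patterns. The idea, sketched in Python with a slightly hardened variant that also resets the count on a LOW reading (the names are mine, not from the sketch):

```python
def make_debouncer(bounce_limit):
    """Return a poll(reading) function that reports a press only after
    `bounce_limit` consecutive HIGH readings."""
    count = 0
    def poll(reading_high):
        nonlocal count
        if not reading_high:
            count = 0          # reset on any LOW reading
            return False
        if count < bounce_limit:
            count += 1         # still accumulating stable HIGH readings
            return False
        count = 0
        return True            # stable press: advance the pattern
    return poll

poll = make_debouncer(3)
readings = [True, True, True, True, False, True]
print([poll(r) for r in readings])  # → [False, False, False, True, False, False]
```

The same effect can be had with an interrupt and a timestamp, but a poll counter is hard to beat for a one-button costume controller.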

And a switch for changing the pattern:

  switch (pattern) {
    case 0:
      patternRotate(50, 256, flatline);  
      break;
    case 1:
      patternRotate(50, 32, doubleheart);
      break;
    case 2:
      patternRotate(50, 256, doubleArrow);  
      break;
    default:
      pattern = 0;
      break;
}

Next, I added the switch and started completing the hardware.

Building out the hardware system

With the software in place and working, it was time to get the actual decorations mobile! I wasn’t sure it could cover the power needs, but I found a 1A 5V USB battery used for charging phones that could potentially work for my project. Emphasis on USB – this is extremely convenient for powering Arduinos or other peripherals that take USB power in. The plan here was to split off power from the battery to both charge the Arduino over micro USB and power the lights over the same power line. The circuit design looks something like this:

circuit

 

Of note, the Arduino digital line is connected to the Neopixel on pin 13, a button circuit is set to the analog pin, and the Neopixel power line is buffered by a small capacitor. It’s recommended that you also use a resistor on the data line to reduce noise.

After putting together the circuit, I was ready to go. Again, the key to keeping it simple was the battery. I used the USB phone charger and then split apart a micro USB cable. The cable was wired into the circuit on one part, powering the Neopixel, and then split off to resume power to the micro USB end that was used to power the Arduino micro. As opposed to my “prove it works” prototype, the power here was no longer going through the Arduino to the Neopixels.

Advanced patterns / animations

Now that I knew my costume piece was complete, it was time to add more complex patterns. The first draws a plaid-like pattern:

void grid(int delayMs, uint32_t color) {
  uint16_t displace;

  for (displace = 0; displace < ROW_SIZE; displace++){
    for (int i=0; i < ROW_SIZE; i++){
      for (int j=0; j < ROW_SIZE; j++){
        if (i == displace || j == displace || (i == (ROW_SIZE-1) - displace) || (j == (ROW_SIZE-1) - displace)){
          strip.setPixelColor(j + (i * ROW_SIZE), color);
        }else{
          strip.setPixelColor(j + (i * ROW_SIZE), 0);
        }
      }
    }
    strip.show();
    delay(delayMs);
  }
}

The next draws a shrinking box:

void box(int delayMs, uint32_t color) {
  uint16_t displace;

  for (displace = 0; displace < ROW_SIZE; displace++){
    for (int i=0; i < ROW_SIZE; i++){
      for (int j=0; j < ROW_SIZE; j++){
        if ( (i >= displace && i <= (ROW_SIZE-1)-displace) && (j >= displace && (j <= (ROW_SIZE-1) - displace)) ){
          strip.setPixelColor(j + (i * ROW_SIZE), color);
        }else{
          strip.setPixelColor(j + (i * ROW_SIZE), 0);
        }
      }
    }
    strip.show();
    delay(delayMs);
  }

  for (displace = ROW_SIZE-1; displace > 0; displace--){
    for (int i=0; i < ROW_SIZE; i++){
      for (int j=0; j < ROW_SIZE; j++){
        if ( (i >= displace && i <= (ROW_SIZE-1)-displace) && (j >= displace && (j <= (ROW_SIZE-1) - displace)) ){
          strip.setPixelColor(j + (i * ROW_SIZE), color);
        }else{
          strip.setPixelColor(j + (i * ROW_SIZE), 0);
        }
      }
    }
    strip.show();
    delay(delayMs);
  }
}

Another is the beating heart:

void heartPulse() {
  // New test patterns.
  int steps = 20;
  int speed = 40;
  uint32_t color = strip.Color(255,0,137);
  
  for (int i=0; i < steps; i++){  
    maskedColorWipe(speed, color, (100 / steps) * i, 0, doubleheart);
  }
  maskedColorWipe(250, color, 100, 0, doubleheart);
  checkButton();
  
  for (int i=steps; i > 0; i--){  
    maskedColorWipe(speed, color, (100 / steps) * i, 0, doubleheart);
    checkButton();
  }
  
  maskedColorWipe(500, color, 100, 0, maskEmpty);
  checkButton();
  
  // Prevents bouncing.
  if (pattern < patternLimit) pattern = 0;
  return;
}

I plan on adding another animation or two before finishing but am not sure what to add. Feel free to suggest animations.

Tweaks

The lights are super bright, so I added various diffusion layers to the LED matrices. For one, I used the bubble wrap that came as packaging for the LEDs. For the other, I used diffusion paper ordered online. In both cases, the diffusion helps tremendously to prevent blindness (exaggeration, but my eyes have gotten hurt) and also blurs out the rendered patterns to make a nice color bleed, as you can see at the end of my demo video.

For some of the patterns, it was difficult to detect the button press at the correct time, so switching them to rotate for a fixed period of time made sense. I would love to refactor the code to use an interrupt-driven button but didn’t have time to really get into doing it the way it’s done on my Netduino blinky project.

Closing thoughts

First and foremost: the Neopixel is super bright. I could have gone with a less powerful setup than I chose and powered more lights, reduced power requirements, and lowered the heat coming off of the thing. Despite the power and brightness issues, programming the Neopixel lights is a dream – I never had issues with rendering colors to pixels, and using a pre-fab matrix saved loads of time. This was very much thanks to the Adafruit Neopixel starter library and examples (license). The portable battery system worked surprisingly well too; I went from working in a test environment to mobile in no time.

In conclusion, Neopixels coupled with an Arduino are a great way to make blinky things; check them out if you’re into bright flashy objects. Hopefully my matrix code can give you a jump-start if you are looking for a quick and dirty matrix controller.

]]>
http://gusclass.com/blog/2014/10/30/programming-an-adafruit-led-matrix/feed/ 0
Using page tokens to retrieve extended results from Google APIs http://gusclass.com/blog/2014/08/18/using-page-tokens-to-retrieve-extended-results-from-google-apis/ http://gusclass.com/blog/2014/08/18/using-page-tokens-to-retrieve-extended-results-from-google-apis/#respond Mon, 18 Aug 2014 03:42:52 +0000 http://gusclass.com/blog/?p=1656 Typically, Google APIs return between 20 and 100 results from API calls. This is done to minimize the amount of unnecessary data sent to API clients, which improves client performance. But what do you do when you want additional results?

The quick answer is that to get additional results, you must pass a special variable called a page token. The page token represents the position in the result set you are looking at, like a bookmark in a book, and is returned when additional results are available.

In this blog post, I will give you a very basic reference for how to use page tokens to seek deeper into query results.

A basic sample – Url to ID

To demonstrate getting additional results, I have created a demo app, URL to ID, that, given a Google+ URL, will try to determine the post associated with the URL. You can see the demo Google+ URL to Post ID app here.

The app works as follows:

  • Parse the URL parts to determine the Google+ user associated with the post
  • Search the user’s posts until you find the post URL
  • Return the post URL when the post is found

Because the post you are searching for could be further back in a user’s history than 20-100 posts, you must continue searching through additional pages of results until you find the matching URL.
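The search loop boils down to a standard page-token walk: request a page, scan its items, and repeat with the returned nextPageToken until you find a match or run out of pages. A language-agnostic sketch in Python, where fetch_page is a stand-in for the real API call:

```python
def find_post(fetch_page, post_url, max_pages=10):
    """Walk pages of results until an item's url matches post_url.

    fetch_page(page_token) must return a dict shaped like the API
    response: {"items": [...], "nextPageToken": ...}.
    """
    token = None                        # None requests the first page
    for _ in range(max_pages):          # cap requests to respect quota
        resp = fetch_page(token)
        for item in resp.get("items", []):
            if item.get("url") == post_url:
                return item
        token = resp.get("nextPageToken")
        if token is None:               # no more pages to search
            return None
    return None

# Fake two-page result set for illustration:
pages = {
    None: {"items": [{"url": "a"}], "nextPageToken": "p2"},
    "p2": {"items": [{"url": "b"}, {"url": "c"}]},
}
print(find_post(pages.get, "c"))  # → {'url': 'c'}
```

Capping the number of pages matters in practice: each page is a billed/quota-counted API call, which is also why the JavaScript version below throttles its retries.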

Using the JavaScript Google API client library

The easiest and most reliable way to use Google APIs from JavaScript is to use the Google API client library. To initialize the client library, you must include the sources from Google:

<script src="https://apis.google.com/js/client.js?onload=handleClientLoad"></script>

Because I’m specifying handleClientLoad in the onload parameter, the client library will call the handleClientLoad function when the script has been initialized. The following function, called after the script initializes, constructs the client library from the JSON discovery document for the Google+ API and sets the simple API key internally within the client library.

  /**
   * Handles loading the API clients from endpoints.
   */
  function handleClientLoad(){
    console.log('Loading plus api...');
    gapi.client.setApiKey(key);
    gapi.client.load('plus', 'v1', function(){console.log('...done loading plus.')});
  }

Something very important to note on the callback function triggered by the client.js script: it must be defined before the script loads. Always put the script include after you have defined your callback function. Now that you have loaded the client library and initialized it with the Google+ discovery document, you will be able to access Google+ APIs through the gapi.client.plus.* APIs. Worth noting is that you can load ANY of Google’s APIs in this manner. For example, you could load the Google Drive v2 API as follows:

gapi.client.load('drive', 'v2');

At this point, we’re ready to query Google+ for the relevant social activities.

Querying Google APIs – the basics

At a basic level, the relevant Google API is called as follows:

    gapi.client.plus.activities.list({userId: userId}).execute(function(resp){
          // Perform operations here...
        });

In the API call, the user ID, parsed from the input URL, is passed via Google’s RESTful endpoints to the API method activities.list. You can explore the API call and resulting data using the Google API explorer and searching for the relevant Google+ API. The execute method accepts a function that is called when the API query finishes. When exploring the API and testing results from my own apps, I do the following:

    gapi.client.plus.activities.list({userId: userId}).execute(function(resp){
          console.log(resp);
        });

This way, I can see the data returned from the API call without consulting the reference documentation every time. What’s even more convenient is using the developer console to trigger API calls and check results while you develop.

Traversing results – the actual implementation

Now that you’ve seen the basics of using the Google APIs from the JavaScript client library, let’s take a look at the API call in the code itself. The following code is how I’m querying the Google API:

  /**
   * Performs the API queries for searching.
   *
   * @param {String} searchUrl The Url containing the user ID for searching.
   * @param {int} queryCount The number of API calls made.
   * @param {String} nextPageToken The next page token for paged api calls.
   */
  function searchForUrl(searchUrl, queryCount, nextPageToken){
    var userId = searchUrl.split('/')[3];

    gapi.client.plus.activities.list({userId: userId,
        pageToken: nextPageToken}).execute( function(resp){
          handleActivities(resp.items, searchUrl, queryCount, resp.nextPageToken);
        });
  }

In the above code, one parameter beyond those in the preceding examples – the page token – is passed to the Google API endpoint. When the page token is null, the first page of data is returned; when it is present, the specified page of data is returned. Let’s look at the code for handling the result set:

  /**
   * Parses and stores activities from the XMLHttpRequest
   *
   * @param {Object} activities The response activity objects as an array.
   * @param {String} postUrl The URL of the post.
   * @param {int} queryCount The number of API calls.
   * @param {string} nextPageToken The next page token.
   */
  function handleActivities(activities, postUrl, queryCount, nextPageToken){
    for (var activity in activities){
      activity = activities[activity];
      if (activity['url'] == postUrl){
        targetActivity = activity;
        document.getElementById('result').value = 'ID is: ' +
            activity.id + '\n' +
            'PlusCount is: ' + activity.object.plusoners.totalItems + '\n' +
            'Replies are: ' + activity.object.replies.totalItems + '\n' +
            'Reshares are: ' + activity.object.resharers.totalItems + '\n';
        isFound = true;
      }else{
        console.log(activity);
      }
    }

    if (queryCount < maxQueryCount && !isFound){
      queryCount++;

      // throttle calls using timer to avoid reaching query limit
      console.log('retrying with ' + nextPageToken);
      setTimeout(function(){
        searchForUrl(postUrl, queryCount, nextPageToken);
      }, 100);
    }
  }

In the above example function, the activities are passed as a JavaScript array of objects. Of note, objects returned from Google API calls are typically returned in the items parameter, which was passed to this function in the API callback. Details of the activities object are described in the Google+ REST API documentation. To traverse the result set, the page token is passed back to the same function called before in order to keep searching for the social activity.

Closing Thoughts

When you need to get more results than the initial window of results, use page tokens to retrieve additional results.

]]>
http://gusclass.com/blog/2014/08/18/using-page-tokens-to-retrieve-extended-results-from-google-apis/feed/ 0
Using Google APIs from Console apps in .NET http://gusclass.com/blog/2014/07/29/using-google-apis-from-console-apps-in-net/ http://gusclass.com/blog/2014/07/29/using-google-apis-from-console-apps-in-net/#comments Tue, 29 Jul 2014 16:26:16 +0000 http://gusclass.com/blog/?p=1610 A common issue I have noticed recently is developers having trouble building out console apps with the Google APIs. In this post I’ll give you a few short demos for API calls and authorization. The following screenshot shows the token verification demo running from the console:

dotnet_console_app

 

 

Running the Demo Solution / Projects

The demo for these console apps is available from my GitHub account. To clone the project, run the following from your shell:

git clone https://github.com/gguuss/google-dotnet-demo

After you have cloned the project, open the solution file, GoogleDotNetDemo.sln. Right-click on the solution, select Restore NuGet Packages, and then press F5 to try running the apps. If everything worked correctly, the app should build and you will see the default app, the token verification demo, start running.

Demo of Simple API Calls: Token Verification from Console

The first example I’m going to show is a console app that performs Google OAuth 2 token verification. This is the most concise demo I could come up with and it’s actually useful if you want to check your access tokens during debugging.

First, set up your project: from the NuGet package manager interface in Visual Studio, add Google.Apis.Oauth2.v2.

The following code is the full source of the app’s main function:

        private static void Main(string[] args)
        {
            Console.WriteLine(@"Input an Access token:");
            String accessToken = Console.ReadLine();

            Oauth2Service service = new Oauth2Service(
                new Google.Apis.Services.BaseClientService.Initializer());
            Oauth2Service.TokeninfoRequest request = service.Tokeninfo();
            request.AccessToken = accessToken;

            Tokeninfo info = request.Execute();
            Console.WriteLine(@"Scope: " + info.Scope);
            Console.WriteLine(@"Expires: " + info.ExpiresIn);
            Console.ReadLine();
        }

To make the API call, I’m just getting a service object for OAuth 2, constructing the request, adding the token, and finally executing the request.
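Under the hood, that request is just an HTTPS GET against Google’s tokeninfo endpoint with the token as a query parameter; the client library wraps it for you. A sketch of building that request in Python – the v2 endpoint path shown here was current at the time and may have changed, and the token string is a placeholder:

```python
from urllib.parse import urlencode

# v2 path current when this post was written; check the OAuth docs for the latest.
TOKENINFO = "https://www.googleapis.com/oauth2/v2/tokeninfo"

def tokeninfo_url(access_token):
    """Build the tokeninfo request URL. Issuing an HTTPS GET against it
    returns JSON with fields like `scope` and `expires_in`."""
    return TOKENINFO + "?" + urlencode({"access_token": access_token})

# "ya29.EXAMPLE" is a placeholder, not a real token.
print(tokeninfo_url("ya29.EXAMPLE"))
```

Seeing the raw request makes it clear why this demo is handy for debugging: you can paste the same URL into a browser to inspect a token by hand.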

Demo of Authorization: Get Google+ Profile information

In this demo, the user is authorized and then their profile information is retrieved and displayed.

First, as before, we’ll enable the required Google API client package, Google.Apis.Plus.v1. This will install additional client library dependencies.

The following is the full source of the main function:

        // These come from the APIs console:
        //   https://code.google.com/apis/console
        public static ClientSecrets secrets = new ClientSecrets()
        {
            ClientId = "YOUR_CLIENT_ID",
            ClientSecret = "YOUR_CLIENT_SECRET"
        };


        static void Main(string[] args)
        {
            Console.WriteLine(@"Starting authorization...");

            UserCredential credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                secrets,
                new[] { PlusService.Scope.PlusLogin },
                "me",
                CancellationToken.None).Result;

            // Create the service.
            var plusService = new PlusService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = "Console Google+ Demo",
            });

            Person me = plusService.People.Get("me").Execute();

            Console.WriteLine(@"Authorized user: " + me.DisplayName);
            Console.Write(@"Press enter to exit.");
            Console.Read();
        }

You construct your credentials object with parameters from the Developer API console, authorize the user to get your credential, create your service object from the credential, and then you can make your API calls with the service object.

Closing thoughts

Authorizing the user from the console is pretty easy. Note that the web browser is necessary because the user must input their credentials on the Google OAuth server.

More information can be found at:

 

]]>
http://gusclass.com/blog/2014/07/29/using-google-apis-from-console-apps-in-net/feed/ 6
You will eventually own a smart watch. http://gusclass.com/blog/2014/07/23/you-will-eventually-own-a-smart-watch/ http://gusclass.com/blog/2014/07/23/you-will-eventually-own-a-smart-watch/#respond Wed, 23 Jul 2014 19:49:23 +0000 http://gusclass.com/blog/?p=1608 For the past ~year – the time since Pebble made headlines for its crowdfunded success – people have been hyping smart watches. It can get annoying if, like me, you comb gadget blogs. Well, not really annoying, but not particularly exciting.

During the hype, I have been skeptical of the utility of these new devices given there are activity trackers, phones, etc, which offer the same functionality with more power. It’s been tried before and it’s weak sauce.

I had made up my mind that smartwatches are cute gadgets but not very useful. I mean, you have a cell phone already, right? After spending some time with Android Wear, though, I’m sold: these are going to become very popular because they are useful. I’ll try to explain how in this blog post.

My first Smartwatch

[Image: Timex Datalink Model 150]

I’m not sure I’d really call it a smart watch per se, but the Timex Datalink was the first watch I ever had that could communicate with a computer. This watch was pretty awesome at the time. I vaguely remember making an eager sojourn to the EB Games (Babbage’s, for you in the UK) in the local mall and picking up a giant package containing this watch I had lusted over forever and earned by doing odd jobs fixing computers. I knew every imaginable thing that a 14-year-old nerd can know about an obsessed-over gadget.

Here’s why I was obsessed with the Timex – it was a technological marvel. Because the connectors on computers were too big (RS-232 or parallel? Nerd, please!), the Timex Datalink used a photosensor as a sort of crazy modem that would receive address book information that you could input on a PC using proprietary software. To download the data to the watch, you would hold your wrist up to the CRT screen and the screen would strobe to transfer the data.

The flashing was awesome. It was like a rave for your watch! The software transfer method was a potentially fatal epilepsy hazard though.

Sorry, that got dark fast. I don’t think there were ever any Timex Datalink-related epilepsy incidents but the communication software was pretty crazy in terms of strobing.

Anyways, this whiz-bang gadget of the 90s was a true marvel of its time. You no longer needed a calculator watch to carry your contacts around, and you didn’t have to look like a complete geek just to keep them with you. Hold on… I take that back, this watch is about as nerdy as it gets. It was a thoroughly dysfunctional device, and just as impractical. For one, you lost functionality when you didn’t have a computer around. For another, why reinvent the wheel just for the sake of making it digital? Alan Cooper would have a joyous time wondering at the complexity of the thing.

I really wish I had a video of the unboxing; it would be hilarious to see. I opened up the blister packaging and installed the software from floppy disks because CDs were too bougie. After the app installed on Windows 95, I manually input all my contacts into the horrid client app like a data entry specialist, then stared transfixed as the flashing CRT transferred the data to the watch. The technology was mind-boggling at the time, and I proudly wore the watch for a few months, giving my fellow nerds the smug grin of science in passing. The wonder wore off before long, though. The tipping point came when my friend asked me why I had it. I started explaining myself, and then he pulled out a notepad and pencil, scribbled down names, addresses, and phone numbers, and proved this smartwatch thing was obsoleted by low tech.

The watch is gone and I don’t miss it a bit aside from the nostalgia of the thing.

 

My first “connected” Smartwatch

Synchronizing contacts wasn’t enough to justify using a smartwatch.  You need NEWS, updates, statistics, and all sorts of information made convenient to you for it to actually be useful.  That’s what you need, right? I’m sure a few product managers at Microsoft had a hallway conversation along those lines and then kickstarted the SPOT watch project, with a .NET runtime they were convinced would blow up the developer ecosystem and create a new segment…

Naturally, the SPOT watch never took off.

Completely by chance, I was a beta tester on the Microsoft SPOT watch around 2003-2004.  The device I had was large for a watch, with a display similar to a Nokia 3390’s but circular, surrounded by a giant LCD bezel.  The device used radio to receive notifications such as weather information and MSN alerts like sports scores, stock quotes, and news headlines.  I don’t remember caring much about using the thing; I had most likely participated because I just wanted to see what Microsoft was like and to play with new toys.

In fact, I remember scoffing at the idea of the device because I was then rebelling against commercial software, and *cough* Microsoft can sometimes be considered the villain in that narrative.  I can’t say I wasn’t curious, though. I used the watch regularly over the course of the program.  My conclusion: it’s neat, but the information I was getting from the device was useless – I didn’t care about the weather in Seattle; it’s cloudy, duh.  The MSN notifications were also pretty useless because I couldn’t have cared less about what was going on outside of where I was.  After some time with the watch and a number of feedback sessions, I found myself impressed with what it did but bored at the same time.

There was also the factor that social networks were virtually non-existent at the time and notifications from Facebook / MySpace were done through email and nobody cared anyways.


I did have a feeling that Microsoft was on to something.  When the production devices came out later, I watched from the sidelines because there wasn’t that much value to the thing and there were service fees tied to the device.  I’d wager Microsoft was doing something crazy / prohibitively expensive like leasing radio time from radio towers (or satellites?) to transmit the signals.

I nearly joined the SPOT team during a mass exit around 2007 when the Digital Media Division at Microsoft was disbanded. Upon inspection, the team was clearly being defragged so I instead started working on Windows touch features.

 

My first Smartphone Smartwatch

I had been interested in the Motoactv when it launched but couldn’t justify the starting price.  Later, I was looking at GPS Sport Watches and the Motoactv looked pretty good considering it was a tracker with support for phone connectivity and music.  Motorola had also just slashed the price of the Motoactv.

[Image: Motorola Motoactv]

If you’re thinking what I’m thinking at this point… then you are thinking that Motorola did their market research. Yup, I bit.  I impulsively scooped up the gadget and felt buyer’s remorse creeping up on the drive home.

I calmed myself and rationalized the device as an activity tracker — I really just needed a run / bike tracker — how bad could it be?  At first use, I was very disappointed.  The battery life was bad: I could get what felt like 8 hours with minimal use. I took a deep breath; then again, this was a watch-sized Android device, what could I expect? I wasn’t blown away by its looks either – it was a giant, heavy watch. Then I started to appreciate the thing. At the end of the day, the device got the job done of tracking runs and counting steps, even if a Fitbit or a GPS sport watch would accomplish either of those tasks far better than the chimeric Motoactv.

Over time, I realized the Motoactv really was not at all what I expected it to be: it was entertaining. It played music and functioned as a Bluetooth headset really well. The included headphones were better than any other athletic headphones I own. As others have discussed, it was a good fitness gadget in its own right in this sense. There really wasn’t much like it out there at the time. I was still annoyed by the device, though. The battery life was dismal. For what the Motoactv was to me, I could not justify having to mess with the strap and find an extra micro USB cable to charge it when it was dying in the middle of the day. Half the time I wanted to use it, the watch was dead.

Then Motorola released a Motoactv firmware update that made a significant difference in battery life. Under ideal circumstances, the device could now last just as long as my phone with room to spare. It also charged faster. I feel like I should say that again: Motorola performed some sort of firmware witchery that doubled or tripled the battery life of their wearable, and made the update free for everyone. I was stoked; my rather decent fitness tracker was reborn, now on par in battery life with the devices I had passed over to get the shinier tracker – let’s face it, an active-matrix touch screen of the Motoactv’s size doesn’t come at zero cost to battery life.

Later, Motorola released an app that would let you sync messages from your phone to your watch. This app was amazing in concept. Receive texts, tweets, and Facebook messages on your Motoactv watch. When it worked, this feature was a game changer. However, it barely ever worked for me. Every once in a while the app and watch would start dancing nicely together and I would simultaneously get important notifications on the watch and my phone and could just verify the notification on the watch. Then, when I most needed it, all my notifications would break.

I was still a huge fan, however; the Motoactv convinced me that this was at least nifty and cute. If it had worked perfectly, it would have been amazing, but the impact on battery life and the Schrödinger’s cat game with my notifications were too much. Maybe it was too soon for the market, or maybe the bridging between Android and the device was impossible at design time. To sum it up, something about the device was not quite seamless enough to be perfect.

Over time, the watch became a cult classic. The touch screen was decent, the screen itself was actually pretty good looking. Eventually, a rag-tag team of Android enthusiasts hacked the device and gutted some of the less-used watch features. The results were astounding – some users claimed a week of battery life with the lean and mean updates. Developers integrated apps and games – even Angry Birds made it to the device.

I enjoyed my Motoactv. It was big, it was clunky, but hot dang it was a glimpse at the future. I wore the thing as a fun wearable and very regularly as a run tracker / MP3 player. Sadly, I dropped it just the wrong way after a workout and shattered the screen. That was the end of it for me. I wasn’t too upset to lose it, though; my phone did everything it did and more. I mean, MP3 player and fitness tracker – phones do both well enough to make it difficult to justify another device. The thing was big and ugly too, definitely the Nokia N-Gage of the smart watches of its day.

My first actual Smartwatch

For I/O 2014, I was fortunate enough to get access to an Android Wear smartwatch. The device was inactive when I received it, and I did not get it functioning until a while after the event. In fact, I left it unused for longer than I typically give a gadget. I was uninterested due to my experience with battery life in the early Motoactv releases and the notification roulette that made me nervous about wearable notification reliability.

[Image: Android Wear smartwatch]

A coworker who is extremely skeptical of early technology trends told me he found the device indispensable. He described a scenario where he couldn’t use his hands but was still able to receive and respond to texts without distraction, using just a glance and voice commands or canned responses. I decided to give the little watch a second chance.

Pessimistically, I charged up the watch and waited while it booted. Within a few minutes I was bridged between my phone and the watch. It was pretty dang seamless. Later, I received a hangout message, “Want to meet for lunch?” – I touched the screen, distrustingly slid up the voice input. The canned response “yes” appeared first, nice. Click, send, done. At this point I got it.

That whole process of: Remove phone from pocket » Look at screen » Unlock screen »  Find app / select notification » Read or respond.

… is broken.

The watch fixes it. This was the first time the watch delighted me. Everything that had kept the Motoactv from being a game-changer was there in that one simple interaction. A repetitive task that I perform on my phone had just been optimized by a noticeable margin.

I feel my wrist vibrate again, glance at the watch, and see a Google Now notification that a package was delivered. Yes, I know what you’re thinking: I also got this notification (redundantly) on my phone. The key point is that the check / dismiss interaction pattern on the watch is much less disruptive than the phone pattern you perform to accomplish the same thing. By the end of the first day, I had become confident with the device. I experimented with sending messages using voice. It just worked. By this point, I would not give up the device without a fight; I was committed to the cult of the smart watch.

At the end of the second day with the watch, I returned home from work and tapped my phone to my Moto Stream. After listening to music for a few minutes, a duplicate track showed up in my playlist. I reflexively looked at my watch intending to do something, and before I realized I couldn’t, I saw an icon that let me skip tracks. Delighted again, I skipped to the next song from the watch. It’s a subtle thing, but having seamless music control from a watch is pretty darn awesome.

Later, I was testing a video of last weekend’s San Francisco bike ride on my TV using my Chromecast. Somewhere along the line, I wanted to pause it and check whether a task had completed on my computer. I glanced at my watch to see whether enough time had passed and, delightfully, there was a new notification: I could control the Chromecast from my wrist, no apps or configuration necessary. This is why the watch, like the Chromecast, is a game changer. It’s easy to use, works with minimal configuration, and is useful.

Over time, OEMs will create watches that are cheaper, have longer battery life, and are attractive enough to entice fashionistas. The Moto 360 is pretty svelte, for example:

Trust me, smart watches are here to stay, and if you’re reading this there’s a good chance you will eventually own one and find it indispensable.

See Also

]]>
http://gusclass.com/blog/2014/07/23/you-will-eventually-own-a-smart-watch/feed/ 0