Zac Gross

Code & More

Postgres With Entity Framework Code First

| Comments

Npgsql is the most popular Postgres data provider for .NET. None of the Entity Framework integration examples I found online used the code-first paradigm, so after a lot of testing I have posted a working example of EF code first with Npgsql below.

Note: this example requires the database to already exist. EF automatic creation/migrations will not work.

First, a DB configuration class; this ensures the correct connection factory is used. My testing found this was the only way to get it set: various connection string formats were ignored or overridden at runtime.

DB Config
public class NpgsqlConfiguration : System.Data.Entity.DbConfiguration
{
    public NpgsqlConfiguration()
    {
        SetProviderServices("Npgsql", Npgsql.NpgsqlServices.Instance);
        SetProviderFactory("Npgsql", Npgsql.NpgsqlFactory.Instance);
        SetDefaultConnectionFactory(new Npgsql.NpgsqlConnectionFactory());
    }
}

Next define a context class decorated with the custom db config attribute. Ensure the default schema is set to “public” (or the relevant schema name).

Key Points:

  • Ensure the context is decorated with the Npgsql config class
  • Ensure the correct schema name is set in the OnModelCreating method
  • Don’t use an initializer
  • Apply any case/naming conversions needed

Some sort of case conversion will likely need to be made in the OnModelCreating method; in my case I made all column names lowercase and did the necessary column name transformation there. For table names I used decorator attributes on the entity classes.

Context
[DbConfigurationType(typeof(NpgsqlConfiguration))]
public class ExampleContext : DbContext
{
    public DbSet<Example> Examples { get; set; }

    public ExampleContext(string connectionString)
        : base(connectionString)
    {
        this.Configuration.LazyLoadingEnabled = false;
        this.Configuration.ProxyCreationEnabled = false;

        //Helpful for debugging
        this.Database.Log = s => System.Diagnostics.Debug.WriteLine(s);
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        Database.SetInitializer<ExampleContext>(null);
        modelBuilder.HasDefaultSchema("public");
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();

        modelBuilder.Properties().Configure(c =>
        {
            var name = c.ClrPropertyInfo.Name;
            var newName = name.ToLower();
            c.HasColumnName(newName);
        });
    }
}

Migrating Existing Xamarin iOS App to Unified API

| Comments

Recently I had to migrate an existing Xamarin iOS app to the unified API. The following are solutions to a few of the problems I encountered.

After running the automated update command from the Xamarin Studio OSX build menu I encountered the following error on subsequent builds:

The type 'MonoTouch.UIKit.UIViewController' is defined in an assembly that is not
referenced. Consider adding a reference to assembly 'monotouch, Version=0.0.0.0'

If you get a similar error it will probably be necessary to:

  1. Clean the solution.
  2. Reinstall all components.
  3. Rebuild.

    If the issue still persists (as in my case) and you are using Xamarin Studio, you may have to manually delete the old component package files from your file system, then reinstall them after you are sure they have been wiped. Xamarin Studio can store them in multiple locations depending on platform; check https://kb.xamarin.com/customer/portal/articles/1865772-where-are-the-components-stored-on-my-machine- to find the location on your machine.

One other thing to note is the compiled binary size of your app will probably double because the resulting package now contains both 32 and 64 bit versions.

Configuring Android Wear Emulator for Debugging With Physical Device

| Comments

I recently started developing an Android Wear app and needed to pair a watch running in an emulator with a physical phone (an S4). The current documentation online is somewhat stale, so the following is an up-to-date procedure:

  1. Power on the Android Wear watch AVD.
  2. Connect the physical phone via USB and ensure USB debugging is turned on in the system options.
  3. Turn on Bluetooth debugging on the watch:
    • Swipe to get to the settings menu
    • Scroll down to the about menu
    • Tap 7 times on the build number to enable developer mode
    • Navigate back to the main settings menu and select developer options
    • Tap enable ADB debugging
    • Tap enable Bluetooth debugging (disabled until ADB debugging is enabled)
    • On the home screen a notification should be displayed stating Bluetooth debugging is enabled
  4. In an SDK console type “adb devices”; both the physical phone and the watch emulator should be listed.
  5. Type “adb -d forward tcp:5601 tcp:5601” to map the emulator ports.
  6. On the phone, download the Android Wear app from the Play Store.
  7. Open the Wear app and choose pair device.
  8. On the pair device screen, press the physical menu/settings button on the phone. An option to “connect to emulator” will appear; tap it.
  9. The phone should find and connect to the emulator and display the connected options in the Wear app. You can scroll to the bottom and tap one of the test options to send a test notification to the paired watch.

Go Asserts and Multiple Return Values

| Comments

Recently I have been writing many tests for functions with multiple return values. I was hoping to write a single line assertion. Something like the following:

func testableFunc() (int, int, int) {
    return 5, 6, 7
}

assert.Equal([]interface{}{5, 6, 7}, testableFunc())

//Note: Equal()'s signature expects interface{} arguments

This won’t compile: the method signature for Equal takes interface{} parameters, so understandably the compiler can’t figure it out.

After checking the mailing list, it turns out the compiler is smart enough to map multiple return values to function parameters when their types and order match. E.g.:

func testableFunc() (int, int, int) {
    return 5, 6, 7
}

func anotherFunc(a int, b int, c int) int {
    return a + b + c
}

//valid because returned types match arguments
anotherFunc(testableFunc())

With this behavior in mind we can write a shim function to map multiple return values to a slice of values that the assert function can compare. Here is a 2 value shim:

func Shim(a, b interface{}) []interface{} {
    return []interface{}{a, b}
}

It can be used like so:

func testableFunc() (int, int) {
    return 5, 5
}

assert.Equal(t, Shim(5, 5), Shim(testableFunc()))

Now shorten the name and create shims for different numbers of return values.

//Shim for 2 return values
func M(a, b interface{}) []interface{} {
    return []interface{}{a, b}
}

//Shim for 3 return values
func M3(a, b, c interface{}) []interface{} {
    return []interface{}{a, b, c}
}

//Shim for 4 return values
func M4(a, b, c, d interface{}) []interface{} {
    return []interface{}{a, b, c, d}
}

assert.Equal(t, M(5, 5), M(someMethod()))

And finally here is a complete working example.

  import (
      "github.com/stretchr/testify/assert"
      "testing"
  )

  //Shim for 2 param return values
  func M(a, b interface{}) []interface{} {
      return []interface{}{a, b}
  }

  //Shim for 3 param return values
  func M3(a, b, c interface{}) []interface{} {
      return []interface{}{a, b, c}
  }

  //Shim for 4 param return values
  func M4(a, b, c, d interface{}) []interface{} {
      return []interface{}{a, b, c, d}
  }

  func testableFunc() (int, int) {
      return 5, 5
  }

  func otherTestableFunc() (int, int, string) {
      return 6, 7, "hi"
  }

  func TestMulti(t *testing.T) {

      assert.Equal(t, M(5, 5), M(testableFunc()))
      assert.Equal(t, M3(6, 7, "hi"), M3(otherTestableFunc()))

  }

Noise Words

| Comments

Here is a JavaScript array of “noisy” words that may help you with NLP-related algorithms.

Noisy Words
var noiseWords =
["able","available","bad","big","black","central","certain","clear","close","common","concerned","current","different","difficult","due","early","easy","economic","far","final","financial","fine","following","foreign","free","full","general","good","great","happy","hard","high","human","individual","industrial","international","important","large","last","late","legal","likely","line","little","local","long","low","main","major","modern","new","name","national","natural","necessary","nice","normal","old","only","open","other","particular","personal","political","poor","possible","present","previous","prime","private","public","real","recent","red","right","royal","serious","short","significant","simple","similar","single","small","social","sorry","special","strong","sure","true","various","was","white","whole","wide","wrong","young",
"labor","left","dead","specific","total","appropriate","military","basic","original","successful","aware","popular","professional","heavy","top","dark","ready","useful","not","out","up","so","then","more","now","just","also","well","only","very","how","when","as","mean","even","there","down","back","still","here","too","on","turn","where","over","much","is","however","again","never","all","most","about","in","why","away","really","cause","off","always","next","rather","quite","right","often","yet","perhaps","already","least","almost","long","together","are","later","less","both","once","probably","ever","no","far","actually","today","enough","therefore","around","soon","particularly","early","else","sometimes","thus","further","ago","yesterday","usually","indeed","certainly","home","simply","especially","better","either","clearly","instead","round","to","finally","please","forward","quickly","recently","anyway","suddenly","generally","nearly","obviously","though","hard","okay","exactly","above","maybe",
"and","that","help","but","or","as","it","think","than","when","because","so","while","where","although","whether","until","though","since","after","before","nor","unless","once","the","a","form","this","this","that","which","an","their","what","all","her","some","its","my","your","no","these","any","such","our","many","those","own","more","same","each","another","next","most","both","every","much","little","several","half","whose","few","former","whatever","either","less","to","yeah","no","yes","well","will","would","can","could","should","may","must","might","shall","used","come","get","give","go","keep","let","make","put","seem","take","be","do","have","say","see","send","may","will",
"about","across","after","against","among","at","before","between","by","down","from","in","off","on","over","through","to","under","up","with","as","for","of","till","than","a","the","all","any","every","little","much","no","other","some","such","that","this","I","he","you","who","and","because","but","or","if","though","while","how","when","where","why","again","ever","far","forward","here","near","now","out","still","then","there","together","well","almost","enough","even","not","only","quite","so","very","tomorrow","yesterday","north","south","east","west","please","yes",
"able","acid","angry","automatic","beautiful","black","boiling","bright","broken","brown","cheap","chemical","chief","clean","clear","common","complex","conscious","cut","deep","dependent","early","elastic","electric","equal","fat","fertile","first","fixed","flat","free","frequent","full","general","good","great","hanging","happy","hard","healthy","high","hollow","important","kind","like","living","long","male","married","material","medical","military","natural","necessary","new","normal","open","parallel","past","physical","political","poor","possible","present","private","probable","quick","quiet","ready","red","regular","responsible","right","round","same","second","separate","serious","sharp","smooth","sticky","stiff","straight","strong","sudden","sweet","tall","thick","tight","tired","true","violent","waiting","warm","wet","wide","wise","yellow","young",
"awake","bad","bent","bitter","blue","certain","cold","complete","cruel","dark","dead","dear","delicate","different","dirty","dry","false","feeble","female","foolish","future","green","ill","last","late","left","loose","loud","low","mixed","narrow","old","opposite","public","rough","sad","safe","secret","short","shut","simple","slow","small","soft","solid","special","strange","thin","white","wrong","this"];

Go Snippets

| Comments

Some random code snippets I wrote while learning Go.

Bitwise Ops

Flip Bits
//true if n is even
func isEven(n int) bool {
  return n&1 == 0
}

//test if the nth bit is set
func isSet(x int, n uint) bool {
  return (x & (1 << n)) > 0
}

func setBit(x int, n uint) int {
  return x | (1 << n)
}

func unSetBit(x int, n uint) int {
  // ^(1 << n) = all 1s except position n
  return x & ^(1 << n)
}

func toggleBit(x int, n uint) int {
  // ^ = xor: matching bits give 0, differing bits give 1
  // xor-ing twice returns the original value
  return x ^ (1 << n)
}

//turn off rightmost 1-bit
func turnOffRightMost(x int) int {
  return x & (x - 1)
}

//turns off all bits except the rightmost "on" bit
func isolateRightmostBit(x int) int {
  return x & (-x)
}

//produces all 1's if x = 0
func rightPropagateRightmostBit(x int) int {
  return x | (x - 1)
}

//flip all bits in an integer
func flipBits(x int) uint {
  return ^uint(x)
}

Sorting

Counting Sort
func CountingSort(vals []int, maxVal int) {
  histogram := make(map[int]int, maxVal+1)
  for _, val := range vals {
      histogram[val]++
  }

  pos := 0
  for i := 0; i <= maxVal; i++ {
      if _, ok := histogram[i]; ok {
          for j := 0; j < histogram[i]; j++ {
              vals[pos] = i
              pos++
          }
      }
  }
}
Radix Sort
func RadixSort(vals []int) {
  m := vals[0]
  const base = 10

  for _, val := range vals {
      if val > m {
          m = val
      }
  }

  exp := 1
  for m/exp > 0 {
      bucket := make([][]int, base)
      //count keys
      for i := 0; i < len(vals); i++ {
          key := (vals[i] / exp) % base
          bucket[key] = append(bucket[key], vals[i])
      }
      idx := 0
      for i := 0; i < len(bucket); i++ {
          for j := 0; j < len(bucket[i]); j++ {
              vals[idx] = bucket[i][j]
              idx++
          }
      }
      exp *= base
  }
}
Insert Sort
func InsertSort(vals []int) {
  for i := range vals {
      j := i
      for j > 0 && vals[j-1] > vals[j] {
          vals[j], vals[j-1] = vals[j-1], vals[j]
          j--
      }
  }
}
Merge Sort
func merge(vals []int, helper []int, low int, middle int, high int) {
  //copy both halves into helper slice
  for i := low; i <= high; i++ {
      helper[i] = vals[i]
  }

  helperLeft := low
  helperRight := middle + 1
  current := low

  //iterate through helper array. Compare left and right
  // half, copying the smaller of the 2 into original
  for helperLeft <= middle && helperRight <= high {
      if helper[helperLeft] <= helper[helperRight] {
          vals[current] = helper[helperLeft]
          helperLeft++
      } else {
          vals[current] = helper[helperRight]
          helperRight++
      }
      current++
  }

  // Copy the rest of the left side of the array into the
  // target array.
  remaining := middle - helperLeft
  for i := 0; i <= remaining; i++ {
      vals[current+i] = helper[helperLeft+i]
  }

}

func mergesort(vals []int, helper []int, low int, high int) {
  if low < high {
      middle := (low + high) / 2
      mergesort(vals, helper, low, middle)
      mergesort(vals, helper, middle+1, high)
      merge(vals, helper, low, middle, high)
  }
}

func MergeSort(vals []int) {
  helper := make([]int, len(vals))
  mergesort(vals, helper, 0, len(vals)-1)
}
Quick Sort
func partition(vals []int, left int, right int) int {
  middle := (left + right) / 2
  pivotVal := vals[middle]

  //swap middle/right
  vals[middle], vals[right] = vals[right], vals[middle]

  lessPointer := left
  for greaterPointer := left; greaterPointer < right; greaterPointer++ {
      if vals[greaterPointer] <= pivotVal {
          vals[greaterPointer], vals[lessPointer] = vals[lessPointer], vals[greaterPointer]
          lessPointer++
      }
  }

  vals[lessPointer], vals[right] = vals[right], vals[lessPointer]

  return lessPointer
}

func quickSort(vals []int, min int, max int) {
  if min < max {
      p := partition(vals, min, max)
      quickSort(vals, min, p-1)
      quickSort(vals, p+1, max)
  }
}

func QuickSort(vals []int) {
  quickSort(vals, 0, len(vals)-1)
}

Data Structures

Heap
type Heap struct {
  data []int
}

func NewHeap(size int) *Heap {
  h := new(Heap)
  h.data = make([]int, 0, size)
  return h
}

//moves element up to its proper position
//respecting the heap property
func (h *Heap) up(idx int) {
  for {
      parent := (idx - 1) / 2
      if idx == parent || h.data[idx] < h.data[parent] {
          //at the root or idx is in its proper position
          break
      }
      //Swap with parent
      h.data[parent], h.data[idx] = h.data[idx], h.data[parent]
      idx = parent
  }
}

func (h *Heap) down(idx int) {
  end := len(h.data)
  for {
      //start with left child
      child := 2*idx + 1

      if child >= end {
          break
      }

      if child+1 < end && h.data[child] < h.data[child+1] {
          //use right child
          child++
      }

      if h.data[child] < h.data[idx] {
          //proper position
          break
      }

      //swap parent with child
      h.data[child], h.data[idx] = h.data[idx], h.data[child]
      //continue down the tree
      idx = child
  }
}

func (h *Heap) Pop() int {
  result := h.data[0]
  h.data[0] = h.data[len(h.data)-1]
  h.data = h.data[:len(h.data)-1]
  h.down(0)

  //shrink when 3/4 of the capacity is unused
  if len(h.data) <= cap(h.data)/4 {
      h.resize(cap(h.data) / 2)
  }
  return result
}

func (h *Heap) resize(size int) {
  temp := make([]int, len(h.data), size)
  copy(temp, h.data)
  h.data = temp
}

func (h *Heap) Push(v int) {
  h.data = append(h.data, v)
  h.up(len(h.data) - 1)
  //grow when cap hit
  if len(h.data) == cap(h.data) {
      h.resize(len(h.data) * 2)
  }
}

Linked List
type Node struct {
  Data int
  Next *Node
}

func (n *Node) Append(newNode *Node) {
  temp := n
  for temp.Next != nil {
      temp = temp.Next
  }

  temp.Next = newNode
}

//returns the new head of the reversed list
func reverseLinkedList(node *Node) *Node {
  var prev *Node
  current := node

  for current != nil {
      next := current.Next
      current.Next = prev
      prev = current
      current = next
  }
  return prev
}
Stack
//simple stack structure
type stack []int

func (s stack) Empty() bool { return len(s) == 0 }
func (s stack) Peek() int   { return s[len(s)-1] }
func (s *stack) Put(i int)  { (*s) = append((*s), i) }
func (s *stack) Pop() int {
    d := (*s)[len(*s)-1]
    (*s) = (*s)[:len(*s)-1]
    return d
}

Misc


Binary Search
//Binary search
func Search(vals []int, target int) int {
    return binarySearch(vals, target, 0, len(vals)-1)
}

func binarySearch(vals []int, target int, start int, end int) int {
    if start > end {
        //not found
        return -1
    }

    middle := (start + end) / 2
    value := vals[middle]

    if value > target {
        return binarySearch(vals, target, start, middle-1)
    }

    if value < target {
        return binarySearch(vals, target, middle+1, end)
    }

    return middle //found
}
Reverse Polish Calculator
//reverse polish notation calculator
//takes a string like "34+2*"
//prints the result, e.g. 14
func ReversePolishCalculator(expression string) {
  var s stack
  for _, c := range expression {
      switch c {
      case '+':
          b := s.Pop()
          a := s.Pop()
          s.Put(a + b)
      case '-':
          b := s.Pop()
          a := s.Pop()
          s.Put(a - b)
      case '/':
          b := s.Pop()
          a := s.Pop()
          s.Put(a / b)
      case '*':
          b := s.Pop()
          a := s.Pop()
          s.Put(a * b)
      default:
          s.Put(int(c - '0'))
      }
  }

  //result
  fmt.Printf("%v=%v", expression, s.Pop())
}

Porting Nupic to Go

| Comments

Recently I ported the core parts of the Nupic project to Go.

Nupic is Numenta’s current open source implementation of Jeff Hawkins’s hierarchical temporal memory (HTM) model. It currently consists of the CLA (cortical learning algorithm), which is a single stage/layer of the HTM, implemented in a mix of Python and C++.

In an effort to better understand the difference between the current implementation and the whitepaper, I decided to try porting the spatial and temporal poolers to Go. Porting line by line gave me the opportunity to understand the design better, as well as its dependencies: Python, numpy, etc.

One of the more difficult parts of this project was interpreting the numpy expressions and translating them into a statically typed language. A few nested numpy expressions can easily end up being tens of lines of Go.

The result is 2 simple APIs for the spatial and temporal poolers, which are go-gettable.

    go get github.com/zacg/htm
    go get github.com/zacg/htm/utils
Spatial Pooler Example
ssp := htm.NewSpParams()
ssp.ColumnDimensions = []int{64, 64}
ssp.InputDimensions = []int{32, 32}
ssp.PotentialRadius = ssp.NumInputs()
ssp.NumActiveColumnsPerInhArea = int(0.02 * float64(ssp.NumColumns()))
ssp.GlobalInhibition = true
ssp.SynPermActiveInc = 0.01
ssp.SpVerbosity = 10
sp := htm.NewSpatialPooler(ssp)

activeArray := make([]bool, sp.NumColumns())
inputVector := make([]bool, sp.NumInputs())

for idx := range inputVector {
    inputVector[idx] = rand.Intn(5) >= 2
}

sp.Compute(inputVector, true, activeArray, sp.InhibitColumns)

fmt.Println("Active Indices:", utils.OnIndices(activeArray))
Temporal Pooler Example
package main

import (
    "fmt"
    "github.com/zacg/htm"
    "github.com/zacg/htm/utils"
)

func main() {
    tps := htm.NewTemporalPoolerParams()
    tps.Verbosity = 0
    tps.NumberOfCols = 50
    tps.CellsPerColumn = 2
    tps.ActivationThreshold = 8
    tps.MinThreshold = 10
    tps.InitialPerm = 0.5
    tps.ConnectedPerm = 0.5
    tps.NewSynapseCount = 10
    tps.PermanenceDec = 0.0
    tps.PermanenceInc = 0.1
    tps.GlobalDecay = 0
    tps.BurnIn = 1
    tps.PamLength = 10
    tps.CollectStats = true
    tp := htm.NewTemporalPooler(*tps)

    //Mock encoding of ABCDE
    inputs := make([][]bool, 5)
    inputs[0] = boolRange(0, 9, 50)   //bits 0-9 are "on"
    inputs[1] = boolRange(10, 19, 50) //bits 10-19 are "on"
    inputs[2] = boolRange(20, 29, 50) //bits 20-29 are "on"
    inputs[3] = boolRange(30, 39, 50) //bits 30-39 are "on"
    inputs[4] = boolRange(40, 49, 50) //bits 40-49 are "on"

    //Learning and prediction can be done at the same time

    //Learn 5 sequences above
    for i := 0; i < 10; i++ {
        for p := 0; p < 5; p++ {
            tp.Compute(inputs[p], true, false)
        }
        tp.Reset() //not required
    }

    //Predict sequences
    for i := 0; i < 4; i++ {
        tp.Compute(inputs[i], false, true)
        p := tp.DynamicState.InfPredictedState

        fmt.Printf("Predicted: %v From input: %v \n", p.NonZeroRows(), utils.OnIndices(inputs[i]))
    }

}

//helper method for creating boolean sequences
func boolRange(start int, end int, length int) []bool {
    result := make([]bool, length)
    for i := start; i <= end; i++ {
        result[i] = true
    }
    return result
}

You can grab the code @ https://github.com/zacg/htm

Encoding Data in DNA With Go

| Comments

As a recent programming exercise I wrote a Go library that allows encoding/decoding arbitrary data in DNA segments.

The encoding algorithm is based on the method described in this Nature paper: http://www.nature.com/nature/journal/v494/n7435/full/nature11875.html. Pseudocode and details can be found here: http://www.nature.com/nature/journal/vaop/ncurrent/extref/nature11875-s2.pdf

Usage Example:

Encoding
str := "some string to encode in DNA"
encoded := dna.Encode(str)
fmt.Println("Result:", encoded)

The resulting string is a valid DNA sequence.

Sequences can be decoded back to human readable text the same way:

Decoding
encoded := "ATAGTATATCGACTAGTACAGCGTAGCATCTCGCAGCGAGATACGCTGCTACGCAGCATGCTGTGAGTATCGATGACGAGTGACTCTGTACAGTACGTACGATACGTACGTACGTCGTATAGTCGTACGTACGTACGTACGTACGTACGTACTGTACAGAGTCACTCGTCATCGATACTCACAGCATGCTGCGTAGCAGCGTATCTCGCTGCGAGATGATACGTACGTACGAGC"
str := dna.Decode(encoded)
fmt.Println("Result:", str)

Source on github: https://github.com/zacg/dna

New Nupic Cerebro Docker Image

| Comments

CLA and hierarchical temporal memory are finally starting to grow in popularity, and an open source community is forming around the open source Nupic framework. Currently Cerebro is the best (and only) tool for prototyping CLA models; it allows you to visually dissect your work.

Because Cerebro is a webapp built on MongoDB and Python, it requires a bit of work to get set up. To make it easier for newcomers to get up and running quickly, I created a Dockerfile containing all the dependencies required to run Cerebro. Once the Docker image is built, Cerebro can be run with one command. (Eventually users will be able to pull an official built image from Docker's index.)

The Dockerfile is now located in the Cerebro repository: https://github.com/numenta/nupic.cerebro/blob/master/Dockerfile

To build the docker image:

 sudo docker build -t="nupic.cerebro" .

To run Cerebro on port: 1955:

 sudo docker run -p=1955:1955 nupic.cerebro

Simply navigate to http://localhost:1955 to start using Cerebro. For an introductory video, check out: http://youtu.be/WQWU1K5tE5o

Android Using Custom Header Title

| Comments

Trying to configure a custom header/title layout with the newer Android Holo theme can be painful, often producing cryptic error messages. According to Google, the issue is the new action bar added in the Holo theme conflicting with the older title configuration.

Here is what worked for me: after creating a layout named “CustomHeader.xml” I added the following lines to my activity's OnCreate method:

protected override void OnCreate (Bundle bundle)
{
    base.OnCreate (bundle);
    RequestWindowFeature(WindowFeatures.CustomTitle);
    SetContentView (Resource.Layout.Main);
    Window.SetFeatureInt (WindowFeatures.CustomTitle, Resource.Layout.CustomHeader);
}

I then modified my AndroidManifest.xml and styles.xml files to configure a custom theme that inherits from the original Holo theme but removes the action bar.

AndroidManifest.xml
<manifest>
  <application android:theme="@style/CustomTheme"></application>
</manifest>
styles.xml
<?xml version="1.0" encoding="UTF-8" ?>
<resources>
  <style name="CustomTheme" parent="android:Theme.Holo">
    <item name="android:windowActionBar">false</item>
  </style>
</resources>

If you don’t already have a styles.xml file, it should be created in /resources/values/

If you try setting a custom header/title without removing the actionbar you will get the following runtime exception: “Cannot combine custom title with other title features”

Azure Log4Net Appender

| Comments

Recently, when deploying an existing codebase to Azure, I needed to configure log4net to write to Azure storage services.

After some quick googling I found a NuGet package that seemed fairly well maintained and had a modest number of users according to the NuGet download stats. The package is here: http://www.nuget.org/packages/log4net.Appender.Azure/ or type the following in the package manager console:

 Install-Package log4net.Appender.Azure

However, I immediately ran into a few issues. First, there were problems building it with my project, as it depended on older frameworks such as an older log4net. Additionally, when running the table appender in a real application with multiple concurrent loggers, I began to get errors related to the way it submitted batch inserts to the storage service.

I was able to fork the project on GitHub, update its dependencies, and fix the batch processing issue; you can find my branch here: https://github.com/zacg/log4net.Azure I have submitted a pull request for these changes, so hopefully they will appear in the next NuGet package release.

Once referenced, just choose your preferred storage method and set your connection string like so:

Azure Appender Config Example
<log4net>
    <appender name="TableAppender" type="log4net.Appender.AzureTableAppender, log4net.Appender.Azure">
      <param name="TableName" value="testLoggingTable"/>
      <param name="ConnectionString" value="UseDevelopmentStorage=true"/>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
      </layout>
    </appender>
    <appender name="BlobAppender" type="log4net.Appender.AzureBlobAppender, log4net.Appender.Azure">
      <param name="ContainerName" value="testloggingblob"/>
      <param name="DirectoryName" value="logs"/>
      <param name="ConnectionString" value="UseDevelopmentStorage=true"/>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline"/>
      </layout>
    </appender>
    <root>
      <level value="INFO"/>
      <appender-ref ref="TableAppender"/>
      <appender-ref ref="BlobAppender"/>
    </root>
  </log4net>

If you ever have to debug log4net appenders, here are a few snippets that may help. The first is a function to flush the log buffers. The second turns on log4net's internal debug logging, which writes to System.Diagnostics.Trace; just place the app setting in your config file. The last snippet is a setting to write your trace logs to a file; again, place it in your config file.

Flush Buffers
//From: Alconja @ http://stackoverflow.com/questions/2045935/is-there-anyway-to-programmably-flush-the-buffer-in-log4net
public void FlushBuffers()
{
    ILoggerRepository rep = LogManager.GetRepository();
    foreach (IAppender appender in rep.GetAppenders())
    {
        var buffered = appender as BufferingAppenderSkeleton;
        if (buffered != null)
        {
            buffered.Flush();
        }
    }
}
Log log4net Debugging Info
  <appSettings>
    <add key="log4net.Internal.Debug" value="true"/>
  </appSettings>
Dump Trace Logs to File
<system.diagnostics>
  <trace>
    <listeners>
      <add
        name="textWriterTraceListener"
        type="System.Diagnostics.TextWriterTraceListener"
        initializeData="C:\dev\log4net.txt" />
    </listeners>
  </trace>
</system.diagnostics>

And finally, a list of the errors encountered and fixed, to help out random Googlers.

  • Could not load file or assembly ‘log4net, Version=1.2.12.0, Culture=neutral, PublicKeyToken=669e0ddf0bb1aa2a’ or one of its dependencies.
  • WRN: Comparing the assembly name resulted in the mismatch: Build Number
  • All entities in a single batch operation must have the same partition key

I ran into a few roadblocks along the way but ended up with a working log4net appender that can write logs to Azure storage services with a few simple configuration settings. If you have any comments feel free to share below.

Source Code Here: https://github.com/zacg/log4net.Azure

Checking Nested Models in Backbone Forms

| Comments

When defining custom form templates in backbone forms you may want to conditionally include content based on whether or not the form is nested. The following code will allow you to check using the template markup:

<% if(this.options.fieldTemplate != "nestedField") { %>
  //form is not nested
<% } else { %>
  //form is nested inside another
<% } %>

This check will enable you to write fewer, more reusable form templates. The following changes to the default Bootstrap template included with backbone forms allow you to include form submit and cancel buttons by setting a template data flag.

Form.template = _.template('\
    <div>\
    <form class="form-horizontal" data-fieldsets>\
    </form>\
    <% if(submitbtn && this.options.fieldTemplate != "nestedField") { %>\
      <button class="btn btn-primary createBtn" >Create</button>\
      <button class="btn btn-danger cancelBtn" >Cancel</button>\
    <% } %>\
    </div>\
  ');

//Setting the submitbtn flag in the templateData
//parameter will cause the buttons to appear on the form

var ExampleForm = Backbone.Form.extend({
      templateData: { submitbtn: true },
     });

Golang Type Comparisons and Struct Initialization Using Reflection

| Comments

While writing some basic CRUD code for a recent project, I decided to create a base struct containing the fields common to all of my data entities (id, date_created, etc.). I then embedded this struct in all data entities and tagged it as inline so the JSON/BSON marshalers would treat it accordingly. A problem arose when I wanted to pass newly created data entities from clients into a JSON webservice. Normally when instantiating a data entity struct I would use the associated creation method (NewSomeEntity()), which sets the appropriate id, created date, and so on; the JSON marshaler, however, is not smart enough to do this as it builds the object graph. If the object graph is only one level deep, you can just run an init function on the new object returned from the marshaler, but when the object contains n levels (n-many relationships) this becomes a problem.

I had two options: I could implement custom marshal interfaces for every data entity struct, or I could write a function that reflects over the object graph after the JSON marshaler has built it and runs my initialization function against any uninitialized base entity structs. I decided to go with the latter option.

There are a few key functions needed to achieve the method described above, mainly: reflecting over an object to get a list of its fields, checking the type of a reflected field against your base struct, checking whether the reflected field value is uninitialized (or, in the case of a pointer, nil), and finally setting the value of an empty field to an initialized struct.

Here are some code examples:

//base struct for all data entities
type Entity struct {
    Id        Uuid `bson:"_id,omitempty" json:"id"`
    CreatedOn time.Time
}

//initialization function
func NewEntity() Entity {
    return Entity{CreatedOn: time.Now(), Id: NewUuid()}
}

Reflect over an object and get a list of its fields

obj := Entity{}
//retrieve the struct value; pass a pointer so fields are addressable/settable
r := reflect.ValueOf(&obj).Elem()

//iterate over fields
for i := 0; i < r.NumField(); i++ {
    f := r.Field(i)
    _ = f //inspect/compare f here
}

Compare reflected type

if f.Type() == reflect.TypeOf(Entity{}) {
  //reflected type is of type "Entity"
}

Checking for uninitialized/empty struct

if f.Interface().(Entity) == (Entity{}) {
  //reflected field is an uninitialized Entity struct
}

Checking for nil pointer - if you are using pointers you may need to check for a nil pointer rather than an uninitialized struct.

if f.Kind() == reflect.Ptr && f.IsNil() {
  //reflected field is a nil pointer
}

Finally, once an empty field is found, set it to an initialized Entity struct:

//sets field f to an initialized Entity struct
f.Set(reflect.ValueOf(NewEntity()))

With the above snippets you can easily build a custom function that iterates over your object graph and initializes any empty structs.
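Putting the snippets together, one possible sketch of such a walker follows. Assumptions made to keep the example runnable: the custom Uuid type is replaced with a plain string, NewEntity fakes id generation, and only pointer, slice, and struct kinds are handled; a real version would cover whatever kinds appear in your object graph.

```go
package main

import (
	"fmt"
	"reflect"
	"time"
)

// Entity stands in for the base struct; a plain string replaces Uuid here.
type Entity struct {
	Id        string
	CreatedOn time.Time
}

// NewEntity fakes id generation so the example is self-contained.
func NewEntity() Entity {
	return Entity{Id: "generated-id", CreatedOn: time.Now()}
}

// InitEntities walks the object graph behind ptr and replaces any
// zero-valued Entity fields with an initialized one.
func InitEntities(ptr interface{}) {
	initValue(reflect.ValueOf(ptr).Elem())
}

func initValue(v reflect.Value) {
	switch v.Kind() {
	case reflect.Ptr:
		if !v.IsNil() {
			initValue(v.Elem())
		}
	case reflect.Slice:
		for i := 0; i < v.Len(); i++ {
			initValue(v.Index(i))
		}
	case reflect.Struct:
		// found a base entity: initialize it if it is still zero-valued
		if v.Type() == reflect.TypeOf(Entity{}) {
			if v.CanSet() && v.Interface().(Entity) == (Entity{}) {
				v.Set(reflect.ValueOf(NewEntity()))
			}
			return
		}
		// otherwise recurse into the struct's fields
		for i := 0; i < v.NumField(); i++ {
			initValue(v.Field(i))
		}
	}
}

type Item struct {
	Entity
	Name string
}

type Order struct {
	Entity
	Items []Item
}

func main() {
	// simulate what the JSON unmarshaler hands back: entities with zero base fields
	o := Order{Items: []Item{{Name: "widget"}}}
	InitEntities(&o)
	fmt.Println(o.Id, o.Items[0].Id)
}
```

Passing a pointer into InitEntities matters: reflect can only Set fields reached through an addressable value.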

Beware When Installing Swig From Debian Package

| Comments

Recently, while generating Go bindings for a C++ library with SWIG, I began noticing memory issues. Specifically, when memory pressure was placed on the application, output parameters of standard types like std::vector were being randomly deallocated. After triple-checking my SWIG template files for the correct wrapping signatures and reviewing the generated Go and C/C++ code, I was at a loss.

After reviewing the SWIG project commit log and noticing some fixes for Go-related issues, I remembered I had installed SWIG from the Debian package manager. Sure enough, after running "swig -version" my SWIG version was 2.0.7 (the current version at the time of writing was 2.0.10), which lacks many bug fixes related to Go. Downloading and installing 2.0.10 from the SWIG website solved my memory issues.

Solution:

  1. Check the installed SWIG version:

         swig -version
    
  2. If your version is older than the latest release: uninstall the packaged version, then download and install the current release from the website: http://swig.org

File Parameter Support for Portable Restsharp Library

| Comments

When developing cross-platform mobile apps with Xamarin I like to keep as much common code as possible in a shared portable class library (PCL) for easy reuse. This includes any webservice calls and the associated boilerplate code. I have been using a fork of the RestSharp library which has been modified to be PCL compliant. This strategy was working fine until recently, when I needed to start uploading files through webservices. The RestSharp library has robust file upload support, but because it is often tied to OS-specific file operations it was left out of the PCL port. As a workaround I simply imported the necessary file upload code from the original library, using raw bytes and streams as interfaces rather than filenames, leaving it up to the caller to implement the file loading in their native OS code.

My fork can be found on github at: https://github.com/zacg/geoserver-csharp

Xamarin Simple Text List View Helper

| Comments

Recently, while working on a Xamarin-based Android project, I came up with a handy helper class for displaying basic list views. It saves a lot of boilerplate code by not forcing a new adapter implementation for every domain object you want to use in a list view.

The code:

And to use it:

ListAdapter = new SimpleTextAdapter<SomeBusinessObject>(this, objects, (item) => { return item.Name; });

Use it with a custom id field:

ListAdapter = new SimpleTextAdapter<SomeBusinessObject>(this, objects, (item) => { return item.Name; }, (item) => { return item.CustomId; });

Running Qt Creator Build Commands as Sudo

| Comments

Recently I had a requirement to install a shared library as a build/deployment step in a QtCreator project, which of course required sudo permissions. Not surprisingly Qt Creator does not simply let you prepend “sudo” to a custom build step, here is the workaround I found:

First I moved all the commands that needed to run with sudo into a single makefile like the following; mine was called InstallLib.make:

InstallLib.make
install:
  @echo "Installing go shared lib..."
  sudo cp -f libImgSearch.so.1.0.0 /usr/local/lib/
  sudo cp -f libImgSearch.so.1.0 /usr/local/lib/
  sudo cp -f libImgSearch.so.1 /usr/local/lib/
  sudo cp -f libImgSearch.so /usr/local/lib/
  sudo ldconfig

Next, in Qt Creator with your project open, go to the Projects section and add a new "Custom Process" build step. In the command field type "ssh-askpass"; this program will pop up a widget for entering the sudo password when executed. In the build step arguments field enter: "Sudo Password | sudo -S make -f InstallLib.make". This will make ssh-askpass execute the InstallLib.make makefile when a correct sudo password is provided.

If you prefer not to use the GUI, you could also edit your project's .user file and add some XML similar to the following:

project.user
   <valuemap type="QVariantMap" key="ProjectExplorer.BuildStepList.Step.10">
      <value type="bool" key="ProjectExplorer.BuildStep.Enabled">true</value>
      <value type="QString" key="ProjectExplorer.ProcessStep.Arguments">Sudo Password | sudo -S make -f InstallLib.make</value>
      <value type="QString" key="ProjectExplorer.ProcessStep.Command">ssh-askpass</value>
      <value type="QString" key="ProjectExplorer.ProcessStep.WorkingDirectory">%{buildDir}</value>
      <value type="QString" key="ProjectExplorer.ProjectConfiguration.DefaultDisplayName">Custom Process Step</value>
      <value type="QString" key="ProjectExplorer.ProjectConfiguration.DisplayName"></value>
      <value type="QString" key="ProjectExplorer.ProjectConfiguration.Id">ProjectExplorer.ProcessStep</value>
     </valuemap>

Binary Combinations in Javascript

| Comments

While writing test coverage for a recent JavaScript project I needed to test every possible call to a function with a large number of binary parameters. Here is the function I came up with to generate the combinations:

It's pretty straightforward: pass in the length (n) of binary digits and it will return an array of all possible combinations as boolean values.
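The embedded snippet no longer renders here, so as a sketch of the same idea (in Go rather than the original JavaScript): count from 0 to 2^n - 1 and read off each counter's bits as booleans.

```go
package main

import "fmt"

// BinaryCombinations returns all 2^n combinations of n boolean values,
// treating each integer from 0 to 2^n-1 as a bit pattern.
func BinaryCombinations(n int) [][]bool {
	total := 1 << uint(n)
	combos := make([][]bool, 0, total)
	for i := 0; i < total; i++ {
		row := make([]bool, n)
		for bit := 0; bit < n; bit++ {
			// row[bit] is true when bit `bit` of the counter i is set
			row[bit] = i&(1<<uint(bit)) != 0
		}
		combos = append(combos, row)
	}
	return combos
}

func main() {
	// 2 digits -> 4 combinations
	for _, c := range BinaryCombinations(2) {
		fmt.Println(c)
	}
}
```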

Lowercase JSON Fields With Golang

| Comments

The base Go libraries provide a handy function for marshaling structs into JSON. I recently came across an issue when writing webservices in Go for an existing JavaScript client: the client expected the JSON data to have field names starting with lowercase letters. Go's naming convention is obviously going to make all struct fields uppercase by default, as they need to be exported. I ended up copying the JSON marshaler code from Go's standard library and modifying it with a new parameter that lowercases JSON field names when set. Because Go's source isn't hosted on GitHub and I was strapped for time, I just copied the code into a new util package and made the modifications there, as a couple of other gophers in #go-nuts were interested in using it.

Usage is simple: when the 2nd parameter is set to true, all field names will start with a lowercase letter (other capitalization remains unchanged):

b, err = jsonutils.Marshal(<some obj>, <lowercase fieldnames:true/false>)

The source code can be downloaded from github: https://github.com/zacg/goutils
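Worth noting: when you control the struct definitions, the standard encoding/json struct tags can rename fields individually without a modified marshaler; the parameter above is mainly useful when tagging every field isn't practical. A minimal sketch of the tag approach (User is an illustrative type, not from the post):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Each exported field is mapped to a lowercase JSON name via its struct tag.
type User struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// MarshalUser returns the JSON encoding with lowercase field names.
func MarshalUser(u User) string {
	b, _ := json.Marshal(u)
	return string(b)
}

func main() {
	fmt.Println(MarshalUser(User{Name: "Zac", Email: "zac@example.com"}))
	// prints {"name":"Zac","email":"zac@example.com"}
}
```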

And here is some boilerplate code to use it in a Revel controller:

Calling C++ Code From Go With SWIG

| Comments

Recently, while working on a Go-based project, I needed to use some functionality from another large C++ library. The library's size and complexity made rewriting it in Go infeasible. After some research I decided to use the popular SWIG (Simplified Wrapper and Interface Generator) framework to enable interop between my two projects.

The following is a brief tutorial to guide you through wrapping a C++ class with a Go package which can be called from any Go program.

Nested Folders in Qt Creator

| Comments

At first glance Qt Creator seems to be a feature-rich IDE; however, it still lacks many basic features, such as being able to add subfolders for code to projects through the project explorer window. The following tutorial demonstrates how to get around this IDE limitation by adding the folders manually.

The goal is to achieve the folder structure below, "Project" being the root and "ModuleA" being the folder we need to add.

--Project
--/ModuleA/
----a.cpp
----a.h
----b.cpp
----b.h
--main.cpp

Start by navigating to the project's root directory and creating a new folder:

 mkdir ModuleA

Create a new file inside named "ModuleA.pri". If you are going to copy existing files into this new folder you need to add them to the .pri file. (Adding new files can be done through the Qt Creator GUI once the project is configured properly.)

Note: the path names must be relative to the project root.

title:ModuleA.pri
SOURCES += ModuleA/a.cpp \
    ModuleA/b.cpp

HEADERS += ModuleA/a.h \
    ModuleA/b.h

Now we need to include the new .pri file for the folder in the project configuration. Open up the .pro file located in the project root directory and add the following line:

title:Project.pro
include(ModuleA/ModuleA.pri)

Reload the project; the new ModuleA folder should now be visible as a subfolder in your main project. Right-clicking on it will give you the option to add new files. Simply repeat the above process for each new subfolder you wish to add.

jqGrid Inline Editing With asp.net MVC

| Comments

I am a frequent user of the popular jQuery plugin jqGrid. It comes with a large feature set for viewing and manipulating tabular data in the browser. When I am working on an ASP.NET MVC project I work with it via Robin van der Knaap's lightweight HTML helper library: https://github.com/robinvanderknaap/MvcJqGrid . It has strongly typed HTML helpers and a handy model binder for handling async grid functions.

Recently I required jqGrid's inline editing feature, which is not supported in the MvcJqGrid library, so via the power of GitHub I went ahead and added it: https://github.com/zacg/MvcJqGrid.

The syntax follows the existing MvcJqGrid builder pattern and is very straightforward:

@(Html.Grid("editing")
    .SetCaption("Inline Editing")
    .AddColumn(new Column("CustomerId")
        .SetLabel("Id")
        .SetCustomFormatter("buttonize"))
    .AddColumn(new Column("Name")
        .SetFormatter(Formatters.Email)
        .SetEditable(true)
        .SetEditType(EditType.Text))
    .AddColumn(new Column("Company")
                .SetEditable(true)
                .SetEditType(MvcJqGrid.Enums.EditType.Select)
                .SetEditOptions(new EditOptions() { Value = "0:Twitter; 1:Google; 2:Microsoft; 3:Cisco" }))
    .AddColumn(new Column("EmailAddress")
        .SetFormatter(Formatters.Email)
        .SetEditable(true)
        .SetEditType(EditType.Text)
        .SetEditRules(new EditRules() { Email = true }))
    .AddColumn(new Column("Last Modified"))
    .AddColumn(new Column("Telephone"))
    .SetUrl(Url.Action("GridDataBasic"))
    .SetAutoWidth(true)
    .SetRowNum(10)
    .SetRowList(new[] { 10, 15, 20, 50 })
    .SetViewRecords(true)
    .SetPager("pager"))

<script type="text/javascript">
    function buttonize(cellvalue, options, rowobject) {
        return '<input type="button" value="Edit" onclick="edit(' + options.rowId + ')">';
    }
    function edit(id) {
        $("#editing").jqGrid("editRow", id, true);
    }
</script>

I have submitted a pull request for my additions so it will become part of the core library shortly, if you can’t wait that long just clone my fork here: https://github.com/zacg/MvcJqGrid

Notes on C++ for C# Developers

| Comments

After primarily working in C# for the last 5 years I recently switched back to C++ for a large project. I am going to use this article to post notes/gotchas/tips as I come across them. I already have a few to add, and will update as I go along. Hopefully they help some C# programmers out there. They are listed in no particular order.

C++ does not support template type constraints/guards on template definitions

Gotcha: C++ does not support template type constraints/guards on template definitions. I ended up using static asserts; the example shown below uses helper methods from the popular Boost library (the built-in static_assert requires C++11).

C#
class TestSuite<T>
  where T : SomeType
{

}
C++
template<class Case>
class TestSuite
{

 BOOST_STATIC_ASSERT((boost::is_base_of<ITestCase, Case>::value));
}

By default, inheritance is private in C++

C#
class b
  : a
{
}

Don’t forget the “public” declaration if you are used to c#.

C++
class b
  : public a
{

}

Member access specified at group level

In C#, the access modifier is specified on each member:

C#
public string SomeProp {get; set;}
public string GetSomeOtherProp();

In C++, access can be specified for a group of members:

C++
public:
  string GetSomeProp();
  string GetSomeOtherProp();

C++ supports multiple inheritance

Unlike C#, C++ supports multiple inheritance. While this may seem like a benefit, it should be used very rarely. If you find your object model requires multiple inheritance it is best to re-evaluate your design, or you will end up facing issues like the diamond of death.

Rule of 3

Remember the rule of 3. If you require any of the following then you should explicitly define all three.

  • copy constructor
  • copy assignment operator
  • destructor

Temporaries with Parameterless Constructors

Don't include parentheses on parameterless constructors; the compiler will favour resolving the statement as a function declaration (the "most vexing parse") and you will end up with an error message like:

error: request for member 'method' in 'someObj', which is of non-class type 'someType()'
Parentheses can still be used with the "new" keyword
//bad
someObj obj();

//good
someObj obj;

STL containers copy their values.

When adding items to STL containers, remember the container makes a copy and keeps track of its own copy of the item being added.

Follow the Virtual Constructor Idiom

When creating abstract base classes follow the virtual constructor idiom and create virtual clone and create methods. This allows you to create collections of base types while still being able to copy them without knowing their concrete type.

Dynamic Linking Libraries

Here are some tips and tools for working with dynamically linked libraries and shared objects.

  • Shared library files to be registered must start with “lib”
  • Use ldconfig -v to see registered libraries
  • Use ldconfig to reload linked libraries
  • Export LD_LIBRARY_PATH to temporarily point an executable at your lib; handy for scripting.
  • Use the ldd command to view an executable's dependencies.
  • Use nm --demangle to view a shared object's exported symbol list.
  • Use c++filt to decode mangled names when tracking down runtime symbol lookup errors.

Can’t call virtual methods from constructor

Virtual methods called from within a constructor will not dispatch to derived overrides. This usually means you have to use a two-stage approach to initializing derived classes.

Avoid importing namespaces

It is bad practice to use "using namespace <...>;" at file scope. When including third-party libraries there can be naming conflicts which become a pain to track down in a large project.

Setup Opencv Project With Qt Creator on Linux

| Comments

The following tutorial will show you how to set up a console project in Qt Creator for OpenCV-based projects. It assumes you have already installed the OpenCV library.

If not already installed, install Qt Creator:

sudo apt-get install qtcreator

Open Qt Creator and create a new console application project.

Add the following lines to the .pro file:

INCLUDEPATH += /usr/local/include/opencv2/

LIBS += -L /usr/local/lib/
LIBS += -lopencv_core
LIBS += -lopencv_nonfree
LIBS += -lopencv_imgproc
LIBS += -lopencv_highgui
LIBS += -lopencv_ml
LIBS += -lopencv_video
LIBS += -lopencv_features2d
LIBS += -lopencv_calib3d
LIBS += -lopencv_objdetect
LIBS += -lopencv_contrib
LIBS += -lopencv_legacy
LIBS += -lopencv_flann
LIBS += -lboost_system
LIBS += -lboost_filesystem

By default Qt Creator will add some message-pump related code to your main.cpp file; this can be commented out if you are just writing a console application (leaving it unmodified may prevent you from seeing output in the xterm console window).

If you are using Ubuntu you will probably have to configure the xterm environment settings.

Go to: Tools -> Options -> Environment settings

Set the terminal field to the following:

/usr/bin/xterm -e

Fixing Opencv Xserver Error With Eclipse CDT

| Comments

When running your first OpenCV project with Eclipse CDT you may experience the "cannot connect to X server" error when calling code that requires the OpenCV image view or UI elements.

x server error




To fix it, simply set DISPLAY in your project's environment variables: make sure a file in your project is selected, then go to Run -> Run Configurations -> Environment and add a DISPLAY variable like the following:

x server error

Installing Opencv 2.4.5 on Ubuntu 12 With Eclipse CDT

| Comments


The following is a brief tutorial on getting the OpenCV library set up with Eclipse CDT on Ubuntu 12. The same Eclipse project settings detailed in this tutorial can be reused to build applications on top of OpenCV.

  1. Start by getting things up to date
    
    sudo apt-get update
    sudo apt-get upgrade
    
  2. Install opencv dependencies so we can compile the opencv library
    
    sudo apt-get install build-essential libgtk2.0-dev libjpeg-dev libtiff4-dev libjasper-dev libopenexr-dev cmake python-dev python-numpy python-tk libtbb-dev libeigen2-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev libqt4-dev libqt4-opengl-dev sphinx-common texlive-latex-extra libv4l-dev libdc1394-22-dev libavcodec-dev libavformat-dev libswscale-dev
    
  3. Grab the current stable release of OpenCV; at the time of this post it was 2.4.5:
    
    wget https://github.com/Itseez/opencv/archive/2.4.5.tar.gz
    tar -xvf 2.4.5.tar.gz
    
  4. Now we need to build a makefile with cmake. If you are just messing around and aren't sure which modules to install, you can run the following command, which will include the most common ones, including Python bindings. Otherwise skip to the sub-steps below to select which options you want.
    
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON \
        -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON \
        -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON ..
    
    If you chose the above generic build options continue to step 5.
    1. We will use the cmake gui to select which components to include, start by installing it:
      
      sudo apt-get install cmake-qt-gui
      sudo cmake-gui
      
    2. A user interface dialog should appear. Select the directory you extracted the OpenCV source files to as the source directory. Create a new build directory and select it in the GUI as the build destination. Then click the Configure button at the lower left, select "Unix Makefiles" as the generator, and choose the "use native compilers" option.
    3. The configuration process should populate the GUI dialog with the available components; tick off the desired ones, and hover over the right column with the mouse cursor to see a more detailed description.
    4. Click Generate; cmake should populate the build directory with the necessary makefiles. Check the output window to ensure there were no errors.
  5. Navigate to your build directory in a terminal and make. Then install.
    
    cd release
    make
    sudo make install
    
  6. To configure the dynamic linker we need to add a line to a file in /etc/ld.so.conf.d. The following command will open the file in a text editor (the file may be blank; that is fine).
    
    sudo gedit /etc/ld.so.conf.d/opencv.conf
    
    Add the line:
    
    /usr/local/lib
    
    and save it.
  7. To configure bash.bashrc:
    
    sudo gedit /etc/bash.bashrc
    
    And add:
    
    PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
    export PKG_CONFIG_PATH
    
  8. Log out/restart before moving on to the Eclipse installation.
  9. If you don't already have Eclipse with the CDT installed, do that first with the following command:
    
    sudo apt-get install eclipse eclipse-cdt g++
    
  10. Open Eclipse and create a new empty C++ project using the Linux GCC toolchain.
  11. Next we need to add some code to the project so we have something to build/run. I am going to take the bag-of-words example out of the /samples/cpp/ directory of the OpenCV project. Simply copy/paste the file into your new project.
  12. Next we need to tell Eclipse which libraries to include with the project and where to find them. Go to Project -> Properties on the file menu, or just right-click on the project in the projects pane and click Properties. A dialog should appear; click on C/C++ Build -> Settings, then click on Includes. Add the following to the include paths list. Note: if you need to compile projects with the older C++ API, replace opencv2 with opencv.
    
    /usr/local/include/opencv2
    
    Next go to the GCC C++ Linker tab and add the following to the library search paths list:
    
    /usr/local/lib
    
    then add the following libs to the library list:
    
    opencv_core
    opencv_nonfree
    opencv_imgproc
    opencv_highgui
    opencv_ml
    opencv_video
    opencv_features2d
    opencv_calib3d
    opencv_objdetect
    opencv_contrib
    opencv_legacy
    opencv_flann
    
    for other projects you can remove unnecessary libs or add other dependent ones.
  13. These project settings should now allow you to compile projects referencing OpenCV. Go to Project -> Build; the project should build. Then run it. If you are using the bag-of-words example from the samples folder you should see console output similar to the following:
    
    This program shows how to read in, train on and produce test results for the PASCAL VOC (Visual Object Challenge) data.
    It shows how to use detectors, descriptors and recognition methods
    Using OpenCV version %s
    2.4.5
    Call:
    Format:
     .//home/zac/dev/workspace2/Test2/Debug/Test2 [VOC path] [result directory]
           or:
     .//home/zac/dev/workspace2/Test2/Debug/Test2 [VOC path] [result directory] [feature detector] [descriptor extractor] [descriptor matcher]
    
    Input parameters:
    [VOC path]             Path to Pascal VOC data (e.g. /home/my/VOCdevkit/VOC2010). Note: VOC2007-VOC2010 are supported.
    [result directory]     Path to result diractory. Following folders will be created in [result directory]:
                             bowImageDescriptors - to store image descriptors,
                             svms - to store trained svms,
                             plots - to store files for plots creating.
    [feature detector]     Feature detector name (e.g. SURF, FAST...) - see createFeatureDetector() function in detectors.cpp
                             Currently 12/2010, this is FAST, STAR, SIFT, SURF, MSER, GFTT, HARRIS
    [descriptor extractor] Descriptor extractor name (e.g. SURF, SIFT) - see createDescriptorExtractor() function in descriptors.cpp
                             Currently 12/2010, this is SURF, OpponentSIFT, SIFT, OpponentSURF, BRIEF
    [descriptor matcher]   Descriptor matcher name (e.g. BruteForce) - see createDescriptorMatcher() function in matchers.cpp
                             Currently 12/2010, this is BruteForce, BruteForce-L1, FlannBased, BruteForce-Hamming, BruteForce-HammingLUT
    
    Tip: the bag-of-words example references each module individually; if you want to quickly reference all the free modules for testing, just include:
    
    #include "opencv.hpp"
    

References:
http://docs.opencv.org/doc/tutorials/introduction/linux_eclipse/linux_eclipse.html
http://www.samontab.com/web/2012/06/installing-opencv-2-4-1-ubuntu-12-04-lts/