SushiHangover

PowerShell, Learn it or Perish ;-)

GtkSharp-Explorer update for Irony

Irony GtkSharp Explorer

I was working on an Irony/C#-based DSL that I wrote a while back and noticed some strange namespace issues with the GTK UI (exposed only within Xamarin’s Stetic designer; I’m not sure how those naming conflicts were not a compile-time error…).

The updated source for my Gtk# addition to Irony is now on GitHub. Remember that my additions are on the “gtksharp-explorer” branch:

git branch --all
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/gtksharp-explorer
  remotes/origin/master
  remotes/origin/xplat-nunit-fix
git checkout gtksharp-explorer
  Branch gtksharp-explorer set up to track remote branch gtksharp-explorer from origin.
  Switched to a new branch 'gtksharp-explorer'

git branch --all
* gtksharp-explorer
  master
  remotes/origin/HEAD -> origin/master
  remotes/origin/gtksharp-explorer
  remotes/origin/master
  remotes/origin/xplat-nunit-fix
open Irony_All.MonoDevelop.sln


Grammar Explorer based on Gtk#

For cross-platform Irony work with Mono 3.2.X and MonoDevelop/Xamarin 4.1.X/4.2.X. Instructions for building on Mono:

Via the MonoDevelop/Xamarin IDE:

  • Release or Debug targets: load and build via Irony_All.MonoDevelop.sln

Via the command line:

  • Release:
    xbuild /p:Configuration=Release Irony_All.MonoDevelop.sln
    mono Irony.GrammarExplorer.GtkSharp/bin/Release/Irony.GrammarExplorer.GtkSharp.exe

  • Debug:
    xbuild /p:Configuration=Debug Irony_All.MonoDevelop.sln
    mono Irony.GrammarExplorer.GtkSharp/bin/Debug/Irony.GrammarExplorer.GtkSharp.exe

Ellcc.org build fix for OS-X

In my ARM Bare Metal searches for using Clang/LLVM, I stumbled across The ELLCC Embedded Compiler Collection, which provides a one-stop build environment for all the LLVM tools for cross-platform compiling.

I’m not sure if they are trying to be a YAGARTO for LLVM vs. GCC. I am waiting for a reply to a post on their forum to understand the actual code changes to Clang/LLVM that they include (if any) and will update when I hear back. (Update: Read Rich’s full reply; it cleared everything up for me.)

ELLCC is really just a weekly repackaging of clang/LLVM with two minor additions.
1. The triples of the form -ellcc- (where OS is linux for now, but will include others eventually) control how include files and libraries are found. You might notice for example that the #include path for ELLCC...
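
As an aside, you can see how a target triple affects header lookup by asking the compiler to dump its include search paths; a stock clang invocation (not ELLCC-specific, and the triple below is just an example) looks like this:

# Dump the #include search paths clang would use for a given target triple
# (stock clang shown here; the triple is only an example)
clang -target arm-none-linux-gnueabi -x c -v -E /dev/null 2>&1 | \
    sed -n '/#include <...> search starts here:/,/End of search list./p'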

But in the meantime I figured I would give it a build and include it in my GCC/ARM vs. Clang/LLVM-ARM testing, but I hit a build error on OS-X. When linking QEMU, libintl (GNU’s gettext) is not found:

LINK  i386-softmmu/qemu-system-i386
ld: library not found for -lintl
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [qemu-system-i386] Error 1
make: *** [subdir-i386-softmmu] Error 2

I do have gettext on my system, but it is in my “Cellar”, as I use Homebrew as my package manager and try not to install anything into /usr/bin or other system locations that can muck everything up; that way I can run parallel versions of different applications. (If I have to ‘sudo’ an open-source software install, it is not going on my system unless there is a serious reason for it and I trust the code from a security viewpoint.)

FYI: Homebrew does not ‘hard’ link gettext, since compiling software outside of Homebrew against it can cause problems:

brew link gettext
Warning: gettext is keg-only and must be linked with --force
Note that doing so can interfere with building software.

So I mod’d the “ellcc/gnu/build” script to force brew to link gettext before compiling/linking QEMU and to unlink it afterwards.

svn diff build
Index: build
===================================================================
--- build (revision 3780)
+++ build (working copy)
@@ -69,6 +69,10 @@
     ppc-linux-user ppc64-linux-user ppc64abi32-linux-user sparc-linux-user"
 fi
 echo Configuring package qemu for $targets
+if [ ! -e "$(which brew)" ]; then
+    ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
+fi
+brew link gettext --force
 qemu_target_list=`echo $qemu_target_list | sed -e "s/ /,/g"`
 make DIR=src/qemu CC=$cc HCC=$hcc AR=$ar TARGET=$host OS=$os \
     targetlist=$qemu_target_list haslibs=$haslibs \
@@ -76,6 +80,7 @@
     qemu.configure || exit 1

 make -C src/qemu || exit 1
+brew unlink gettext

 # Finally, install into the target specific bin dir.
 mkdir -p $bindir

Everything builds fine after that…

The entire file is here:

OS-X LLVM / CLANG Build

I wanted to test out some C code that I am writing for an ARM Bare Metal (Embedded) project in QEMU (qemu-system-arm). Normally I would just use the GNU Tools for ARM Embedded Processors, but I was wondering about the current state of LLVM for cross-compiling to bare-metal ARM.

Since this is a new area for me and I am having a dang hard time finding what is and isn’t supported in Clang/LLVM for embedded ARM development, I figured I would compile the latest version and compare the code produced by the gcc and Clang compilers.

Thus I needed the latest and greatest Clang/LLVM and did not feel like nurse-maiding a huge git download and a long compile session, so I spent a minute and hacked up a really simple script so I could catch up on “Game of Thrones” ;-)
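
If you want to do the same, a minimal sketch looks something like the following. This is not my exact script; the clone URL, install prefix, and cmake options are illustrative and assume the llvm-project monorepo plus cmake and ninja on your PATH:

#!/bin/bash
# Fetch (or update) the LLVM/Clang sources and build them into a local prefix,
# keeping everything out of the system directories.
set -e
SRC="$HOME/src/llvm-project"        # illustrative source location
PREFIX="$HOME/toolchains/clang-dev" # illustrative install prefix

if [ ! -d "$SRC" ]; then
    git clone --depth 1 https://github.com/llvm/llvm-project.git "$SRC"
else
    (cd "$SRC" && git pull --ff-only)
fi

mkdir -p "$SRC/build" && cd "$SRC/build"
cmake -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX="$PREFIX" \
    -DLLVM_ENABLE_PROJECTS="clang;lld" \
    -DLLVM_TARGETS_TO_BUILD="ARM;X86" \
    ../llvm
ninja && ninja install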

FYI: Cross-compilation using Clang
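
For the actual comparison, emitting assembly from each compiler for the same source file is the quickest check. The flags below are just an example for a Cortex-M3 part (not from my script); adjust -mcpu / -target for your target:

# Emit ARM assembly from both compilers for the same C file and diff the results
arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -O2 -S main.c -o main-gcc.s
clang -target armv7m-none-eabi -mcpu=cortex-m3 -O2 -S main.c -o main-clang.s
diff -u main-gcc.s main-clang.s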

OMeta Binary data parsing

Josh Marinacci has a blog post about using OMeta to parse binary data. While it did not include a complete cut-and-paste of all the JavaScript code needed to run it in OMeta/JS, I saved his grammar for review, as I was working on binary parsing with an OMeta that used C# as the host language. Recently I gave the link to his post to someone else, but it turned out Josh’s blog was offline (crashed?). (Update: it appears his blog is working again, so you can refer to the link below for his original post.)

So I dug up what I had and whipped up an OMeta/JS example as a complete working proof of concept, and here are the results. I am not a JavaScript kind of guy, so be nice regarding the code. ;-)

Note: The W3 spec has 18 chunk types that can be defined in PNG files. I added ‘iTXt’ to Josh’s original because the PNG I was using as an example had a large chunk of XML data in it, but a lot of chunk types are still missing, as this is just a proof of concept and the original binaries I was parsing were not PNGs but custom AMF2 byte streams that were being converted to objects on the fly via ILGenerator in C#…

Josh’s original PNG parsing post is at the following link: http://joshondesign.com/2013/03/18/ConciseComputing

And his related email thread on vpri.org: http://vpri.org/pipermail/ometa/2013-March/000524.html

So if you load up OMeta/JS, the complete grammar and the JavaScript functions needed are shown below. Just open your JS console before doing a “Do It” so you can see the chunk information found in the PNG and interact with the final object.

Here is an example console output of parsing a PNG file via this OMeta/JS script:

[Log] loaded
[Log] got 24648 bytes
[Log] i32 : 13 <= [13, 0, 0, 0]
[Log] ChunkType :IHDR : [73, 72, 68, 82]
[Log] i32 : 139 <= [139, 0, 0, 0]
[Log] i32 : 119 <= [119, 0, 0, 0]
[Log] i32 : 25 <= [25, 0, 0, 0]
[Log] ChunkType :tEXt : [116, 69, 88, 116]
[Log] String:SoftwareAdobe ImageReady (...byteArrayOmitted...)
[Log] i32 : 1974 <= [182, 7, 0, 0]
[Log] ChunkType :iTXt : [105, 84, 88, 116]
[Log] String:ML:com.adobe.xmp<?xpacket begin="" id="W5M (...only first 50 bytes shown...)
[Log] i32 : 22568 <= [40, 88, 0, 0]
[Log] ChunkType :IDAT : [73, 68, 65, 84]
[Log] i32 : 0 <= [0, 0, 0, 0]
[Log] ChunkType :IEND : [73, 69, 78, 68]
[Log] ["PNG HEADER", Array[5], Array[0]]

This is a working example of parsing binary data in OMeta/JS.

OMeta/JS PNG Parse
ometa BinaryParser <: Parser {
    // Portable Network Graphics (PNG) Specification (Second Edition)
    // http://www.w3.org/TR/PNG/
    // Note: not all chunk types are defined, this is just a POC
    //entire PNG stream
    START  = [header:h (chunk+):c number*:n] -> [h,c,n],

    //chunk definition
    chunk  = int4:len str4:t apply(t,len):d byte4:crc
        -> [#chunk, [#type, t], [#length, len], [#data, d], [#crc, crc]],

    //chunk types
    IHDR :len  = int4:w int4:h byte:dep byte:type byte:comp byte:filter byte:inter
        -> {type:"IHDR", data:{width:w, height:h, bitdepth:dep, colortype:type, compression:comp, filter:filter, interlace:inter}},
    gAMA :len  = int4:g                  -> {type:"gAMA", value:g},
    pHYs :len  = int4:x int4:y byte:u    -> {type:"pHYs", x:x, y:y, units:u},
    tEXt :len  = repeat('byte',len):d    -> {type:"tEXt", data:toAscii(d)},
    iTXt :len  = repeat('byte',len):d    -> {type:"iTXt", data:toShortAscii(d)},
    tIME :len  = int2:y byte:mo byte:day byte:hr byte:min byte:sec
        -> {type:"tIME", year:y, month:mo, day:day, hour:hr, minute:min, second:sec},
    IDAT :len  = repeat('byte',len):d    -> {type:"IDAT", data:"omitted"},
    IEND :len  = repeat('byte',len):d    -> {type:"IEND"},

    //useful definitions
    byte    = number,
    header  = 137 80 78 71 13 10 26 10    -> "PNG HEADER",        //mandatory header
    int2    = byte:a byte:b               -> byteArrayToInt16([b,a]),  //2 bytes to a 16bit integer
    int4    = byte:a byte:b byte:c byte:d -> byteArrayToInt32([d,c,b,a]), //4 bytes to 32bit integer
    str4    = byte:a byte:b byte:c byte:d -> toChunkType([a,b,c,d]),  //4 byte string
    byte4   = repeat('byte',4):d -> d,
    END
}
BinaryParser.repeat = function(rule, count) {
  var ret = [];
  for(var i=0; i<count; i++) {
     ret.push(this._apply(rule));
  }
  return ret;
}
toAscii = function(byteArray) {
  var foo = String.fromCharCode.apply(String, byteArray);
  console.log ("String:" + foo + " (...byteArrayOmitted...)");
  return foo;
}
toShortAscii = function(byteArray) {
  var embeddedText = String.fromCharCode.apply(String, byteArray);
  // The iTXt chunk can contain a lot of text/xml, so truncate for proof of concept
  console.log ("String:" + embeddedText.slice(1, 51) + " (...only first 50 bytes shown...)");
  return embeddedText;
}
toChunkType = function(byteArray) {
  var aChunkType = String.fromCharCode.apply(String, byteArray);
  console.log ("ChunkType :" + aChunkType + " : " + byteArray );
  return aChunkType;
}
byteArrayToInt32  = function(localByteArray) {
  var uint8array = new Uint8Array(localByteArray);
  var uint32array = new Uint32Array(
                    uint8array.buffer,
                    uint8array.byteOffset + uint8array.byteLength - 4,
                    1 // 4Bytes long
                  );
  var newInt32 = uint32array[0];
  console.log ( "i32 : " + newInt32 + " <= " + localByteArray );
  return newInt32;
}
byteArrayToInt16  = function(byteArray) {
  // 2 bytes (reversed by the caller, same convention as byteArrayToInt32) to a 16-bit integer
  var newInt16 = (byteArray[1] << 8) | byteArray[0];
  console.log ( "i16 : " + newInt16 + " <= " + byteArray );
  return newInt16;
}
fetchBinary = function() {
    var req = new XMLHttpRequest();
    req.open("GET","http://sushihangover.azurewebsites.net/Content/Static/IronyLogoSmall.png",true);
    req.responseType = "arraybuffer";
    req.onload = function(e) {
        console.log("loaded");
        var buf = req.response;
        if(buf) {
            var byteArray = new Uint8Array(buf);
            console.log("got " + byteArray.byteLength + " bytes");
            var arr = [];
            for(var i=0; i<byteArray.byteLength; i++) {
                arr.push(byteArray[i]);
            }
            // watch out if you uncomment the next line, it can kill your browser w/ large png files
            // console.log(arr);
            var parserResults = BinaryParser.match(arr, "START");
            console.log(parserResults);
        }
    }
    req.send(null);
};
fetchBinary();

Moving to GitHub Pages

I am moving SushiHangover from Blogger and an Azure-based blog engine to GitHub Pages using Octopress, so hang in there for a few days as I get the older content parsed and converted to Markdown…

As always, email or post a comment if you need anything.

-R


MonoMac: Detect when Mac sleeps or wakes

Saw a question on the Xwt.Mac group about getting Sleep and Wake events from MonoMac/C# on OS-X. Normally I would look for those events on the NSApplication default notification center, but a quick look at the Apple developer site pointed me to the NSWorkspace notification center instead. Another quick look in MonoMac showed that, luckily, those are already exposed, so you do not have to do the AddObserver work yourself; oddly, searching the ‘online MonoMac API’ did not return any direct results(?)… So here is my answer from that group in case anyone else googles/bings this in the future:

Sleep and Wake are available on the NSWorkspace’s notification center, and MonoMac exposes those so you do not have to write the AddObserver code yourself:

Apple Dev info on NSWorkspaceWillSleepNotification & NSWorkspaceDidWakeNotification

C# "Wake and Sleep Events"
Console.WriteLine ("Add the sleep/wake observers");
NSWorkspace.Notifications.ObserveWillSleep ((object sender, NSNotificationEventArgs e) => {
    Console.Write ("Your Mac is getting sleepy\n");
});
NSWorkspace.Notifications.ObserveDidWake ((object sender, NSNotificationEventArgs e) => {
    Console.Write ("Time to go to work again\n");
});

PlayScript: What happened to the open source version on GitHub?

Poof: An open source project disappears: https://github.com/playscript/playscript-mono.git

I am assuming, with the removal of PlayScript’s public repo on GitHub, that the project is becoming a commercial offering from Xamarin (or Zynga) and that future releases will come with a license change.

I am assuming there will be some big reveal in the future when Xamarin (and Zynga?) announces that PlayScript is in beta for licensed users of Studio, Xamarin.iOS and Xamarin.Android… Or not…

The really important note here is that if someone pulls a GitHub repo, you will LOSE your GitHub forks! Gone, Poof, No Mas, No warning, No chance to make a backup… I sure hope you had a complete local backup of your work, because it is now gone from GitHub.
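
If you care about an upstream repo (or your own fork of one), keep a full local mirror around. A bare --mirror clone grabs every branch and tag and can be refreshed with one command (the URL and directory below are just placeholders):

# One time: make a bare mirror of the repo (all branches, tags and refs)
git clone --mirror https://github.com/someuser/somerepo.git somerepo-backup.git

# Whenever you want to refresh the backup
cd somerepo-backup.git && git remote update --prune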