Lloyd.NET

Programing experiments

Client Side Web Application primer

Download sample’s source code:

 

I plan to share my experiment with a client side application and an MVC backend. In this sample, as in many other web sites, I use many 3rd party libraries (hint: look at the README / package list: EntityFramework (Extended), jQuery, Knockout, chosen, bootstrap, datatables). But I don’t plan to dwell much on them in this post. Instead I want to focus on how I use T4 templates to generate a strongly typed WebAPI server proxy in TypeScript and use Knockout to drive the UI. With a little bit extra on WebAPI and SQL.

This post is intended for web developers with basic to intermediate knowledge of MVC, JavaScript, jQuery.ajax() and TypeScript. I am not going to explain the whole application, just the challenging parts and the solutions I implemented. For the rest there is the source code.

My sample application (largely inspired by work needs) is a 3-page application. The first page manages dynamic questionnaires (with dynamic groups of dynamic questions), the second page answers those questionnaires and the third queries the answers.

Compatibility remark: this app has been tested on IE 8, IE 11, Chrome 30, Firefox 24

Here is a screenshot of what will be achieved:

 QPage1 QPage2 QPage3

 

Getting Started

I am using VS2013 for this, and if you don’t have it some options might be slightly different. At least VS2012 is required for the Visual Studio TypeScript plugin to work.
To be able to open the solution or replicate it, you will need to install the following:

  1. First install the TypeScript plugin for visual studio (I wrote this app with TypeScript 0.9.1).
  2. From tools => extensions: install Web Essentials
  3. From tools => extensions: install T4 Tangible editor, it’s not required but makes working with T4 template more pleasant.
  4. Prepare the database with the 2 SQL scripts (at the root of the ZIP file). “createDB.sql” creates the schema (question) and tables, “reports.sql” creates the search-related SQL code (one SQL table type and one stored procedure) (as I’m not using EF code first).

Now you can either use the provided project (you need to modify the connection string in the Web.config to point to your DB first!) or:

  1. Create a basic blank web project supporting MVC and WebAPI.
  2. Install required packages (list provided in README.txt) with NuGET using the package manager console.
  3. Create a “~/MyScripts” folder, where custom / handwritten scripts will be written, so they are clearly separated from 3rd parties (personal preference, doesn’t matter much).

 

Remark I’m using chosen instead of the more recent select2 for my combobox as select2 seems to be very slow on IE8.

Remark I’m using jQuery 1.10.2 instead of 2.x for IE8 support.

Remark I’m using EF5 (instead of the newer EF6) as EF.Extended (which I use for Future queries) doesn’t support EF6 yet. Future queries are used to group multiple queries into one database command, hence reducing network and connection overhead.

Remark I chose Knockout over AngularJS for my template engine because Knockout is fully declarative, i.e. it needs absolutely no code to set up and choose templates, whereas AngularJS is a mix of declarative / imperative and you can’t escape writing infrastructure code when using templates heavily.

 

About WebAPI

Web API looks very much like MVC. It has controllers (inheriting from ApiController instead of Controller) that return plain objects (instead of ActionResult). The HTTP headers are used to select the input and output serialization format. It also has its own routing mechanism, where one should set a route that won’t conflict with MVC (so as to be unambiguous).

Here is the WebAPI configuration I use in my project:

public static void Register(HttpConfiguration config)
{
    config.MapHttpAttributeRoutes();
    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{action}",
        defaults: new { }
    );
    // Uncomment the line below to disable XML serialization
    //config.Formatters.Remove(config.Formatters.XmlFormatter);
}

Unlike the out-of-the-box WebAPI config, I specify {action} in my route, as my WebAPI controllers have many methods returning heterogeneous types, unlike most samples on the net where there is one controller per data type with 4 actions (select/insert/update/delete).

 

Here is a much simplified version of my WebAPI controller:

public class SearchResult
{
    /** properties... **/
}
public class QuestionaireApiController : ApiController
{
    public List<SearchResult> GetAllAnswers()
    {
        List<SearchResult> results;
        /** do the thing, return the result **/
        return results;
    }
    /** more methods **/
}

This can be called with a simple HTTP GET request at: http://MyApp/api/QuestionaireApi/GetAllAnswers

And it will return an object (as JSON / XML or whatever other format it supports and the query specifies).

Web API uses the start of a method’s name to determine which HTTP verb it responds to, but one can also decorate methods with [HttpGet], [HttpPost], etc. to specify the verb explicitly.
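For example (a sketch; these two methods are not part of the sample), a method whose name starts with “Delete” is bound to HTTP DELETE by convention, while an attribute overrides the convention:

public class QuestionaireApiController : ApiController
{
    // Name starts with "Delete" => responds to HTTP DELETE by convention.
    public void DeleteAnswerSet(int id) { /* ... */ }

    // The name doesn't start with a known verb (which would default to POST),
    // so the attribute makes it respond to GET instead.
    [HttpGet]
    public int CountAnswerSets() { /* ... */ return 0; }
}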

Arguments are passed on the URL by default, but one (and only one) argument can be passed in the request body. This is useful for complex objects or models that can hardly be passed in the URL, as in:

[HttpPost]
public List<SearchResult> SearchAnswers(bool isOrAnd, [FromBody]List<SearchCriteria> criterium)
{
    List<SearchResult> results;
    /** do the thing, return the result **/
    return results;
}

Remark HTTP GET methods don’t support [FromBody] parameters.

 

About T4 Code Proxy Generation

I would like to encapsulate my WebAPI as a ServerProxy class that I can just call. The GetAllAnswers() WebAPI method above could be encapsulated with an AJAX call as follows:

export class ServerProxy {
    cache = false;
    timeout = 2000;
    async = true;
    constructor(public baseurl: string) { }

    GetAllAnswers(): JQueryPromise<Array<ISearchResult>> {
        var res = $.ajax({
            cache: this.cache,
            async: this.async,
            timeout: this.timeout,
            dataType: 'json',
            contentType: 'application/json',
            type: 'GET',
            url: this.baseurl + 'api/QuestionaireApi/GetAllAnswers'
        });
        return res;
    }
}

Thanks to TypeScript it is strongly typed, which helps remove parameter errors. However, what I really want is for the server proxy code to be automatically generated and kept in sync with my server code. Enter T4!

If you right click somewhere in the Solution Explorer, you can create a new (T4) Text Template

Q4

I won’t go into much detail on the intricacies of T4 templates; MSDN is there for that. Here I want to explain how to explore the current project’s code.

When one creates a T4 template, the first line will look like this:

<#@ template debug="false" hostspecific="false" language="C#" #>

The hostspecific property is the one that will give you access to VS data. It is false by default; you want to set it to true.

When it is true you can access some Visual Studio services; of interest are these 2:

DTE DTE { get { return (DTE)((IServiceProvider)this.Host).GetService(typeof(DTE)); } }
Project ActiveProject { get { return DTE.ActiveDocument.ProjectItem.ContainingProject; } }

The DTE interface is in the EnvDTE namespace. This is the namespace one uses to browse the project for code files and find their classes and methods.

Armed with that, one can explore all files in the project and find classes and methods. A Project has a ProjectItems property which contains ProjectItem instances (each of which also has a ProjectItems property). Each ProjectItem has a FileCodeModel property which might or might not be null (depending on whether it is a code file or not).

The FileCodeModel has a CodeElements property which enumerates CodeElement instances (each of which also has a Children property of type CodeElements).

Each CodeElement interface has a property Kind which tells you what it is (class, function, interface, attribute, and so on…) and then it can be cast to the appropriate interface for more info (CodeClass, CodeFunction, …).

For example, the following code can be written to enumerate the project:

DTE DTE { get { return (DTE)((IServiceProvider)this.Host).GetService(typeof(DTE)); } }
Project ActiveProject { get { return DTE.ActiveDocument.ProjectItem.ContainingProject; } }
int Level; // nesting depth, maintained by the enumerators below (handy for indented debug output)

IEnumerable<ProjectItem> EnumerateProjectItem(ProjectItem p)
{
    yield return p;
    Level++;
    foreach (var sub in p.ProjectItems.Cast<ProjectItem>())
        foreach (var sub2 in EnumerateProjectItem(sub))
            yield return sub2;
    Level--;
}
IEnumerable<ProjectItem> EnumerateProjectItem(Project p)
{
    foreach (var sub in p.ProjectItems.Cast<ProjectItem>())
        foreach (var sub2 in EnumerateProjectItem(sub))
            yield return sub2;
}
// enumerate projects and code elements
IEnumerable<CodeElement> EnumerateCodeElement(CodeElement element)
{
    yield return element;
    Level++;
    foreach (var sub in element.Children.Cast<CodeElement>()) {
        foreach(var sub2 in EnumerateCodeElement(sub))
            yield return sub2;
    }
    Level--;
}
IEnumerable<CodeElement> EnumerateCodeElement(CodeElements elements)
{
    Level++;
    foreach (var sub in elements.Cast<CodeElement>()) {
        foreach(var sub2 in EnumerateCodeElement(sub))
            yield return sub2;
    }
    Level--;
}
IEnumerable<CodeElement> EnumerateCodeElement(FileCodeModel code)
{
    return EnumerateCodeElement(code.CodeElements);
}
IEnumerable<CodeElement> EnumerateCodeElement(ProjectItem p)
{
    return EnumerateCodeElement(p.FileCodeModel.CodeElements);
}

// cache useful data for the TypeScript generation
void ParseProject()
{
    if (_allClasses == null)
    {
        _allClasses = new List<CodeClass>();
        _allEnums = new List<CodeEnum>();
        var proj = ActiveProject;
        foreach (var item in EnumerateProjectItem(proj))
        {
            var code = item.FileCodeModel;
            if (code == null)
                  continue;
            foreach(var e in EnumerateCodeElement(code))
            {
                switch (e.Kind)
                {
                    case vsCMElement.vsCMElementClass:
                        _allClasses.Add((CodeClass)e);
                        break;
                    case vsCMElement.vsCMElementEnum:
                        _allEnums.Add((CodeEnum)e);
                        break;
                }
            }
        }
    }
}
List<CodeClass> _allClasses;
List<CodeEnum> _allEnums;
List<CodeClass> AllClasses {
    get {
        ParseProject();
        return _allClasses;
    }
}
List<CodeEnum> AllEnums {
    get {
        ParseProject();
        return _allEnums;
    }
}

Now one can explore the code in the current project and write a proxy generator. I won’t go too much into the details of my implementation; I will just talk a little more about the result.
I have 2 generators: one to generate TypeScript definitions of my JSON exchange objects, and one to generate the TypeScript server proxy. I created an attribute, ToTSAttribute, which I use to flag what I want recreated in TypeScript.

I modified my EF template to mark my EF classes with ToTSAttribute, since I want to manage them with this UI. I generate 2 TypeScript interfaces for each of my classes: the normal exchange interface, and a Knockout-friendly interface (more on that later).
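I won’t reproduce the full templates here, but a minimal sketch of the generation code gives the idea (this goes in a class feature block of the .tt, alongside the helpers above; the marker attribute itself can be as simple as an empty Attribute subclass, and the C#-to-TypeScript type mapping here is deliberately simplified compared to the real generator):

// Emit a TypeScript interface for every class flagged with [ToTS] (sketch only;
// the real template also emits the Knockout-friendly variant of each interface).
void WriteTSInterfaces()
{
    foreach (CodeClass c in AllClasses)
    {
        bool flagged = false;
        foreach (CodeAttribute a in c.Attributes)
            if (a.Name.Contains("ToTS")) { flagged = true; break; }
        if (!flagged)
            continue;

        WriteLine("export interface I" + c.Name + " {");
        foreach (CodeElement e in c.Members)
        {
            if (e.Kind != vsCMElement.vsCMElementProperty)
                continue;
            var p = (CodeProperty)e;
            WriteLine("    " + p.Name + ": " + ToTSType(p.Type.AsString) + ";");
        }
        WriteLine("}");
    }
}

// Very rough C# to TypeScript type mapping.
string ToTSType(string csType)
{
    switch (csType)
    {
        case "int": case "long": case "float": case "double": case "decimal":
            return "number";
        case "string": return "string";
        case "bool": return "boolean";
        default: return "any";
    }
}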

 

About TypeScript

Here are a couple of tips I picked up on how to better use TypeScript.

 

TypeScript files are .ts files. They are compiled (when building the solution) into .js files with the same name. Sadly, at this stage the .js files are not marked as part of the build output and, in case of an automated build, one should manually add them to the deployed files!

One solution is to edit the project file to add each of the produced .js files to it: “Unload Project”, then “Edit MyProject.csproj”, then use the DependentUpon tag.

<TypeScriptCompile Include="MyScripts\Common.ts" />
<Content Include="MyScripts\Common.js">
  <DependentUpon>Common.ts</DependentUpon>
</Content>

 

I was not able to use RequireJS with TypeScript 0.9.1, so I just explicitly include all the .js files needed on each page. Fortunately there aren’t that many anyway (yet)!

 

There is a silly issue about overriding and method declarations, summarized in the code below:

class A {
    Who() {
        return "Who.A";
    }
    Who2 = function () {
        return "Who2.A";
    }
}
class B extends A {
    Who() {
        return super.Who();
    }
    // won't compile
    //Who2 = function () {
    //    return super.Who2();
    //}
}

There are 2 ways of declaring a member method: as a method (Who), or as a property holding a function (Who2). The latter can’t be overridden. Also, IntelliSense sometimes gets confused about the special “this” variable.

 

Classes can’t be extended (you can’t add properties to a strongly typed class object), but you can always extend an interface, as in:

interface IPoint {
    getDist(): number;
}

interface IPoint {
    translate(x: number, y: number);
}

Interfaces are best used to describe data exchange objects. The T4 template generates the interface declarations, which are complemented with hand-written extra properties added and used by the client-side view model. Also, one can’t really safely cast from one class to another (as nothing actually happens at runtime), whereas interface casting is more appropriate.

 

About KnockoutJS

KnockoutJS is a library to do MVVM in JavaScript. For that it has a templating engine and introduces observable JavaScript objects. There is great documentation and live examples on the web site.

To set up KO, write a few templates and tags with data bindings, then apply a model to the page with:

ko.applyBindings(model) // whole page
ko.applyBindings(model, domElement) // part of the page

Remark The model could be any object, but if you want 2-way binding (i.e. the UI automatically updating from data changes) you need to use observables.

 

Let’s say you want to make a file system tree view with the following data model:

interface IDirectory {
    subdirectories: Array<IDirectory>;
    files: Array<string>;
    name: string;
}

To have 2-way binding you need to use observables, as in this version:

interface IKODirectory {
    subdirectories: KnockoutObservableArray<IKODirectory>;
    files: KnockoutObservableArray<string>;
    name: KnockoutObservable<string>;
}

You can then display a particular property (with 2-way binding) with some Knockout-specific binding in the data-bind attribute, as in:

<span data-bind="text: name"></span>
First Child <span data-bind="text: subdirectories()[0].name"></span>

where the text property (in the data-bind attribute) is a path to an observable.

Remark the data-bind attribute is where all the Knockout magic resides.

 

You can turn any simple value into an observable with ko.observable(value); for arrays use ko.observableArray(array); for objects use ko.mapping.fromJS(obj) (it’s a plugin that needs to be downloaded separately), which recursively turns every property into an observable. To get the value of an observable you just invoke it: myObservable(); to set it: myObservable(newValue). To be notified of changes you can subscribe: myObservable.subscribe(function(newValue) { ... }).

 

Knockout Template

Where Knockout really shines (more than AngularJS, that is) is in how easy it is to define and use reusable templates. Here is a recursive template to display the directory object defined just before.

<script id="tplDirectory" type="text/html">
    <span data-bind="text: name"></span>
    <div style="margin-left:2em;" data-bind="template: {
        name: 'tplDirectory',
        foreach: subdirectories }"></div>
</script>

<div data-bind="template: { name: 'tplDirectory', data: root }"></div>

Templates are defined in script tags and referenced by their id attribute. To use a template, pass its name and the data to bind. And that’s all there is to it; the above sample is fully functional!

 

About the Application

There are 3 views: Manage.cshtml (defines the questionnaires), Answer.cshtml (answers them), Query.cshtml (searches the answers). The MVC controller methods are empty, just returning the views, this being a client-side app. Each page shares the common Questionnaire.js (generated from Questionnaire.ts) plus a page-specific JavaScript file.

I defined 2 sets of KnockoutJS templates. One set is for editing objects, as the edit screens are pretty much all the same and numerous. The other set is for viewing, and those are used on every screen. Because the templates are shared by all views, I wrote them in a partial view (PartialTemplates.cshtml) and the data model for them should be an APPModel class (defined in the common Questionnaire.ts).

 

Editing Questionnaire

My T4 template generates IQuestionaire, IKOQuestionaire, IQGroup, IKOQGroup, IQuestion, IKOQuestion interfaces, which I extend in my hand-written view model code (with extra info needed for the UI to work) as follows:

interface IAppItemData {
    app_type: QOType;
    app_selected: KnockoutObservable<boolean>;
    app_editing: KnockoutObservable<boolean>;
}
module WebQuestionaire {
    interface IKOQuestionaire extends IAppItemData {
        app_answerSet: KnockoutObservable<IKOQAnswerSet>;
        app_groups: KnockoutObservableArray<IKOQGroup>;
    }
    interface IKOQGroup extends IAppItemData {
        app_questions: KnockoutObservableArray<IKOQuestion>;
    }
    interface IKOQuestion extends IAppItemData {
        app_options: KnockoutObservableArray<IKOQOption>;
        app_answer: KnockoutObservable<IKOQAnswer>;
    }
}

I use ko.mapping.fromJS() to turn my data exchange objects into the KO-friendly interfaces:
var deo: IQuestionaire /* = something */
var qs = <IKOQuestionaire> ko.mapping.fromJS(deo);

I also make them inherit from a common interface and add some extra UI properties, including an app_type property so my model’s methods can just take an IAppItemData and use app_type to find out what kind of item it is. All my extra members have an obvious prefix (app_) to avoid colliding with the EF-generated data classes.

Then my questionnaire view model looks like this, with 3 similar properties (all APPItem&lt;T&gt;), one for each column:

class APPModel {
    questionaires = new APPItem<WQ.IKOQuestionaire>({
        /* init data */
    });
    groups = new APPItem<WQ.IKOQGroup>({
        /* init data */
    });
    questions = new APPItem<WQ.IKOQuestion>({
        /* init data */
    });

    model: WQ.IQuestionaireConfig;

    constructor(model?: WQ.IQuestionaireConfig) {
        if (model)
            this.load(model);
    }
    load(model: WQ.IQuestionaireConfig) {
        /** set up data, extends exchange data **/
    }
}

All my server proxy methods return a jQuery promise, whose result I can act upon with the .then() method. I then set up the Knockout model for the page by invoking my server proxy and binding the result.

declare var pserver: WQ.QuestionaireApiProxy;

$(document).ready(function () {
    pserver.GetQConfig(null).then(
        function(data: WQ.IQuestionaireConfig) {
            var model = new APPModel(data);
            ko.applyBindings(model);
        }
        , onGenericAjaxFail);
});

Remark all my editing methods are in the root APPModel object. To access them in all the templates I use the KO property “$root”, which is the model applied at that location (as opposed to the current model “$data”, which differs in case of recursive template use).

 

The editing columns are displayed using the Bootstrap grid and a Knockout template:

<div class="row rpadded">
    <div class="col-md-4 qsection" data-bind="template: { name: 'template-column', data: questionaires }"></div>
    <div class="col-md-4 qsection" data-bind="template: { name: 'template-column', data: groups }"></div>
    <div class="col-md-4 qsection" data-bind="template: { name: 'template-column', data: questions }"></div>
</div>

At the top there are 3 columns set up with bootstrap grid layout (class: “row”, “col-md-4”) each with an identical knockout template (data-bind: template: name) of my column data (“questionaires”, “groups”, “questions”)

Finally the UI looks like this, with the reusable templates marked in red:

image

The template for editing the items might depend on the item, so instead of being a string it’s a function (returning a string) on APPModel (which I access with “$root”).

Every button calls an action on my model, which updates the data model, which in turn automatically updates the UI.

For example, here is how the “+” button is handled:

<button type="button" class="btn btn-default navbar-btn" data-bind="click: function () { $root.addItem(id); }">
    <span class="glyphicon glyphicon-plus"></span>
</button>

In the data-bind I use the click binding to call a method on my model which just creates a new element. All methods editing the objects call the server first and only do their work if the server call is successful, hence making sure the database is always up to date.

addItem(id: QOType) {
    var self = this;
    var name: string;
    switch (id) {
        case this.questionaires.id:
            name = "New Questionnaire";
            pserver.CreateQuestionaire(name).then(nid => {
                var Q = new WQ.Questionaire();
                Q.ID = nid;
                Q.Name = name;
                Q.Label = name;
                self.questionaires.items.push(self.extendItemData(ko.mapping.fromJS(Q), QOType.Questionnaire));
            }, onGenericAjaxFail);
            break;
        /* other cases */
    };
}

Finally the server method is vanilla Entity Framework code:

[HttpPost]
public int CreateQuestionaire(string nameAndLabel)
{
    using (var ctxt = QuestionaireEntities.Create())
    {
        var q = new DD.Questionaire
        {
            Name = nameAndLabel,
            Label = nameAndLabel,
        };
        ctxt.Questionaires.Add(q);
        ctxt.SaveChanges();
        return q.ID;
    }
}

Of interest is the GetQConfig() method (which returns either all questionnaire data, or only the data for a particular questionnaire), which uses .Future() to turn multiple EF queries into a single database call!
Behold, only one database call is made when the method below executes:

public QuestionaireConfig GetQConfig(int? id)
{
    using (var ctxt = QuestionaireEntities.Create())
    {
        // use .Future() for performance // to have only 1 SQL query
        var questionaires = ctxt.Questionaires.Where(x => id == null || x.ID == id).Future();
        var qgroups = ctxt.QuestionaireGroups.Where(x => id == null || x.Questionaire.ID == id).Future();
        var groups = ctxt.QGroups.Where(x => id == null || x.QuestionaireGroups.Any(qg => qg.Questionaire.ID == id)).Future();
        var gquestions = ctxt.QGroupQuestions.Where(x => id == null || x.QGroup.QuestionaireGroups.Any(qg => qg.Questionaire.ID == id)).Future();
        var questions = ctxt.Questions.Where(x => id == null || x.QGroupQuestions.Any(gq => gq.QGroup.QuestionaireGroups.Any(qg => qg.Questionaire.ID == id))).Future();
        var options = ctxt.QOptions.Where(x => id == null || x.Question.QGroupQuestions.Any(gq => gq.QGroup.QuestionaireGroups.Any(qg => qg.Questionaire.ID == id))).Future();

        var result = new QuestionaireConfig()
        {
            questionaires = questionaires.ToList(),
            qgroups = qgroups.ToList(),
            groups = groups.ToList(),
            gquestions = gquestions.ToList(),
            questions = questions.ToList(),
            options = options.ToList(),
        };
        RemoveNonJSON(result);
        return result;
    }
}

 

Viewing Questionnaire

There are also some templates used on every page to view or answer a particular questionnaire. In the managing screen the selected questionnaire is automatically previewed below, and updates live as its configuration changes.

 image

As shown above, one of the extra properties I add to questions is the “app_answer” property; this way I can just get the answer from the question itself.

 

Answering Questionnaire

image

The model inherits from APPModel and adds 5 short methods (4 for the buttons and one to load the selected questionnaire on demand).

The UI and code are really simple:

<p>
    Select a Questionnaire <select style="width:200px;"
        data-bind="
    options: qlist,
    value: selectedQID,
    optionsText: 'Name',
    optionsValue: 'ID',
    chosen: {}
    "></select>
</p>
<div class="btn-group">
    <button type="button" class="btn btn-default" data-bind="click: $root.resetAnswers">Reset Answers</button>
    <button type="button" class="btn btn-default" data-bind="click: $root.loadLastAnswers">Load Last Answers</button>
    <button type="button" class="btn btn-default" data-bind="click: $root.copyLastAnswers">Copy Last Answers</button>
</div>

<p>&nbsp;</p>
<div class="panel panel-default" data-bind="if: questionaires.selected">
    <div class="panel-body">
        User Name: <input class="form-control" data-bind="value: questionaires.selected().app_answerSet().UserName" />
    </div>
    <!-- ko template: { name: 'template-view-questionaire', data: questionaires.selected } --> <!-- /ko -->
</div>

Of interest, the comment above is not really a comment but a containerless Knockout template binding (no container DOM element).

 

Searching Answers

Remark only the LIKE and == operators on text fields are implemented in this sample.

 

I was looking for something which could represent a relatively flexible query with multiple AND / OR criteria. Unfortunately it appeared to me that letting the user build arbitrarily nested queries of arbitrary depth would lead to slow recursive SQL (with cursors). Instead I opted for 2 levels of nested query blocks, with OR at the top level and AND in the sub-blocks, or vice versa.

 Q8

Coding the page was relatively trivial. Of interest, I used DataTables to render the results in a grid and a Bootstrap modal to show individual results.

image

Also, I used separate Knockout models for the popup and the rest of the page:

// finally updating UI
ko.applyBindings(qmodel, $("#query")[0]);
ko.applyBindings(popup_model, $("#result_popup")[0]);

 

What’s more interesting is the SQL implementation of the search. Due to the complexity of the search I decided to write it as a SQL stored procedure (or sproc), instead of a C# query with Entity Framework. I pass the list of criteria blocks to SQL as a user-defined table type.

Here is the definition of the table type passed to the search sproc:

CREATE TYPE [question].[Criteria] AS TABLE(
    qgroupID int NOT NULL,
    questionID int NULL, -- question ID or null for ctype 0,1
    ctype int NOT NULL, -- 0:AS:UserName, 1:AS:LastModified, 2:A:Abool, 3:Atext, 4:Anumeric, 5:Adate, 6:Alist
    cop int NOT NULL, -- 0:LIKE, 1:IN, 2:==, 3:!=, 4:<, 5:<=, 6:>, 7:>=
    valueText nvarchar(max) NULL,
    valueBit bit NULL,
    valueNum numeric(18, 0) NULL,
    valueDate datetime2(7) NULL
)
GO

The qgroupID is an arbitrary query block number which is only used to group the criteria results together by query block.

The answer to a questionnaire is stored across multiple rows, as shown in this database diagram:

Q9

In my sproc I will have to match each answer with its question and criteria and check whether there is a match or not, represented as 0 for fail or 1 for success. Then I have to aggregate all the results applying the AND/OR logic displayed in the UI.

Below is the search stored procedure. For clarity’s sake I replaced the calculation which matches a single criterion with ‘1’, so as to highlight how I declare and use the custom table type (at the top) and how I aggregate the answers. The individual row matches are in a common table expression, and the select below aggregates them and returns the matching answer set IDs.

CREATE PROCEDURE [question].[Search]
    @isOrAnd bit = 1 -- 0: AND/OR, 1: OR/AND
    , @criterias question.Criteria READONLY
AS
BEGIN
    
    -- TODO: implement all the ctype/cop combination
    -- this CTE checks individual criteria against individual answers
    ;WITH Criterium AS (
        SELECT [AS].ID, Q.ID AS IDQ, Q.Name AS QName, [AS].LastModified, [AS].UserName, C.qgroupID IDG, (
        -- ==========================================================================
        1 -- match 1 criteria against 1 answer to one question here and return 1 or 0
        -- ==========================================================================
        ) AS Success
        FROM question.QAnswer A
        INNER JOIN question.QAnswerSets [AS] on A.SetID = [AS].ID
        INNER JOIN question.Questionaires Q ON Q.ID = [AS].QuestionaireID
        INNER JOIN @criterias C on A.QuestionID = C.questionID
    )
    -- do grouping and calculate final YES/NO answer
    SELECT ID, IDQ, QName, LastModified, UserName
    FROM (
        -- calculate success for all question group
        SELECT ID, IDQ, QName, LastModified, UserName
            , (CASE COUNT(CASE isAND WHEN 1 THEN 1 ELSE NULL END) WHEN 0 THEN 0 ELSE 1 END) isOrAnd -- 0 isAND means !isOrAnd
            , (CASE COUNT(CASE isOR WHEN 1 THEN NULL ELSE 1 END) WHEN 0 THEN 1 ELSE 0 END) isAndOr -- 0 !isOR means isAndOr
        FROM (
            -- calculate success of each question group
            SELECT ID, IDQ, QName, LastModified, UserName, IDG
                , (CASE COUNT(CASE Success WHEN 1 THEN 1 ELSE NULL END) WHEN 0 THEN 0 ELSE 1 END) isOr -- 0 Success means !isOR
                , (CASE COUNT(CASE Success WHEN 1 THEN NULL ELSE 1 END) WHEN 0 THEN 1 ELSE 0 END) isAnd -- 0 !Success means isAND
            FROM Criterium
            GROUP BY ID, IDQ, QName, LastModified, UserName, IDG
        ) Results
        GROUP BY ID, IDQ, QName, LastModified, UserName
    ) RA
    WHERE
        isOrAnd = (CASE @isOrAnd WHEN 1 THEN 1 ELSE NULL END)
        OR isAndOr = (CASE @isOrAnd WHEN 0 THEN 1 ELSE NULL END)
    ;
END

The individual criteria matching itself is just a gigantic nested (2-level) CASE statement looking at ctype (for the field matched) and cop (for the operator used).

Now this is all well and good, but calling this sproc was tricky too: EF doesn’t support custom table types, so I had to revert to the lower-level ADO.NET API, where custom table types are passed as DataTable. With the help of a few extension methods, calling this sproc and reading its result proved trivial:

[HttpPost]
public List<SearchResult> SearchAnswers(bool isOrAnd, [FromBody]List<SearchCriteria> criterium)
{
    using(var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString))
    using (var cmd = (SqlCommand)conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "question.Search";
        cmd.Parameters.Add(new SqlParameter("@isOrAnd", isOrAnd ? 1 : 0));
        cmd.Parameters.Add(new SqlParameter("@criterias", criterium.ToDataTable()));

        var adap = new SqlDataAdapter(cmd);
        var ds = new DataSet();
        adap.Fill(ds);

        var table = ds.Tables[0];
        return table.ToList<SearchResult>();
    }
}
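The ToDataTable() and ToList&lt;T&gt;() extension methods used above belong to the sample’s helper code and are not shown in the post. A minimal reflection-based sketch of what they might look like (the actual helpers may differ):

using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

public static class DataTableExtensions
{
    // Turn a list of POCOs into a DataTable whose columns match the public properties
    // (the column order must match the SQL table type when used as a TVP parameter).
    public static DataTable ToDataTable<T>(this IEnumerable<T> items)
    {
        var table = new DataTable();
        var props = typeof(T).GetProperties();
        foreach (var p in props)
            table.Columns.Add(p.Name, Nullable.GetUnderlyingType(p.PropertyType) ?? p.PropertyType);
        foreach (var item in items)
            table.Rows.Add(props.Select(p => p.GetValue(item, null) ?? DBNull.Value).ToArray());
        return table;
    }

    // Map each DataRow back to a POCO by matching column names to property names.
    public static List<T> ToList<T>(this DataTable table) where T : new()
    {
        var props = typeof(T).GetProperties().Where(p => table.Columns.Contains(p.Name)).ToList();
        var result = new List<T>();
        foreach (DataRow row in table.Rows)
        {
            var item = new T();
            foreach (var p in props)
                if (row[p.Name] != DBNull.Value)
                    p.SetValue(item, row[p.Name], null);
            result.Add(item);
        }
        return result;
    }
}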

 

Last few words

Well, I hope this rough explanation of my sample has whetted your appetite for Knockout, TypeScript and WebAPI. Hopefully it will also help you understand the source code better if you want to study the sample more in depth. Finally, I hope my T4 template for strongly typed proxy generation will stir some interest too.


Categories: .NET | Web

Metro App and Effect

Today I thought it was time I got better with the shader language (HLSL)!

One problem I have had with shaders so far is that most samples on the web use Effects. Sadly, not only are Effects not part of the DirectX SDK (or the Windows SDK, where it has been integrated as of Windows 8), but they are just not allowed in Metro apps, where you can’t use the runtime shader compiler, (allegedly ^^) for security reasons.

Thankfully Frank Luna came to the rescue! (Frank Luna is the author of the DirectX book I am currently learning from.)

He posted a .pdf on how to port an example from his book to Metro here; there is a section on Effects towards the bottom.

Just in case Frank Luna moves his files around, I also downloaded it and attached it below!

 

Also, in shaders the in / out vertex structures are tagged with semantic names which are part of a well-known predetermined set. A bit of Googling found the list of semantic names, good to know; here is the link on MSDN!

 

Lastly, in most samples I have seen or read so far (it’s not that many, granted! ^^) shader writers seem to have something against conditional statements.. :~

They use Effects to compile multiple shaders at once, depending on some condition (i.e. one effect = multiple shaders!!).

But in WinRT there are no Effects, and dynamic linkage is not supported either.

I decided I will write shaders with conditional statements!! Will see how it goes! ^^


Categories: WinRT | DirectX

Moved my WinRT binding

Just a quick note.

My DirectX WinRT C++/CX wrapper used to be at: http://directwinrt.codeplex.com/

It is now at: http://directxwinrt.codeplex.com/

Thanks CodePlex, it was really easy to correct the spelling mistake! :)
Furthermore, at least for now, CodePlex is redirecting the old URL to the new!



Hit Testing

Hit Testing

This time for my DirectX self-training I implemented hit testing, frustum culling and octrees.

In the figure above, the red triangle is the triangle that the user hit with the mouse. All the code is on CodePlex, at http://directwinrt.codeplex.com/, and the screenshot is from Sample2_D2D.

 

Basics

As I described in my previous blog entry, calling into C++ is costly, even with WinRT C++/CX. By that I mean crossing the ABI divide between .NET and C++ is (slightly) expensive, not that C++ itself is slow!

It’s all the motivation it took for me to port all my math code to C#.

Furthermore, hit testing needs to be done in code, i.e. there is no help from DirectX here.

The math being rather long to explain, I will just describe what hit testing and frustum culling are and outline the basic steps involved, and let the reader refer to the full source code on CodePlex and to Google for an explanation of the inner mathematical workings when one is needed.

 

But first, a word of warning,

 

Note on coordinate system

When one peruses the code, one might believe there are some elementary sign errors in my math, and while that could be, there are 2 initial sources of confusion to be aware of!

 

In 3D, DirectX uses a left-handed coordinate system (as described on MSDN), whereas in normal math courses a right-handed coordinate system is used!

coordinatesystem3d

 

Also, another source of confusion, while the Y direction goes up in 3D (as seen above) it goes down in 2D!

 

Ray

The Ray struct represents a half-line with a point of origin and a direction:

public struct Ray
{
    public Vector3F Origin;
    public Vector3F Direction;

    public static Ray operator*(Matrix4x4F m, Ray r) { /* */ }
}

A space transformation matrix can also be applied to it, to change its coordinate system to that of a particular object of interest.

 

When the user clicks on the screen, the code can ask the camera to create a shooting ray from the camera location in the direction of the point clicked on screen:

public class Camera : ModelBase
{
    public Ray RayAt(DXContext ctxt, float x, float y) { /* */ }
    //...
}

Then the code can check whether this ray intersects any geometry and how far that geometry is from the ray’s origin. This is the process of “hit testing”: testing whether, when the user clicked the screen, he “hit” a geometry under the mouse.

 

Rays have various methods to calculate whether they intersect some objects, and the distance to a given mesh (list of triangles):

public struct Ray
{
    public bool IntersectBox(Box3D box) { /* */ }
    public bool IntersectSphere(Vector3F p, float radius) { /* */ }
    public bool IntersectTriangle(Triangle3D triangle, out float dist) { /* */}

    public HitMeshResult IntersectMesh(IEnumerable<Triangle3D> mesh) { /* */ }
    public struct HitMeshResult
    {
        public bool Hit;
        public float Distance;
        public int Triangle;
    }
    //...
}

Of course one should make sure that the ray and the mesh are in the same coordinate system!
Often the mesh is loaded from a model and not edited after that; instead a world matrix transformation is applied to it. In such a case one should multiply the Ray by the inverse of the object’s world transformation (provided the space is not scaled) before calculating intersection distances.
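Putting those pieces together, a typical hit-test call site looks like this (a sketch: camera, ctxt, worldTransform, mesh, the mouse coordinates and SelectTriangle are placeholders, and Matrix4x4F.Invert() is used the same way as in the parallel example further down):

// Shoot a ray from the camera through the clicked point, then bring it into
// the mesh's model space instead of transforming every vertex to world space.
Ray worldRay = camera.RayAt(ctxt, mouseX, mouseY);
Ray modelRay = worldTransform.Invert() * worldRay;   // only valid if the transform has no scaling

var hit = modelRay.IntersectMesh(mesh);
if (hit.Hit)
{
    // hit.Triangle is the index of the triangle hit, hit.Distance how far it is from the origin
    SelectTriangle(hit.Triangle, hit.Distance);
}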

 

Doing an IntersectMesh on a big mesh (i.e. a mesh with lots of triangles) is expensive (the method is going to compute an intersection distance for each and every triangle of the mesh). To improve that, various techniques can be employed:

  • First check that the bounding box of the mesh is hit (it’s a quick eliminating test)
  • Place the mesh in some kind of spatial index and only test meshes that are likely to be hit; I detail this idea below in my section about OctTrees.
  • Index the triangles of the mesh itself in a spatial index, often an AABB tree (AABB: Axis-Aligned Bounding Box), to quickly find the triangles that should be hit tested instead of enumerating them all. This I didn’t implement at this time. Remark: those particular indexes might be what makes AABB tree / OctTree examples on the internet rather complicated: the extra mesh vertex info.

 

OctTree

AABB trees, octrees, quadtrees: they are all spatial search trees; 3D space for the OctTree, 2D space for the QuadTree, and any number of dimensions for the AABB tree. The only difference between them is the way a tree node divides the space amongst its children.

While we are on the topic, I just wanted to mention that BSP trees are different: they make an ordered tree of objects (instead of partitioning space).

A QuadTree node divides its 2D space amongst its child nodes into 4 equal-size rectangles, an OctTree divides 3D space into 8 equal-size octants, and an AABB tree divides it in 2, along the longest axis for that node. Below is an example quadtree:

quadtree

Seeing that they all behave more or less the same, I have a SpatialIndex&lt;TItem, TBounds&gt; base class; subclasses only need to define how they partition the space. I have only implemented QuadTree and OctTree so far, but I did take a page from the AABB tree: I don’t always divide space into 4 or 8 (respectively) but apply some heuristic. Consequently inserts might be more costly… But it should improve memory use and keep the same or better query time, and it will also improve some inserts by making more meaningful boxes (i.e. equally sized, if possible).

Here is the relevant part of the implementation

public abstract class SpatialIndex<TItem, TBounds> : ICollection<TItem>
    where TItem : IHasBounds<TBounds>
    where TBounds : IBounds
{
    public IEnumerable<TItem> Query(TBounds r) { /**/ }
    public IEnumerable<TItem> Query(Predicate<TBounds> intersects) { /**/ }
    //...
}

public class OctTree<T> : SpatialIndex<T, Box3D>
    where T : IHasBounds<Box3D>
{ /*  */ }

public class QuadTree<T> : SpatialIndex<T, Box2D>
    where T : IHasBounds<Box2D>
{ /* */ }

public interface IHasBounds<T>
    where T : IBounds
{
    T Bounds { get; }
}

public interface IBounds
{
    bool Contains(IBounds b);
    bool Intersects(IBounds b);
    float MaxLength { get; }
}

As you can see, both OctTree and QuadTree are collections of items with bounds.

And they have (efficient) methods to query the objects they contain, with either an intersection box or an intersection predicate.
Remark the tree is thread-safe for reading and querying.
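As a quick usage sketch (SceneItem is a placeholder type implementing IHasBounds&lt;Box3D&gt;, and a default constructor is assumed; the rest follows the interfaces above):

// Build the index once (or as items are added to the scene)...
var tree = new OctTree<SceneItem>();
foreach (var item in sceneItems)
    tree.Add(item);                                  // ICollection<SceneItem>.Add

// ...then query it cheaply, either with a bounding box
IEnumerable<SceneItem> inBox = tree.Query(someBox);
// or with an arbitrary intersection predicate (used below for rays and frustums)
IEnumerable<SceneItem> rayCandidates = tree.Query(b => ray.IntersectBox(b));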

 

Remark this seems simple, because it is!
Most OctTree/AABB tree samples on the internet are complicated because they are used to index a mesh’s triangles, to further optimize hit testing by only testing some triangles of a mesh instead of all of them. This is something I left for later.

 

Finally, the OctTree brought me to the next enhancement. When one renders a scene, we want to render only what’s needed (for performance! DirectX can sort out what’s not visible, but it costs some time).

 

Frustum culling

A frustum is a portion of a solid that lies between 2 parallel planes cutting it.

piramid

In 3D graphics the frustum refers to the volume that is viewed by the camera. And frustum culling means only drawing what is in that volume, culling the rest.

Again, one can query the camera for the frustum. One can even do rectangle selection on the screen by passing a 2D box.

public class Camera
{
    public Frustum GetViewFrustum(DXContext ctxt, Box2D area) { /***/ }
    public Frustum GetViewFrustum() { /***/ }
    //...
}

Because Frustum defines intersection methods, it can be used to query an OctTree for visible objects and render only what’s needed, as in:

public override void Render(D3DGraphic g)
{
    // setup graphic...
    var frustum = CameraController.Camera.GetViewFrustum();
    foreach (var item in octTree.Query(b => frustum.Intersects(b)))
    {
        // draw the item ...
    }
}

 

Hit Testing a scene in … Parallel

Now that we’ve got an OctTree to index items by location and some methods to test hits, there is one last optimization that comes to mind when doing hit testing: how about parallelizing it?

The idea is to hit test each mesh against the hit ray in its own thread and aggregate the results at the end. There is one special trick: we need to return some data!
There is a Parallel.ForEach overload for that! Basically it passes a thread-local value around at each iteration to aggregate the results, and the code should provide a final aggregation method to merge all those per-thread results when all the elements have been tested.

Here is a relevant code fragment:

void HitTest()
{
    // ...
    // the ray of interest
    var ray = RayAt(ctxt, x, y);

    // will be used to coordinate threads when aggregating final results
    object hitlock = new object();
    // value that are calculated
    float min = 0;
    hitMesh = null;
    // let's hit, REMARK the octTree query!
    Parallel.ForEach<Tuple<HitItem, Ray.HitMeshResult>>(octTree.Query(b => ray.IntersectBox(b)),
        () => null, // original result
        (it, state, local) =>
        {
            // convert ray to mesh coordinate,
            // instead of converting ALL vertices to world transform!!!
            var mray = it.Transform.Invert() * ray;
            var result = mray.IntersectMesh(EnumSphereTriangle());
            // no hit? return current result
            if (!result.Hit)
                return local;

            // hit!, now let's do (thread local) aggregate

            // no previous result
            if (local == null)
                return Tuple.Create(it, result);
            // merge with previous result
            if (result.Distance < local.Item2.Distance)
                return Tuple.Create(it, result);
            return local;
        },
        r =>
        {
            if (r == null)
                return;
            lock (hitlock)
            {
                // now that all threads have completed, merge their results (keep the nearest hit)
                if (hitMesh == null || r.Item2.Distance < min)
                {
                    min = r.Item2.Distance;
                    hitMesh = r.Item1;
                    iHitTriangle = r.Item2.Triangle;
                }
            }
        }
    );
}        

 

 

Conclusion

This time I described hit testing, frustum culling and OctTrees, which help make those operations faster by indexing the space. I provided some code fragments to show how they are used.

There is also a fully functional hit test sample in the CodePlex repository, http://directwinrt.codeplex.com/: Sample2_HitTesting.

See you next time! :)


Categories: C# | DirectX

D2D Progress

For my DirectX WinRT wrapper (now on CodePlex: http://directwinrt.codeplex.com/) I took a break from D3D, as I had some problems with it (more on that later), and implemented most of D2D; it was easy! ^^

And I also made the initialization even easier and more flexible.

Screenshot of Sample_D2D

So today I will write a quick how-to-start (with D2D) and about my latest D3D breakthrough (which is just some C++/CX – C# technicality).

 

Initializing DirectX Graphics

To start DirectX graphics (with my API) one just initializes a DXContext and sets a target, like so:

var ctxt = new DXContext();
ctxt.Target = (IRenderTarget) ...;

There are 3 types of target to choose from (so far):

DXTargetSwapPanelOrWindow;
DXTargetImageSource;
Texture2D;

The target can even be changed while drawing (for example, draw first on a Texture2D, which can then be used in the final scene).

DXTargetSwapPanelOrWindow wraps a SwapChainBackgroundPanel in a XAML application.
Of interest, DXUtils.Scenes.ScenePanel initializes DirectX, creates a target and calls render on every frame.

Then one can draw, for each frame, with pseudo-code like this:

void Draw(DXContext ctxt)
{
    ctxt.Render = true;
    ctxt.Clear(Colors.White);

    var g3 = new D3DGraphic(ctxt);
    g3.DrawSomething(...);

    var g2 = new D2DGraphic(ctxt);
    g2.DrawSomething(...);

    ctxt.Render = false;
}

 

D2D

Remark at this stage one can download the source (http://directwinrt.codeplex.com/) and have a look at the D2D sample: Sample2_D2D.

 

This weekend and week I wrapped most of the D2D API. To be precise I wrapped the following:
- geometries, brushes, bitmaps, stroke styles, text formats, text layouts, transforms

(Missing, off the top of my head, would be 2D effects, glyph runs and lots of DWrite.)

 

Initialization

When drawing a scene (whether D2D or D3D), for performance reasons one should initialize all items first and just call the draw primitives while rendering.

For example, let’s create a few geometries and brushes:

public MyD2DScene()
{
    bImage = new BitmapBrush { Bitmap = new Bitmap(), };
    bImage.Bitmap.CreateWic("Assets\\space-background.jpg");

    bYRB = new LinearGradientBrush
    {
        StartPoint = new Point(100, 100),
        EndPoint = new Point(800, 800),
        GradientStops = new []
        {
            new GradientStop { position = 0, color = Colors.Yellow },
            new GradientStop { position = 0.4f, color = Colors.Red },
            new GradientStop { position = 1, color = Colors.Blue},
        },
    };

    gPath = new PathGeometry();
    using (var sink = gPath.Open())
    {
        sink.BeginFigure(new Point(), FIGURE_BEGIN.FILLED);
        sink.AddLines(new[] {
            new Point(25, 200),
            new Point(275, 175),
            new Point(50, 30),
        });
        sink.EndFigure(FIGURE_END.OPEN);
    }

    //......
}

 

In that scene snippet I created an image brush from a resource image, a gradient brush and a simple path geometry.

 

Rendering

Now I can render all my items with just a few drawing commands, like so (for the geometry):

public void Render(DXContext ctxt)
{
    var g = new D2DGraphics(ctxt);
    g.Clear(Colors.Beige);

    g.Transform = DXMath.translate2d(pSkewedRect);
    g.FillGeometry(gPath, bImage);
    g.DrawGeometry(gPath, bYRB, 12, null);
}

 

D3D

Now that was easy, so I got back to trying to solve my dissatisfaction with my current implementation of D3D: mostly that the C# developer is (currently) limited to the vertex buffers that I hard-coded in the API.

And then I had a breakthrough: let’s create a native pointer wrapper and write some code on the C# side that makes it strongly typed.

My first attempt looked like that:

public ref class NativePointer sealed
{
private:
    uint32 sizeoft, capacity;
    void* data;

internal:
    property void* Ptr { void* get(); }

public:
    NativePointer(uint32 sizeOfT);
    virtual ~NativePointer();

    property Platform::IntPtr Data { Platform::IntPtr get(); }
    property uint32 SizeOfT { uint32 get(); }
    property uint32 Capacity;
    void Insert(uint32 offset, uint32 count);
    void Remove(uint32 offset, uint32 count);
};

That looked promising, it even compiled! But then… WTF!!! Platform::IntPtr doesn’t cross the ABI!!
Damn you Microsoft!

Then I had a breakthrough, what about… size_t ?!

It’s a perfectly ordinary type, except for the little twist that it projects to int32 when compiling for x86 and int64 when compiling for x64! It worked just fine, sweet!

So this is the change:

property size_t Data { size_t get(); }

On the C# side I was originally hoping to have a generic Pointer<T> class and finally use some unsafe C#.

Well, I did use unsafe C#, but I couldn’t compile code like this:

public unsafe static T Get<T>(IntPtr p, int pos)
{
    T* pp = (T*)p;
    return pp[pos];
}

The compiler returns the error

Cannot take the address of, get the size of, or declare a pointer to a managed type ('T')

But it did accept:

public unsafe static int GetInt(IntPtr p, int pos)
{
    int* pp = (int*)p;
    return pp[pos];
}

Sweet…

In the end I created an abstract BasePointer&lt;T&gt; and a .tt template that generates all the concrete specializations that I need!
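Just to give an idea of the pattern (the names and members below are my sketch, not the actual library API): the abstract base holds the native address, and the .tt template stamps out one concrete class per element type so the unsafe cast always uses a concrete type.

using System;

// Sketch: abstract base + per-type generated specializations (compile with /unsafe).
public abstract class BasePointer<T>
{
    protected readonly IntPtr Data;   // native buffer address exposed by the C++/CX wrapper
    protected BasePointer(IntPtr data) { Data = data; }
    public abstract T this[int pos] { get; set; }
}

// What one generated specialization (out of the .tt template) could look like:
public unsafe class FloatPointer : BasePointer<float>
{
    public FloatPointer(IntPtr data) : base(data) { }
    public override float this[int pos]
    {
        get { return ((float*)Data.ToPointer())[pos]; }
        set { ((float*)Data.ToPointer())[pos] = value; }
    }
}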

Now I just have to implement a class like XNA’s VertexDeclaration and I will get a (reasonably?) solid base to go on…

And also rewrite all my buffers to use this native pointer class.

That’s it for today!
And remember: http://directwinrt.codeplex.com/


Categories: .NET | DirectX | WinRT | C#

DirectX and WinRT continued

models

temple

Here are my latest developments in writing a clean WinRT component exposing a clean yet complete DirectX D3D (and maybe D2D as well) API to C#.

There will be a (not so) small part about WinRT C++/CX and generics / templates, and the rest will be about the D3D API and my component so far.
I can already say that the DirectX initialization and drawing have become even simpler than in my previous iteration, while being more flexible and closer to the native DirectX API!

Finally, I want to say that my main learning material (apart from Google, the intertubes, etc.) is Introduction to 3D Game Programming with DirectX 11 by Frank D. Luna. His source code can be found at D3Dcoder. My samples are loosely inspired by his; I wrote my own math lib, for example.

 

1. Exposing C++ array to C#

In DirectX there are many buffers; a shape’s vertices are a typical one. One wants to expose a strongly typed array which can be updated from C#, with unrestricted access to the underlying data pointer for the native code as well, of course.

Platform::Collections::Vector&lt;T&gt; wouldn’t do. Maybe it’s just me, but I didn’t see how to access the buffer’s pointer, plus it has no resize() method. Platform::Array&lt;T&gt; is fixed size as far as C# code is concerned.

I decided to roll my own templated class.

1.1. Microsoft templated collections

The first problem is that it’s not possible to create generic definitions in C++/CX. One can use templates, but they can’t be public ref classes, i.e. they can’t be exposed to non-C++ code. But there is a twist: it’s possible to expose concrete implementations of Microsoft’s C++/CX templated interfaces.
There are a few special templated interfaces with a particular meaning in WinRT: the namespace Windows::Foundation::Collections contains interfaces that will automatically be mapped to generic collection types in the .NET runtime.

For instance by defining this template:

template <class T>
ref class DataArray sealed : Windows::Foundation::Collections::IVector<T>
{
public:
    // implementation of IVector<T>

internal:
    std::vector<T> data;
};

I have a class that I can return in lieu of IVector&lt;T&gt; (which will be wrapped into an IList&lt;T&gt; by the .NET runtime), and I can directly manipulate its internal data, even get a pointer to it (with &data[0]).

1.2. Concrete implementation

This is a first step, but I need to provide concrete implementations. Let’s say I want to expose the following templated class:

template <class T>
ref class TList
{
public:
    TList() : data(ref new DataArray<T>()) {}
    property IVector<T>^ List { IVector<T>^ get() { return data; } }
    void Resize(UINT n) { data->data.resize(n); }

internal:
    DataArray<T>^ data;
};

I can’t make it public, but I can write a concrete implementation manually as I need it, and simply wrap an underlying template, as in:

public ref class IntList sealed
{
public:
    IntList() {}
    property IVector<int>^ List { IVector<int>^ get() { return list.List; } }
    void Resize(UINT n) { list.Resize(n); }

internal:
    TList<int> list;
};

But this quickly gets old!
What if I use an old dirty C trick, like… a MACRO! I know, I know, but bear with me and behold!

#define TYPED_LIST(CLASSNAME, TYPE)\
public ref class CLASSNAME sealed\
{\
public:\
    CLASSNAME() {}\
    property IVector<TYPE>^ List { IVector<TYPE>^ get() { return list.List; } }\
    void Resize(UINT n) { list.Resize(n); }\
internal:\
    TList<TYPE> list;\
};
TYPED_LIST(Int32List, int)
TYPED_LIST(FloatList, float)
TYPED_LIST(UInt16List, USHORT)

At the end of this snippet I have declared 3 strongly typed “list” classes in 3 lines!
All the code is just a no-brainer simple wrapper and it will also be easy to debug, as the debugger will immediately step inside the template implementation!

That’s how I implemented all the strongly typed structures I need for this API. And I can easily add new ones as I need them, in just a single line, as you can see!! ^^

 

2. The DXGraphic class

The BasicScene class from my previous blog post was quickly becoming a point of contention as I was trying to extend my samples’ functionality. In the end I had a breakthrough: I dropped it and created a class called DXGraphic, which is really a wrapper around ID3D11DeviceContext1 and exposes drawing primitives, albeit in a simpler (yet, where I could, just as complete) fashion.

All other classes are to be consumed by it while drawing. Here is what the current state of my native API looks like so far:

DXBaseAPI

One just creates a DXGraphic and feeds it drawing primitives. For those who are new to DirectX, it’s a good time to introduce the DirectX rendering pipeline, as described on MSDN.

dxpipeline

The pipeline is run by the video card and processes a stream of pixels. Most of the DirectX API is used to set up data for this pipeline: vertices, textures, shader variables (constant buffers), etc., which must be copied from CPU memory to video card memory. They are then processed by the shaders, which are simple yet massively parallelized programs that process each individual vertex and turn them into pixels. In a way they are the real drawing programs; the rest is set-up.

At least 2 of these shaders must be provided by the program: the vertex shader and the pixel shader. The vertex shader converts all the vertices into the same normalized coordinate system, a box of size 1 (using the model, view and projection matrices), and the pixel shader outputs the color for a given pixel.

2.1. The classes in the API (so far)

Shaders (pixel and vertex so far) are loaded by the PixelShader and VertexShader classes. I have used shaders found in Frank Luna’s samples so far and haven’t written my own. Here is the MSDN HLSL programming guide, and here is an HLSL tutorials web site.

The PixelShader also takes a VertexLayout class argument, which describes to the shader the C++ structure in the buffer. I’m only using the BasicVertex class so far. In the (strongly typed buffer class) CBBasicVertex, CBBasicVertex.Layout returns the layout for BasicVertex.

I have some vanilla state classes; RasterizerState can turn wireframe on/off and set up face culling.

BasicTexture can load a picture.

Finally, shapes are defined by one (or, optionally, many) vertex data buffers and (optionally) an index buffer. I use strongly typed ones: VBBasicVertex, IBUint16/32. They can be created manually, or I have a helper class, MeshLoader, to create some.

One of the samples updates the vertex buffer from C# on each rendered frame!

MeshLoader will also return whether the shape is in a right- or left-handed coordinate system. DirectX uses left-handed, but some models are right-handed. The ModelTransform class takes care of that, as well as scaling, rotation and translation.

To draw, one sets up shaders and states, then enumerates all shapes and, for each one, sets its texture and geometry and calls draw.

Also, one can pass variables to shaders (i.e. computation parameters) by using strongly typed constant buffers. A few are defined: CBPerFrame (contains lighting info) and CBPerObject (contains the model, view and projection matrices).

2.2. The context watcher

There is a private class used by almost all classes in this API: ContextWatcher.

Most classes in this API have buffers or data that are bound to the DirectX context and need to be released when the context is destroyed, recreated when it is recreated, etc. This class takes care of that synchronization. It is important to understand it before hacking this library.

 

3. Input and Transform

3.1. Input

To handle input I use a couple of methods / events from the CoreWindow class, which are wrapped in my InputController class.

GetKetStates(params VirtualKeys[]) will use CoreWindow.GetAsyncKeyState().

GetTrails() returns the latest pointer-down events. On Windows 8, mouse, pen, etc. have been superseded by the more generic concept of a "pointer" device, as explained on MSDN.

The CameraController will use the InputController to move the camera and/or model around.

The HOME key resets the rotation, and LEFT CONTROL switches to moving the model instead of the camera. The MOUSE WHEEL moves the camera on the Z axis. A mouse drag rotates the camera or the model (if LEFT CONTROL is held) using the following rotation:

MouseDrag

i.e. if M1 (x1, y1, 0) is the mouse-down point, M2 (x2, y2, 0) is the next drag point and O (x1, y1, –screenSize) is a virtual point above M1,
the camera controller calculates the rotation that transforms OM1 into OM2 and applies its opposite to the camera. The opposite, because dragging the world right is like moving the camera left.
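As a hedged sketch of that calculation (the real CameraController code may differ; only DXMath.toQuaternion and the float3 fields shown elsewhere in this post come from the library, the rest is plain inlined vector math):

// M1 = (x1, y1, 0) mouse down, M2 = (x2, y2, 0) current drag point, O = (x1, y1, -screenSize)
var om1 = new float3 { x = 0, y = 0, z = screenSize };              // OM1 = M1 - O
var om2 = new float3 { x = x2 - x1, y = y2 - y1, z = screenSize };  // OM2 = M2 - O

// rotation axis = OM1 x OM2 (cross product), rotation angle = angle between OM1 and OM2
var axis = new float3
{
    x = om1.y * om2.z - om1.z * om2.y,
    y = om1.z * om2.x - om1.x * om2.z,
    z = om1.x * om2.y - om1.y * om2.x,
};
double dot = om1.x * om2.x + om1.y * om2.y + om1.z * om2.z;
double len1 = Math.Sqrt(om1.x * om1.x + om1.y * om1.y + om1.z * om1.z);
double len2 = Math.Sqrt(om2.x * om2.x + om2.y * om2.y + om2.z * om2.z);
float degrees = (float)(Math.Acos(dot / (len1 * len2)) * 180.0 / Math.PI);

// apply the opposite rotation to the camera (dragging the world right = moving the camera left)
var rotation = DXMath.toQuaternion(axis.x, axis.y, axis.z, -degrees);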

 

3.2. Coordinate System

Initially I was keeping the camera and model transforms as matrices (along the lines of this MSDN sample). Unfortunately, when I introduced mouse handling to drag the model, continuously multiplying the model matrix by mouse transform matrices introduced unsightly numerical errors, particularly shear transformations.

 shear

After much tinkering I settled on representing the model transformation as follows:

ModelTransform = Translation * Rotation (as quaternion) * Scaling

One can multiply quaternions together; there will be some small numerical error, but the result remains a rotation!

Quaternions can be created with the DXMath class:

public static quaternion toQuaternion(float x, float y, float z, float degree);

(x,y,z) being the axis of rotation.
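For example, a quarter turn around the Y (up) axis would be created like this (a hedged usage sketch):

var quarterTurn = DXMath.toQuaternion(0, 1, 0, 90);   // axis (0, 1, 0), 90 degrees

Composing successive drag rotations then just multiplies such quaternions together; the product may drift slightly in magnitude, but it can simply be re-normalized and always stays a pure rotation, unlike a long chain of matrix products.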

About quaternion math (as I didn’t learn it at school :~) I found the following links:
http://www.idevgames.com/articles/quaternions
http://willperone.net/Code/quaternion.php

In the end all the transformations are nicely wrapped in a few classes in Utils\DirectX:

transforms

Camera is the typical DirectX camera.

Model is the typical DirectX model transform decomposed into Translation, Rotation and Scaling. There is also a LeftHanded property, as the transform should be handled differently depending on whether the model's coordinates are in a left-handed or right-handed space.

The Transforms class is a utility class to create transform matrix.

CenteredRotationTransform is used to rotate the model around a point that can be moved.

 

4. Wrapping it all together

To show what the final code looks like, here is the slightly simplified code that sets up the scene with the columns (2nd screenshot).

Even if it’s long it’s much simpler than the C++ version, and just as versatile!

public static Universe CreateUniverse4(DXContext ctxt = null, SharedData data = null)
{
    ctxt = ctxt ?? new DXContext();
    data = data ?? new SharedData(ctxt);

    var box = new BasicShape(ctxt, MeshLoader.CreateBox(new float3 { x = 1, y = 1, z = 1 }));
    var grid = new BasicShape(ctxt, MeshLoader.CreateGrid(20, 30, 20, 20));
    var gsphere = new BasicShape(ctxt, MeshLoader.CreateGeosphere(1, 2));
    var cylinder = new BasicShape(ctxt, MeshLoader.CreateCylinder(0.5f, 0.3f, 3, 20, 20));

    var floor = data.Floor;
    var bricks = data.Bricks;
    var stone = data.Stone;

    float3 O = new float3 { z = 30 };

    var u = new Universe(ctxt)
    {
        Name = "Temple",
        Background = Colors.DarkBlue,
        Camera =
        {
            EyeLocation = DXMath.vector3(0, 0.0f, 0.0f),
            LookVector = DXMath.vector3(0, 0, 100),
            UpVector = DXMath.vector3(0, 1, 0),
            FarPlane = 200,
        },
        CameraController =
        {
            ModelTransform = { Origin = O },
        },
        PixelShader = data.TexPixelShader,
        VertexShader = data.BasicVertexShader,
        Bodies =
        {
            new SpaceBody
            {
                Location = O,
                Satellites =
                {
                    new SpaceBody
                    {
                        Shape = grid,
                        Texture = floor,
                    },
                    new SpaceBody
                    {
                        Scale = DXMath.vector3(3,1,3),
                        Location = new float3 { y = 0.5f },
                        Shape = box,
                        Texture = stone,
                    },
                }
            },
        },
    };
    var root = u.Bodies[0];
    for (int i = 0; i < 5; i++)
    {
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(-5, 4, -10 + i * 5),
            Shape = gsphere,
            Texture = stone,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(+5, 4, -10 + i * 5),
            Shape = gsphere,
            Texture = stone,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(-5, 1.5f, -10 + i * 5),
            Shape = cylinder,
            Texture = bricks,
        });
        root.Satellites.Add(new SpaceBody
        {
            Location = DXMath.vector3(+5, 1.5f, -10 + i * 5),
            Shape = cylinder,
            Texture = bricks,
        });
    }

    u.Reset();
    return u;
}

And here is the render method that renders all the samples so far:

public void Render(DXGraphic g)
{
    g.Clear(Background);

    g.SetPShader(PixelShader);
    g.SetVShader(VertexShader);
    g.SetStateSampler(sampler);
    g.SetStateRasterizer(RasterizerState);

    g.SetConstantBuffers(0, ShaderType.Pixel | ShaderType.Vertex, cbPerObject, cbPerFrame);

    CameraController.Camera.SetProjection(g.Context);
    foreach (var item in GetBodies())
    {
        if (item.Shape == null)
            continue;

        cbPerObject.Data[0] = new PerObjectData
        {
            projection = CameraController.Camera.Projection,
            view = CameraController.Camera.View,
            model = DXMath.mul(CameraController.ModelTransform.Transform, item.FinalTransform.Transform),
            material = item.Material,
        };

        // follow the shape's origin through model -> view -> projection
        // (these values are not used below; presumably kept for debugging/inspection)
        var p0 = new float3().TransformPoint(item.FinalTransform.Transform);
        var p1 = p0.TransformPoint(CameraController.Camera.View);
        var p2 = p1.TransformPoint(CameraController.Camera.Projection);

        cbPerObject.UpdateDXBuffer();

        g.SetTexture(item.Texture);

        g.SetShape(item.Shape.Topology, item.Shape.Vertices, item.Shape.Indices);
        g.DrawIndexed();
    }
}

 

5. Performance remarks

On my machine the app spends about 6 seconds loading textures at start-up. However, if I target x64 when compiling (my machine is an x64 machine, but the project targets x86 by default) the start-up drops to about 0.2 seconds!!!

Also, in 32-bit mode the app freezes every now and then while catching a C++ exception deep down in the .NET runtime–WinRT binding code (apparently something to do with the DirectArray), but on x64 it runs smoothly.


Categories: WinRT | DirectX | .NET

DirectX made simple

With Windows 8, WinRT and C++/Cx, I think the time to write an elegant C# / XAML app using some DirectX rendering in C++ has finally come! Thanks WinRT! :-)

Here I just plan to describe my attempt at learning DirectX and C++ and integrating them nicely into a C# XAML app.

My first exercise was to create a simple DirectX "Context" as a WinRT C++/Cx component that can target multiple DirectX hosts (SwapPanel, CoreWindow, ImageSource), render an independent scene, and be initialized and used from C#.

Note this is a metro app. It requires VS2012 and Windows 8.

 

First the appetizer: here is my simple scene:

SimpleSample

And it is created with the code below, mostly one giant C# 5 object initializer (with async inside!):

public class Universes
{
    public async static Task<Universe> CreateUniverse1(DXContext ctxt = null)
    {
        ctxt = ctxt ?? new DXContext();

        var cubetex = await CreateSceneTexture(ctxt);

        var earth = new BasicTexture(ctxt);
        await earth.Load("earth600.jpg");

        var cube = new BasicShape(ctxt);
        cube.CreateCube();
        var sphere = new BasicShape(ctxt);
        sphere.CreateSphere();

        var u = new Universe(ctxt)
        {
            Scene =
            {
                Background = Colors.Aquamarine,
                Camera =
                {
                    EyeLocation = dx.vector3(0, 0.0f, 0.0f),
                    LookDirection = dx.vector3(0, 0, 100),
                    UpDirection = dx.vector3(0, 1, 0),
                }
            },
            Items =
            {
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(10, 10, 10).RotationY(36 * t),
                    FLocation = t => dx.vector3(0, 0, 50),
                    SceneItem =
                    {
                        Shape = cube,
                        Texture = cubetex,
                    }
                },
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(8, 6, 8).RotationY(96 * t),
                    FLocation = t => new float3().Translate(15, 0, 0).RotationY(24 * t).Translate(0, 15, 50),
                    SceneItem =
                    {
                        Shape = sphere,
                        Texture = earth,
                    },
                    Items =
                    {
                        new SpaceBody(ctxt)
                        {
                            FTransform = t => dx.identity().RotationY(84 * t),
                            FLocation = t => new float3().Translate(12, 0, 0).RotationY(24 * t),
                            SceneItem =
                            {
                                Shape = sphere,
                                Texture = earth,
                            }
                        }
                    },
                },
                new SpaceBody(ctxt)
                {
                    FTransform = t => dx.identity().Scale(6, 5, 6).RotationY(48 * t),
                    FLocation = t => new float3().Translate(-15, -15, 55),
                    SceneItem =
                    {
                        Shape = sphere,
                    }
                },
            },
        };

        return u;
    }

    public async static Task<BasicTexture> CreateSceneTexture(DXContext ctxt)
    {
        var tex = new BasicTexture(ctxt);
        tex.Create(300, 300);
        ctxt.SetTarget(tex);

        var scene = new Scene(ctxt);
        scene.Background = Windows.UI.Colors.DarkGoldenrod;
        scene.Add(new DXBase.Scenes.CubeRenderer());
        scene.Add(new DXBase.Scenes.HelloDWrite());
        await scene.LoadAsync().AsTask();
        scene.RenderFrame();
        return tex;
    }
}

There is much to say about this sample, but I won't go into the details of DirectX too much (this is a very basic sample as far as DirectX is concerned, and the source code is available at the bottom); instead I will mostly talk about C++/Cx – C# communication.

 

1. The main DirectX C++/Cx components

1.1. DXContext

First there is the DirectX context; here is an extract of its important methods and properties:

  public ref class DXContext sealed :  Windows::UI::Xaml::Data::INotifyPropertyChanged
  {
  public:
      DXContext();

      // Target the top level CoreWindow
      void SetTarget();
      // Target the argument top level SwapChainBackgroundPanel
      void SetTarget(Windows::UI::Xaml::Controls::SwapChainBackgroundPanel^ swapChainPanel);
      // Target the argument ImageSource
      void SetTarget(Windows::UI::Xaml::Media::Imaging::SurfaceImageSource^ image, int w, int h);
      // Target a texture
      void SetTarget(DXBase::Utils::BasicTexture^ texture);

      property float Dpi;
      property Windows::Foundation::Size Size;
      property Windows::Foundation::Rect Viewport;

      DXBase::Utils::BasicTexture^ Snapshot();

      // internal (shared) DirectX variables
  internal:
      // device independent resources
      Microsoft::WRL::ComPtr<ID2D1Factory1> m_d2dFactory;
      Microsoft::WRL::ComPtr<IDWriteFactory1> m_dwriteFactory;
      Microsoft::WRL::ComPtr<IWICImagingFactory2> m_wicFactory;

      // device resource
      D3D_FEATURE_LEVEL m_featureLevel;
      Microsoft::WRL::ComPtr<ID3D11Device1> m_d3dDevice;
      Microsoft::WRL::ComPtr<ID3D11DeviceContext1> m_d3dContext;
      Microsoft::WRL::ComPtr<ID2D1Device> m_d2dDevice;
      Microsoft::WRL::ComPtr<ID2D1DeviceContext> m_d2dContext;

      // target and size dependent resources
      DirectX::XMFLOAT4X4 mDisplayOrientation;
      Microsoft::WRL::ComPtr<ID3D11RenderTargetView> m_renderTargetView;
      Microsoft::WRL::ComPtr<ID3D11DepthStencilView> m_depthStencilView;
};

DXContext is a ‘public ref class’, meaning it's a shared component (it can be used from C#). It must be sealed (unfortunately... except for those inheriting from DependencyObject, all C++ public ref classes must be sealed, as explained here, in the inheritance section).

All the public members are accessible from C#; the most important are the overloaded SetTarget() methods that set the DirectX rendering target. The target can be changed at any time (although it seems to be an expensive operation; I think rendering to a texture should probably be done another way, once I know better).

Finally it holds all the DirectX device information as internal variables. These can't be public or protected as they are not WinRT components. But, being internal, they can be accessed by other components in the library; that's how the scene can render. I tried to trim the fat down to the minimum number of DirectX variables that such an object should contain.

Note that plain C++ doesn't have ‘internal’ visibility; this is a C++/Cx extension and it means the same thing as in C#, i.e. members are accessible to all code in the same library.

ComPtr<T> is a shared COM pointer; it takes care of all the reference counting for you.

DXContext implements INotifyPropertyChanged and can be observed by XAML components or data binding!
I also created a macro for the INotifyPropertyChanged implementation, as it is repetitive and I had to write a long-winded implementation due to some mysterious bug in the pure C++ sample.

It has a Snapshot() method to take a screen capture! And BasicTexture has a method to save to a file.
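From C#, typical usage then looks like this hedged sketch (swapChainPanel is a hypothetical SwapChainBackgroundPanel declared in XAML):

var ctxt = new DXContext();
ctxt.SetTarget(swapChainPanel);   // or SetTarget() for the CoreWindow, SetTarget(imageSource, w, h), SetTarget(texture)
// ... build and render a scene against ctxt ...
var capture = ctxt.Snapshot();    // returns a BasicTexture holding a screen capture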

 

1.2 Scene

My first attempt at using this DXContext was to create a Scene object which contains ISceneData objects.

An ISceneData can be ripped off, more or less verbatim, from the various DirectX samples around the web, and the Scene object will take care of initializing it and rendering it when the time is right. I have 2 ISceneData implementations: CubeRenderer and HelloDWrite.

 

1.3 BasicScene, BasicShape, BasicTexture

Unfortunately, the samples on the web often have a lot of variables, all mixed up, and trying to sort out what does what takes some thinking.

So I created a BasicScene which takes a list of shapes with textures and locations (transforms) and renders them:

public ref class BasicSceneItem sealed
{
public:
    BasicSceneItem(DXContext^ ctxt);
    property DXContext^ Context;
    property DXBase::Utils::BasicShape^ Shape;
    property DXBase::Utils::BasicTexture^ Texture;
    property DXBase::Utils::float4x4 WorldTransform;
};

public ref class BasicScene sealed
{
public:
    BasicScene();
    BasicScene(DXContext^ ctxt);

    property PerformanceTimer^ Timer;
    property Windows::UI::Color Background;
    property DXBase::DXContext^ Context;
    property DXBase::Utils::BasicCamera^ Camera;

    property Windows::Foundation::Collections::IVectorView<BasicSceneItem^>^ Shapes;
    void Add(BasicSceneItem^ item);
    void Remove(BasicSceneItem^ item);
    void RemoveAt(int index);

    property bool IsLoaded;

    void RenderFrame();
    void RenderFrame(SceneRenderArgs^ args);
};

It also has a Background and a Camera, all WinRT components that can be controlled from C#.

The BasicShape contains the vertex and index buffers for triangles and has various create methods that populate the buffers.

The BasicTexture can load a file or be created directly in memory (and rendered to by using Context.SetTarget(texture)), and contains the texture and textureView used by the rendering process.

Each of these classes has very few DirectX-specific variables, making it relatively easy to understand what's going on.

 

2. C++/Cx to C# mapping

When C++/Cx components are called from C#, the .NET runtime does some type mapping for you. There is the obvious: basic types (int, float, etc.) and value types (structs) are used as is. But there is more: mappings for exceptions and for important interfaces (such as IEnumerable).

It’s worth having a look at this MSDN page which details the various mapping happening.

Also, to refresh my C++ skills I found this interesting web site, where most Google queries led anytime I had a C++ syntax or STL issue!

 

3. Exception across ABI

You can't pass custom exceptions or exception messages across the ABI (the C++ / C# / JavaScript boundary). All that can pass is an HRESULT, basically a number. Some special numbers map to special exceptions, as explained on this MSDN page.

If you want to pass a specific exception you have to use an unreserved HRESULT (as described here) and have a helper class to turn the HRESULT into a meaningful value.

Here comes the ExHelper class, just for this purpose:

// this range is free: 0x0200-0xFFFF
public enum class ErrorCodes;

// You can't throw custom exception with custom message across ABI
// This will help throw custom Exception with known HRESULT value
public ref class ExHelper sealed
{
public:
    static void Throw(ErrorCodes c);
    static ErrorCodes GetCode(Windows::Foundation::HResult ex);
    static Windows::Foundation::HResult CreateWinRTException(ErrorCodes c);
};

Note you can’t expose Platform::Exception publicly either (well maybe you can, but it was troublesome). But you can expose an HRESULT. The runtime will automatically turn it into a System.Exception when called from C#.
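On the C# side, decoding the error might then look like this hedged sketch (Exception.HResult has a public getter in .NET 4.5; the component call is hypothetical):

try
{
    someComponent.DoSomething();   // hypothetical call into the C++/Cx library
}
catch (Exception ex)
{
    var code = ExHelper.GetCode(new Windows.Foundation.HResult { Value = ex.HResult });
    // react on 'code', which is one of the ErrorCodes values
}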

 

4. Reference counting and weak pointer

C++/Cx is still native C++: there is no garbage collection happening when writing a pure C++ app, even if one uses the C++/Cx extensions. The hat (^) pointer is a reference-counted pointer that can automatically be turned into a C# reference.

That can lead to a problem when 2 C++/Cx components reference each other as in the following (simplified) scenario

public ref class A sealed
{
    A^ other;
public:
    property A^ Other
    {
        A^ get() { return other; }
        void set(A^ value) { other = value; }
    }
};

{
    auto a1 = ref new A();
} // a1 is automatically destroyed

{
    auto a1 = ref new A();
    auto a2 = ref new A();
    a1->Other = a2;
    a2->Other = a1;
} // no automatic destruction takes place!

To solve such problems, WinRT comes with a WeakReference. The class A can be modified as follows so that it does not hold a strong reference:

public ref class A sealed
{
    WeakReference other;
public:
    property A^ Other
    {
        A^ get() { return other.Resolve<A>(); }
        void set(A^ value)
        {
            if (value)
                other = value;
            else
                other = WeakReference();
        }
    }
};

 

5. debugging / logging

Sometimes logging is helpful for debugging. For example, I log the creation and deletion of some items to be sure I don't have any memory leaks. However, printf, cout <<, and System::Console::WriteLine won't work in a metro app.

One has to use OutputDebugString; the output will appear in the Visual Studio output window.

 

6. IEnumerable, IList

If you use C# you must love IEnumerable, IEnumerator, IList and LINQ. When writing a C++ component you should make sure it plays nice with all that.

The .NET runtime does some automatic mapping when calling into a C++/Cx component, as explained here.

6.1 IEnumerable

In C++ one exposes Windows::Foundation::Collections::IIterable<T>, which is consumed in C# as a System.Collections.Generic.IEnumerable<T>.

IIterable has a single method, First(), that returns an IIterator, which will be mapped to an IEnumerator.

However, there is a little gotcha: unlike a C# IEnumerator, which starts before the first element (one has to call bool MoveNext() first), an IIterator starts on the first element.

6.2 IList

One can return a Windows::Foundation::Collections::IVector<T>, which is mapped to an IList<T>. There is already a class implementing it:

Platform::Collections::Vector<T>.

Or one can use vector->GetView() to return a Windows::Foundation::Collections::IVectorView<T>, which will be mapped to an IReadOnlyList<T>.
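On the C# side nothing special is needed; a hedged consumption sketch (using the BasicScene.Shapes property shown earlier, with a scene instance and System.Linq assumed to be in scope):

// IVectorView<BasicSceneItem^> surfaces as IReadOnlyList<BasicSceneItem>
IReadOnlyList<BasicSceneItem> shapes = scene.Shapes;

// an IIterable<T> (or anything derived from it) surfaces as IEnumerable<T>, so foreach and LINQ just work
foreach (var item in shapes) { /* ... */ }
var textured = shapes.Where(s => s.Texture != null).ToList();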

 

7. Function pointers and lambda

C++11 (the standard formerly known as C++0x) introduced lambda expressions to create inline functions, much like in C#.

There is a long description of them on MSDN.

Basically it has the following syntax

[captured variables](parameters) -> optional return type { body }

It's all quite intuitive except for the capture part. You have to specify which values you want to capture (this, local variables), and you can capture by value or by reference (using the ‘&’ prefix), or capture all local variables and this by value with an equal sign, as in ‘[=]’.

 

In some instances I had problems assigning a lambda to a function pointer; for example, the code below didn't compile for me (maybe I missed something?):

IAsyncOperation<bool>^ (*func)(Platform::Object^) = [] (Object^ ome) -> IAsyncOperation<bool>^ { ... };

Fortunately C++11 introduces function objects (std::function), which work fine:

#include <functional>
//....
std::function<IAsyncOperation<bool>^(Object^)> func = [] (Object^ ome) -> IAsyncOperation<bool>^ { ... };

Remark the function object will keep references to the captured variables as long as it exists! Be careful with circular references and WinRT components (hat pointers ‘^’).

 

8. Async

With Metro, async programming is an inescapable reality!

Of course in your C# code you can use the classes from System.Threading.Tasks, but on the C++ side there is also a native API just for that: concurrency::task<T>.

One can create a task from a WinRT IAsyncOperation or a simple C function:

#include <ppltasks.h>
#include <ppl.h>

using namespace concurrency;
using namespace Windows::Foundation;

bool (*func)() = []()->bool { return true; };
task<bool> t1 = create_task(func);

IAsyncOperation<bool>^ loader = ...;
task<bool> t2 = create_task(loader);

Conversely, one can create a WinRT IAsyncOperation from a task or a C function with create_async, as in:

#include <ppltasks.h>
#include <ppl.h>

using namespace concurrency;
using namespace Windows::Foundation;

bool (*func)() = []()->bool { return true; };
task<bool> t1 = create_task(func);
IAsyncOperation<bool>^ ao1 = create_async([t1] { return t1; });
IAsyncOperation<bool>^ ao2 = create_async(func);

Tasks can be chained with ‘then’ and one can wait on multiple task by adding them with ‘&&’ such as in:

task<void> t1 = ...
task<void> t2 = ...

auto t3 = (t1 && t2).then([]() -> void
{
    OutputDebugString(L"It is done");
});

Remark tasks are value types and start executing immediately once created (on another thread).

 

When chaining tasks with ‘then’ you can capture exceptions from the previous task by taking a task<T> argument instead of T, and putting a try/catch around task.get(). If you do not catch the exception it will eventually bring the program down.

task<bool> theTask = ....
task<void> task = theTask.then([](concurrency::task<bool> t) -> void
{
    try
    {
        t.get();
        // success...
    }
    catch (Exception^ ex)
    {
        // failure
        auto msg = L"Exception: " + ex->ToString() + L"\r\n";
        OutputDebugString(msg->Data());
    }
});

 

9. Conclusion

It proved surprisingly easy to have C++ and C# work together with WinRT. Smooth and painless. C++11 was easier to use than my memory of C++ was telling me. And in the end I mixed and matched them all with great fun. To boot, my C# app starts really quickly (like a plain C++ app)! It's way better than C++/CLI!

A few frustrating point with C++/Cx still stands out though:

  • Microsoft's value types (Windows::Foundation::Size for example) have custom constructors, methods and operators; yours cannot.
  • You can’t create a type hierarchy! (Can be worked around tediously with an interface hierarchy, but still!)

 

10. Source code

download it from here!

 

 




Silverlight Menu

Recently I needed a Silverlight menu, and I found a few existing implementations around the web.

But none behaved like an ordinary HeaderedItemsControl, and all imposed some constraints which make using databinding, templating or MVVM more awkward than it should be. So here is my take on this control.

 image

Implementation:

 

Below I'm going to quickly explain the salient points of the implementation and then show some usage samples.

 

Implementation

 

Basic ItemsControl

The main class is MenuItem. Inspired by WinForms and WPF, I did create a Menu class, but it's just an optional container with a different layout (and it also hides the arrow on the side).

The UI, at its most basic, is defined as follows (unnecessary styling elements removed):

<Style TargetType="local:MenuItem">
    <Setter Property="ItemsPanel">
        <Setter.Value>
            <ItemsPanelTemplate>
                <StackPanel Orientation="Vertical"/>
            </ItemsPanelTemplate>
        </Setter.Value>
    </Setter>
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="local:MenuItem">
                <Grid Background="Transparent">
                    <Border x:Name="PART_HighlightBg" Margin="2" Opacity="0"
                            Background="{Binding HighlightBrush, Source={StaticResource SystemBrushes}}"
                        />
                    <ContentPresenter
                        Margin="4"
                        Content="{TemplateBinding Header}"
                        ContentTemplate="{TemplateBinding HeaderTemplate}"
                            />
                    <Popup x:Name="PART_Popup" IsOpen="False">
                        <ItemsPresenter />
                    </Popup>
                </Grid>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

There is a transparent Grid background for hit testing, a Border at the lowest Z order for the highlight, a ContentPresenter for the header and an ItemsPresenter in a Popup for the children.

 

Now the most basic implementation of an ItemsControl just sets up the children.
I also copy the style in PrepareContainerForItemOverride, as a MenuItem's child items are whole new MenuItems with the default style. This code makes sure all children, and children's children and so on, look the same as the top-level MenuItem, in case it has been styled.

public class MenuItem : HeaderedItemsControl
{
    public MenuItem()
    {
        this.DefaultStyleKey = typeof(MenuItem);
        ((INotifyCollectionChanged)Items).CollectionChanged += delegate { HasChildren = Items.Count > 0; };
    }

    #region ItemsControl override

    protected override DependencyObject GetContainerForItemOverride()
    {
        return new MenuItem();
    }
    protected override bool IsItemItsOwnContainerOverride(object item)
    {
        return item is MenuItem;
    }
    protected override void ClearContainerForItemOverride(DependencyObject element, object item)
    {
        base.ClearContainerForItemOverride(element, item);
    }
    protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
    {
        // copy important UI values to the child element automatically...
        // copy first in case something is set in the template (would be set in base.XXX())
        var mi = (MenuItem)element;
        mi.Style = Style;
        mi.Background = Background;
        mi.Foreground = Foreground;
        mi.BorderBrush = BorderBrush;
        mi.BorderThickness = BorderThickness;

        base.PrepareContainerForItemOverride(element, item);
    }

    #endregion
}

 

To manage the popup by code I added an IsOpen property:

#region IsOpen

internal Popup GetPopup()
{
    return this.GetTemplateChild("PART_Popup") as Popup;
}

public bool IsOpen
{
    get
    {
        var p = GetPopup();
        return p != null && p.IsOpen;
    }
    set
    {
        // separators don't open
        value = value && !IsSeparator;

        var p = GetPopup();
        if (p == null)
            return;

        if (p.IsOpen == value)
            return;

        p.IsOpen = value && Items.Count() > 0;
        MenuPopupManager.OnOpen(this, value);
    }
}

#endregion

 

Mouse Handling

Now all that is needed is to override OnMouseEnter, OnMouseLeave, OnMouseLeftButtonDown and OnMouseLeftButtonUp.

Most of them do little and delegate the thinking to the MenuPopupManager class. This class maintains a list of all open popups, positions the popups appropriately, hides those that need to be hidden and closes all MenuItems after a timeout.

Ideally it should close all menus when the user clicks outside the MenuItem, but I could not get that to work reliably.

A skeleton implementation looks like this:

public static class MenuPopupManager
{
    #region PopupData, CurrentPopupData

    internal class PopupData
    {
        public PopupData()
        {
        }

        public List<MenuItem> Items = new List<MenuItem>();
        public Timer Timer;
        public Dictionary<MenuItem, PlacementMode> Placements = new Dictionary<MenuItem, PlacementMode>();

        public MenuItem Hovering;
    }

    static PopupData CurrentPopupData;

    #endregion

    #region PlacePopup

    internal static void PlacePopup(MenuItem mi, Popup p)
    {
        Action place = delegate
        {
            // placement logic
        };

        // placement needs size calculation, need be done after being shown
        // otherwise DesiredSize will always be {0,0}, despite calls to Measure() !!??
        if (mi.IsOpen)
        {
            place();
        }
        else
        {
            EventHandler onOpen = null;
            onOpen = delegate
            {
                place();
                p.Opened -= onOpen;
            };
            p.Opened += onOpen;
        }
    }

    #endregion

    #region OnMouseEnter(), OnMouseLeave(), OnOpen(), OnFinish(),

    internal static void OnMouseEnter(MenuItem item)
    {
        var pd = CurrentPopupData;
        if (pd == null)
            return;
        pd.Hovering = item;
    }
    internal static void OnMouseLeave(MenuItem item)
    {
        var pd = CurrentPopupData;
        if (pd == null)
            return;
        pd.Hovering = null;
    }
    internal static void OnOpen(MenuItem item, bool open)
    {
        var pd = CurrentPopupData;
        var pi = item.GetParentItem();
        // update CurrentPopupData and close popup that should be closed
    }
    internal static void OnFinish(MenuItem item)
    {
        // close everything and update CurrentPopupData
    }

    #endregion
}
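For completeness, here is a hedged sketch of how one of the MenuItem overrides could delegate to it (the real control's logic may differ slightly):

protected override void OnMouseLeftButtonUp(MouseButtonEventArgs e)
{
    base.OnMouseLeftButtonUp(e);

    if (Items.Count > 0)
        IsOpen = !IsOpen;       // items with children toggle their popup
    else
        OnClick(e);             // leaf items raise Click / execute the Command
}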

 

Obviously MenuItem implements a Click event, triggers an ICommand, and hides itself after that. If no Click event handler or Command is defined it will do nothing (including not closing the menu), making it easy to embed advanced controls in a popup menu.

#region Command

public ICommand Command
{
    get { return (ICommand)GetValue(CommandProperty); }
    set { SetValue(CommandProperty, value); }
}

public static readonly DependencyProperty CommandProperty =
    DependencyProperty.Register("Command", typeof(ICommand), typeof(MenuItem), new PropertyMetadata(null));

#endregion

#region CommandParameter

public object CommandParameter
{
    get { return (object)GetValue(CommandParameterProperty); }
    set { SetValue(CommandParameterProperty, value); }
}

// Using a DependencyProperty as the backing store for CommandParameter.  This enables animation, styling, binding, etc...
public static readonly DependencyProperty CommandParameterProperty =
    DependencyProperty.Register("CommandParameter", typeof(object), typeof(MenuItem), new PropertyMetadata(null));

#endregion

#region Click, OnClick()

public event RoutedEventHandler Click;

protected virtual void OnClick(RoutedEventArgs e)
{
    if (IsSeparator)
        return;

    var c = Command;
    var p = CommandParameter;
    if (c != null && c.CanExecute(p))
    {
        c.Execute(p);
        MenuPopupManager.OnFinish(this);
    }

    var he = Click;
    if (he != null)
    {
        he.Invoke(this, e);
        MenuPopupManager.OnFinish(this);
    }
}

#endregion

 

Styling

And lastly I added a bit of styling. I downloaded the Silverlight Toolkit and took the SystemBrushes class from the SystemColors theme's source code, then used various SystemBrushes brushes for styling the Menu and MenuItem.

The nice silverish background brush of the Menu is SystemBrushes.ButtonGradient
image

 

For the mouse-hover effect, where a highlight color (quickly but progressively) appears, I finally took the time to use animations and visual states. It proved to be really easy!

First I set up 2 states: one which progressively increases the opacity of the background control (PART_HighlightBg); the second state does nothing, in effect reverting the opacity change instantly. I could have used an animation there too, but I liked the shorter template and the result is good as well.

<ControlTemplate TargetType="local:MenuItem">
    <Grid Background="Transparent">
        <vsm:VisualStateManager.VisualStateGroups>
            <vsm:VisualStateGroup x:Name="Highlight">
                <vsm:VisualState x:Name="HighlightOn">
                    <Storyboard>
                        <DoubleAnimation
                            Duration="0:0:0.3" To="0.4"
                            Storyboard.TargetName="PART_HighlightBg" Storyboard.TargetProperty="Opacity">
                            <DoubleAnimation.EasingFunction>
                                <ExponentialEase Exponent="3" EasingMode="EaseInOut"/>
                            </DoubleAnimation.EasingFunction>
                        </DoubleAnimation>
                    </Storyboard>
                </vsm:VisualState>
                <vsm:VisualState x:Name="HighlightOff"/>
            </vsm:VisualStateGroup>
        </vsm:VisualStateManager.VisualStateGroups>
        <Border x:Name="PART_HighlightBg"
            Margin="2" Opacity="0"
            Background="{Binding HighlightBrush, Source={StaticResource SystemBrushes}}"
            BorderBrush="{Binding ControlDarkBrush, Source={StaticResource SystemBrushes}}"
            BorderThickness="{TemplateBinding BorderThickness}"
            />
        <!-- other controls -->
    </Grid>
</ControlTemplate>

 

Then I simply set the state in the mouse handlers:

protected override void OnMouseEnter(MouseEventArgs e)
{
    base.OnMouseEnter(e);
    MenuPopupManager.OnMouseEnter(this);
    VisualStateManager.GoToState(this, "HighlightOn", true);
}
protected override void OnMouseLeave(MouseEventArgs e)
{
    base.OnMouseLeave(e);
    MenuPopupManager.OnMouseLeave(this);
    VisualStateManager.GoToState(this, "HighlightOff", true);
}

 

Examples

And now some examples of how to use this control.

 

Inline definition

image

<Grid x:Name="LayoutRoot" Background="White">

    <local:Menu Margin="10,0">
        <local:MenuItem Header="File lala">
            <local:MenuItem Header="Foo">
                <local:MenuItem Header="tadat"/>
                <local:MenuItem Header="Footaise">
                    <local:MenuItem Header="tadat"/>
                    <local:MenuItem Header="tadat"/>
                    <local:MenuItem Header="tadat"/>
                </local:MenuItem>
                <local:MenuItem Header="tadat"/>
                <local:MenuItem Header="tadat"/>
            </local:MenuItem>
            <local:MenuItem IsSeparator="True"/>
            <local:MenuItem Header="tadat"/>
            <local:MenuItem Header="tadat"/>
        </local:MenuItem>
    </local:Menu>
    
</Grid>

 

MVVM - databinding

First I’ll define a data model used for my MVVM usage

public class ModelCommand : ICommand
{
    ModelItem item;
    public ModelCommand(ModelItem item) { this.item = item; }
    public bool CanExecute(object parameter) { return true; }
    public event EventHandler CanExecuteChanged;
    public void Execute(object parameter) { MessageBox.Show("Clicked: " + item.Name); }
}
public class ModelItem
{
    public ModelItem()
    {
        Items = new List<ModelItem>();
        Command = new ModelCommand(this);
    }
    public string Name { get; set; }
    public List<ModelItem> Items { get; private set; }
    public ICommand Command { get; private set; }
}
public class Model
{
    public Model()
    {
        // initialize a recursive list of Items and their children
        // and their children and their children's children and so on...
    }
    public List<ModelItem> Items { get; private set; }
}

 

Then I can do some simple binding

image

<local:Menu Margin="10,0" HorizontalAlignment="Right">
    <local:MenuItem Header="Persons" ItemsSource="{Binding Items, Source={StaticResource MM}}">
        <local:MenuItem.ItemContainerStyle>
            <Style TargetType="local:MenuItem">
                <Setter Property="Command" Value="{Binding Command}"/>
            </Style>
        </local:MenuItem.ItemContainerStyle>
        <local:MenuItem.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Name}"/>
            </DataTemplate>
        </local:MenuItem.ItemTemplate>
    </local:MenuItem>
</local:Menu>

Remark to set up the Command on the MenuItems one should use the ItemContainerStyle property.

 

Hierarchical binding

image

<local:Menu
    Margin="10,0" HorizontalAlignment="Right" VerticalAlignment="Bottom" PopupPlacement="Top" CornerRadius="5,5,0,0" BorderThickness="1,1,1,0">
    <local:MenuItem Header="Persons" Command="{Binding Command}" ItemsSource="{Binding Items, Source={StaticResource MM}}">
        <local:MenuItem.ItemContainerStyle>
            <Style TargetType="local:MenuItem">
                <Setter Property="Command" Value="{Binding Command}"/>
            </Style>
        </local:MenuItem.ItemContainerStyle>
        <local:MenuItem.ItemTemplate>
            <sdk:HierarchicalDataTemplate ItemsSource="{Binding Items}">
                <TextBlock Text="{Binding Name}"/>
            </sdk:HierarchicalDataTemplate>
        </local:MenuItem.ItemTemplate>
    </local:MenuItem>
</local:Menu>

 

Styling

I can also add a single parentless MenuItem anywhere. And I can style it too, setting the background brush for instance (whether it's parentless or not), and the value will flow down.

image

<local:MenuItem
    Margin="100,0" HorizontalAlignment="Right" VerticalAlignment="Bottom" PopupPlacement="Top" BorderThickness="1,1,1,0"
    Background="Aquamarine" Header="Persons" ItemsSource="{Binding Items, Source={StaticResource MM}}">
    <local:MenuItem.ItemContainerStyle>
        <Style TargetType="local:MenuItem">
            <Setter Property="Command" Value="{Binding Command}"/>
        </Style>
    </local:MenuItem.ItemContainerStyle>
    <local:MenuItem.ItemTemplate>
        <sdk:HierarchicalDataTemplate ItemsSource="{Binding Items}">
            <TextBlock Text="{Binding Name}"/>
        </sdk:HierarchicalDataTemplate>
    </local:MenuItem.ItemTemplate>
</local:MenuItem>


OData with async

Today I watched this video about using OData in a WinRT app:
http://blog.jerrynixon.com/2012/08/new-episode-devradio-let-odata-make.html

To summarize, he (Jerry Nixon) adds a service reference to the Netflix OData API at:
http://odata.netflix.com/Catalog/

 

and then proceeds to do something like this:

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    var _uri = new Uri("http://odata.netflix.com/Catalog/");
    var ctxt = new NetFlixService.NetflixCatalog(_uri);
    var data = new DataServiceCollection<NetFlixService.Title>(ctxt);

    var query = from t in ctxt.Titles
                where t.Name.Contains("Star Trek")
                select t;
    data.LoadCompleted += delegate 
    {
        this.DataContext = data;
    };
    data.LoadAsync(query);
}

Basically he creates an OData service reference, runs a query against it and shows the results.

 

What I wanted to change is to use the new async / await keywords in .NET 4.5 instead of the LoadCompleted delegate.

There seems to be no obvious way of doing that!
Time to Google!

From the await (C# Reference) documentation, it seems that await can (only) be applied to a method returning a Task, and a Task (from Googling) can be created from an IAsyncResult.

 

So first I started by creating a very simple and reusable IAsyncResult implementation class:

public class SimpleAsyncResult : IAsyncResult, IDisposable
{
    ManualResetEvent waitHandle = new ManualResetEvent(false);

    public void Finish()
    {
        IsCompleted = true;
        waitHandle.Set();
        waitHandle.Dispose();
    }

    public void Dispose() { waitHandle.Dispose(); }

    public bool IsCompleted { get; private set; }
    public object AsyncState { get; set; }
    public bool CompletedSynchronously { get; set; }

    public WaitHandle AsyncWaitHandle { get { return waitHandle; } }
}

 

With that I can easily create an extension method for my OData classes returning a Task:

public static class OData
{
    public static Task<DataServiceCollection<T>> AsyncQuery<T>(this DataServiceCollection<T> data, IQueryable<T> query = null)
    {
        var asyncr = new SimpleAsyncResult();
        Exception exResult = null;
        data.LoadCompleted += delegate(object sender, LoadCompletedEventArgs e)
        {
            exResult = e.Error;
            asyncr.Finish();
        };

        if (query == null)
            data.LoadAsync();
        else
            data.LoadAsync(query);

        return Task<DataServiceCollection<T>>.Factory.FromAsync(asyncr
            , r =>
            {
                if (exResult != null)
                    throw new AggregateException("Async call problem", exResult);
                return data;
            }
        );
    }
}

Remark here I wrap the exception because I don't want to lose the stack trace (as would happen with "throw exResult").

 

And voilà, I can update my OnNavigatedTo method to be async friendly!

protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    var _uri = new Uri("http://odata.netflix.com/Catalog/");
    var ctxt = new NetFlixService.NetflixCatalog(_uri);
    var data = new DataServiceCollection<NetFlixService.Title>(ctxt);

    var query = from t in ctxt.Titles
                where t.Name.Contains("Star Trek")
                select t;
    await data.AsyncQuery(query);
    this.DataContext = data;
}
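Remark as an aside, the same result can be obtained without a custom IAsyncResult by using TaskCompletionSource<T>. Here is a hedged sketch of that alternative (AsyncQuery2 is a hypothetical name for a method added to the same OData static class):

public static Task<DataServiceCollection<T>> AsyncQuery2<T>(this DataServiceCollection<T> data, IQueryable<T> query = null)
{
    var tcs = new TaskCompletionSource<DataServiceCollection<T>>();
    data.LoadCompleted += delegate(object sender, LoadCompletedEventArgs e)
    {
        if (e.Error != null)
            tcs.TrySetException(e.Error);   // the original exception, stack trace preserved
        else
            tcs.TrySetResult(data);
    };

    if (query == null)
        data.LoadAsync();
    else
        data.LoadAsync(query);

    return tcs.Task;
}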

Categories: .NET

Zip files after build

One of the pet projects I am working on now is an installer application (in D!). One of the problems I need to solve is automatic archival of my files.

Well, it turns out that MSBuild (the underlying build tool used by Visual Studio) is quite extensible. While I haven't read anything which made me an MSBuild master yet, I found this library (msbuildtask) with plenty of MSBuild goodies (including a Zip task).

And also a post on Ben Hall's blog which explains how to take advantage of this task to zip your project output automatically.

Here, I reproduced it below:

Using MSBuild to create a deployment zip

Monday, September 01, 2008

Automated builds are one of the core fundamental musts for software development. However, your build doesn't just have to build the solution and execute your unit tests. For IronEditor, my build also creates two zip files. One zip is the output from the build for archiving purposes, the second is my deployment zip - the zip which actually gets pushed up to CodePlex containing only the files required by the application. In this post, I will cover how you can get MSBuild to zip your build output.

To use zipping functionality within your build scripts, you need to use the MSBuild Community Tasks which is a great collection of MSBuild extensions and a must if you are using MSBuild.

In order to zip your files, you need to specify which files you want to zip. In the script below, I create an ItemGroup called ZipFiles, this includes all the subdirectories (**) and files (*.*) from my Release directory which is my build output folder. I also specify that this group should not include any other zip files. I then create a Builds directory if it doesn't already exist. Finally, I use the Zip task, passing in my ZipFiles ItemGroup which the task uses to know which files to include.

<Target Name="BuildZip" DependsOnTargets="Test">
  <ItemGroup>
      <!-- All files from build -->
      <ZipFiles Include="$(BuildDir)\Release\**\*.*" Exclude="*.zip" />
  </ItemGroup>
  <MakeDir Directories="$(BuildDir)\Builds" Condition="!Exists('$(BuildDir)\Builds')" />
  <Zip Files="@(ZipFiles)"
       WorkingDirectory="$(BuildDir)\Release\"
       ZipFileName="$(BuildDir)\Builds\IronEditor-Build-$(Version).zip"
       ZipLevel="9" />
</Target>

The most important property is the WorkingDirectory, this is the root directory where all the files you want to exist live. If you don't have this set correctly, you will have the additional directories in your zip file which are navigated in order to get to your actual files and just looks rubbish.

My deployment zip also looks very similar and is executed after the above target. The only difference is that I individually specify which files and directories to include. For some directories, such as Config, I still include all sub-directories and files it contains as they will all be relevant and required.

<Target Name="BuildInstallZip" DependsOnTargets="BuildZip">
  <ItemGroup>
      <!-- Selected Files -->
      <InstallFiles Include="$(BuildDir)\Release\Config\**\*.*" />
      <InstallFiles Include="$(BuildDir)\Release\LanguageBinaries\**\*.*" />
      <InstallFiles Include="$(BuildDir)\Release\SyntaxFiles\**\*.*" />
      <InstallFiles Include="$(BuildDir)\Release\Fireball.*.dll" />
      <InstallFiles Include="$(BuildDir)\Release\IronEditor.UI.WinForms.exe" />
      <InstallFiles Include="$(BuildDir)\Release\IronEditor.UI.WinForms.config" />
      <InstallFiles Include="$(BuildDir)\Release\IronEditor.Engine.dll" />
      <InstallFiles Include="$(BuildDir)\Release\Microsoft.Scripting.Core.dll" />
      <InstallFiles Include="$(BuildDir)\Release\Microsoft.Scripting.dll" />
      <InstallFiles Include="$(BuildDir)\Release\System.Core.dll" />
  </ItemGroup>
  <MakeDir Directories="$(BuildDir)\Builds" Condition="!Exists('$(BuildDir)\Builds')" />
  <Zip Files="@(InstallFiles)"
       WorkingDirectory="$(BuildDir)\Release\"
       ZipFileName="$(BuildDir)\Builds\IronEditor-$(Version).zip"
       ZipLevel="9" />
</Target>

One thing which tripped me up was that while my ItemGroup was created within a target, it actually has global scope. As such, you need to call the two groups within the two different targets something different.

Once my script has executed, I have two zip files created - one containing everything, the other ready to be released on CodePlex.

image


Categories: Visual Studio | link