Extension methods in C++

A few days ago Bjarne Stroustrup published a proposal paper (N4174) to the C++ standard committee called Call syntax: x.f(y) vs. f(x,y). The following excerpt from the paper summarizes the proposal:

The basic suggestion is to define x.f(y) and f(x,y) to be equivalent. In addition, to increase compatibility and modularity, I suggest we explore the possibility of ignoring uncallable and inaccessible members when looking for a member function (or function object) to call.

x.f(y) means

  1. First try x.f(y) – does x’s class have a member f? If so, try to use it
  2. Then try f(x,y) – is there a function f? If so, try to use it
  3. otherwise error

f(x,y) means

  1. First try x.f(y) – does x’s class have a member f? If so, try to use it
  2. Then try f(x,y) – is there a function f? If so, try to use it
  3. otherwise error

This may sound a bit crazy, but to me it immediately shouted EXTENSION METHODS, something I have been wondering for a while how to add to the language. I find this one of the most important proposals (that I am aware of) for the evolution of the C++ language.

UPDATE: I have recently discovered that a second paper on the same topic exists. The N4165 paper, called Unified Call Syntax, is authored by Herb Sutter. Unlike the first paper, Sutter’s proposes only making x.f(y) equivalent to f(x,y), and not the other way around. Here is a quote from the paper:

This single proposal aims to address two major issues:

  • Enable more-generic code: Today, generic code cannot invoke a function on a T object without knowing whether the function is a member or nonmember, and must commit to one. This is a long-standing known issue in C++.
  • Enable “extension methods” without a separate one-off language feature: The proposed generalization enables calling nonmember functions (and function pointers, function objects, etc.) symmetrically with member functions, but without a separate and more limited “extension methods” language feature. Further, unlike “extension methods” in other languages which are a special-purpose feature that adds only the ability to add member functions to an existing class, this proposal would immediately work with calling existing library code without any change. (See also following points.)

Herb Sutter argues that the unified call syntax would achieve major benefits, including consistency, simplicity and teachability, improved discoverability and usability of existing code, and better C++ tool support. However, he also explains why making f(x) equivalent to x.f() is not possible, since it would break existing code.

Extension Methods in C#

I’ll take a step back for a short paragraph on extension methods in C#.

An extension method allows you to add functionality to an existing type without modifying the original type or creating a derived type (and without needing to recompile the code containing the type that is extended).

Let’s assume you want to write a method that counts words in a text. You could write a method called WordCount that looks like this (for simplicity we’ll only consider space as a delimiter):
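A minimal sketch of such a method (the containing class name is illustrative):

public static class StringUtilities
{
    public static int WordCount(string text)
    {
        // split on spaces and ignore consecutive delimiters
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}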

You can use it like this:
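var text = "This is an example";
var count = StringUtilities.WordCount(text); // 4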

Just by changing the syntax a bit and adding the this keyword in front of the type of the first argument (always the type you want to extend), the compiler treats the method as part of the type.
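The same sketch, turned into an extension method:

public static class StringUtilities
{
    public static int WordCount(this string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries).Length;
    }
}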

With this change we can now write:
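var count = text.WordCount(); // same result, member-call syntax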

WordCount(text) vs. text.WordCount() is exactly what Stroustrup’s N4174 paper is proposing.

Notice that extension methods in C# have several requirements, including the following:

  • the extension method is always a public static member of a static class
  • the extension method has access only to the public interface of the extended type

Extension Methods in C++

The question one may ask is how this equivalence of x.f(y) and f(x,y) would be beneficial to the language. My immediate answer is that it defines extension methods and enables developers to extend functionality without touching existing code.

Let’s take a real case example. The C++ standard containers provide methods like find() to find an element in the container. There are also generic algorithms for the same purpose (that work in a generic way for various ranges defined by iterators). But these find() methods return an iterator, and you have to check the returned value against end() to interpret the result. Using std::map, for instance, many times you just need to know whether it contains a key or not. std::map does not have a contains() method, but you can easily write a helper function:
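A sketch of such a helper:

#include <map>

template <typename TKey, typename TValue>
bool contains(std::map<TKey, TValue> const & m, TKey const & key)
{
   return m.find(key) != m.end();
}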

And with that in place you can write:
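std::map<int, char> m{ { 1, 'a' }, { 2, 'b' } };
if (contains(m, 1))
{
   // the key is present
}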

However, I would very much like to be able to say (because in an object-oriented world this seems much more natural to me):
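if (m.contains(1)) // hypothetical: would resolve to the non-member contains(m, 1) under the proposal
{
   // the key is present
}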

If x.f(y) and f(x,y) were equivalent this later code would be perfectly legal (and beautiful).

Here is a second example. Suppose you want to define some query operators like the ones available in LINQ in .NET. Below is a dummy implementation of several such operators for std::vector.
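A sketch of what these operators could look like (C++11; a dummy implementation for std::vector only):

#include <vector>
#include <algorithm>
#include <numeric>
#include <iterator>
#include <utility>

template <typename T, typename Predicate>
std::vector<T> where(std::vector<T> const & range, Predicate pred)
{
   // keep only the elements satisfying the predicate
   std::vector<T> result;
   std::copy_if(std::begin(range), std::end(range), std::back_inserter(result), pred);
   return result;
}

template <typename T, typename Transform>
auto select(std::vector<T> const & range, Transform tr)
   -> std::vector<decltype(tr(std::declval<T>()))>
{
   // apply the transformation to each element
   std::vector<decltype(tr(std::declval<T>()))> result;
   std::transform(std::begin(range), std::end(range), std::back_inserter(result), tr);
   return result;
}

template <typename T>
T sum(std::vector<T> const & range)
{
   return std::accumulate(std::begin(range), std::end(range), T{});
}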

(Many thanks to Piotr S. and Yakk for helping with the implementation of select.)

Those functions enable us to write code that “sums the square of the even numbers from a range” as shown below:
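std::vector<int> v{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
auto evens = where(v, [](int n) { return n % 2 == 0; });
auto squares = select(evens, [](int n) { return n * n; });
auto total = sum(squares); // 4 + 16 + 36 + 64 + 100 = 220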

I don’t particularly like how the code looks. You have to capture the return value from each function, even though it’s an intermediary value that you discard later.

One can improve that by using the return value from the previous call as a direct argument to the next call:
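auto total = sum(
   select(
      where(v, [](int n) { return n % 2 == 0; }),
      [](int n) { return n * n; }));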

However, I like this even less. First, because it gets hard to follow which arguments belong to each call (and even if you try to format it differently it’s not going to help much), and second, because it inverts the natural reading order of the operations. You first see sum, then select and last where. Even though this is how we described it earlier (“sums the square of the even numbers from a range”), it is misleading with regard to the order in which the operations execute.

However, if x.f(y) and f(x,y) were equivalent it would be very easy to write the above code like this:
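auto total = v.where([](int n) { return n % 2 == 0; })
              .select([](int n) { return n * n; })
              .sum(); // hypothetical: relies on the proposed x.f(y)/f(x,y) equivalence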

Isn’t that beautiful? I think it is.

Conclusion

The N4174 paper is more an exploration of possibilities for a uniform calling syntax than a formal proposal. There are various aspects that have to be carefully considered, especially regarding how to treat f(x, y). The N4165 paper makes a good case for the unified call syntax, explains the benefits better and argues against treating f(x) as equivalent to x.f(). You should go ahead and read the two papers for detailed information. However, I sincerely hope that one day this will be accepted and become a core feature of the C++ language.


Several CTPs for Visual Studio 2014 have been released so far. The 3rd and 4th CTPs can actually be used with a Windows Azure Virtual Machine. If you have a Windows Azure account you can go ahead and create a VM. If you are an MSDN subscriber or you have a trial account, you have a number of free hours that you can use, so you won’t have to pay anything to run the VS2014 CTP in the cloud.

NOTE: Details about the limits and cost in Windows Azure are available here (also see this article).

Below is a step-by-step walkthrough of how to create and start a VM for Visual Studio 14 CTP 4.

Step 1: Log in to Windows Azure.

Step 2: Create a new virtual machine.

In the Azure portal press the New button.

Select Compute > Virtual Machine > From Gallery

Choose the Visual Studio 2014 CTP 4 Image

Select the virtual machine configuration

Step 3: Wait until the virtual machine starts up.

This may take a few minutes.

Step 4: Connect remotely to the virtual machine.

See How to Log on to a Virtual Machine Running Windows Server.

Note: You have to authenticate with the username (make sure you use the format machinename\username) and the password you created, not the account you are initially prompted for in the RDP window.


Step 5: Launch and use Visual Studio 2014 CTP.



The C++ Ten Commandments

This article presents a list of good practices for C++ development. Obviously there are many other good practices that one should adhere to and perhaps some of them are more important than the ones in this list. The following list is a personal recommendation and should be taken as is.

Thou shalt follow the Rule of Five

Before the advent of C++11 this was known as the Rule of Three. The rule said that if a class needs to define one of the following members, it has to define all of them: destructor, copy constructor and copy assignment operator. When C++11 was released it introduced move semantics, and the old Rule of Three was extended to include two new special functions: the move constructor and the move assignment operator.

All these are special functions. If you don’t implement them explicitly, the compiler provides a default implementation. Make sure that when you implement one of them you implement them all. (There are exceptions to this rule, but that is beyond the scope of this article.)

Thou shalt use almost always auto (judiciously)

Using auto for type deduction of variables is a key feature of C++11. Using auto for variables instructs the compiler to deduce the type in the same manner it deduces the type of parameters of function templates (with a small exception related to std::initializer_list). There are two ways to declare variables using auto:
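auto x = 42;        // the type is deduced from the initializer
auto y = int{ 42 }; // committing to a specific type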

There are some gotchas though that you should be aware of:

  • auto does not retain constness/volatileness (const and volatile) or reference-ness (& and &&). Here is an example:
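    int const c = 42;
    auto a = c;           // a is int, not int const
    int const & cr = c;
    auto ar = cr;         // ar is int, not int const &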

    If you expect that the type of a is int const and the type of ar is int const& then you’re wrong. They are both simply int. You need to explicitly add const and & to retain the const-ness and reference-ness.

  • auto captures initializer_list as a type. Here is an example:
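    auto a = 42;        // int
    auto b = { 42 };    // std::initializer_list<int>
    auto c = { 4, 2 };  // std::initializer_list<int>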

    The type of a is int, but the type of both b and c is initializer_list<int>.

  • the form where you commit to a type does not work with multi-word built-in types, nor with elaborated type specifiers (e.g. “struct tag”):
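    auto l = long long{ 42 };   // error: multi-word built-in type name
    auto t = struct tag{ 42 };  // error: elaborated type specifier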

Though many consider auto a nice feature to save typing, because you don’t have to write long type names, that is probably the least important reason to use it. There are more important reasons, such as correctness, robustness and maintainability. When you specify variable types explicitly you can leave the variables uninitialized, but when you use auto you must initialize the variable (so that the compiler can infer the type). Using auto thus helps avoid uninitialized variables. It also helps programming towards interfaces, not implementations. Most of the time you don’t care about the type, you only care about what a variable does. And when you still care about the type, you can still use auto.

C++14 introduces two new features that extend the way auto can be used: function return type deduction (which allows auto to be used for the return type) and generic lambdas (which allow lambda parameters to be declared with the auto type specifier). There are various scenarios and pros and cons for using auto as the return type of a function. Most of them are probably a matter of personal preference. I personally do not favor the use of auto as a function return type, mainly for readability and documentation (reference documentation where all functions return auto is not very helpful). Unlike variables, where the type is often not important, I believe the return type of a function is important most of the time.

This is a large and complex subject and I recommend some additional readings: Auto Variables, Part 1, Auto Variables, Part 2, AAA Style (Almost Always Auto).

Thou shalt use smart pointers

Use of raw pointers in C++ (which implies explicit allocation and release of memory) is one of the most hated features of the language (despite the advantages they offer) because it is one of the most important sources of bugs in C++ development. Developers tend to forget to release memory when it is no longer necessary. Smart pointers come to the rescue. They look and behave like naked pointers, by supporting operations like dereferencing (operator *) and indirection (operator ->), but they do more than just that, hence the adjective “smart”. A smart pointer is a proxy to a raw pointer and basically handles the destruction of the object referred to by the raw pointer. The standard library provides a std::shared_ptr class for objects whose ownership must be shared and a std::unique_ptr for objects that do not need shared ownership. The first one destroys the pointed-to object when the last shared pointer that points to the object is destroyed; the second when the smart pointer is destroyed (since it retains sole ownership of the object). There is another smart pointer, std::weak_ptr, that holds a non-owning reference to an object managed by a std::shared_ptr. These smart pointers provide a deterministic way of destroying objects in a safe manner, avoiding the memory leaks that are so easily introduced with raw pointers. Smart pointers can be created in an exception-safe manner by using the std::make_shared and std::make_unique functions from the standard library.

Thou shalt use smart classes/resources (RAII)

What I call “smart class” or “smart resource” is known as RAII (Resource Acquisition Is Initialization), CADRe (Constructor Acquires, Destructor Releases) or SBRM (Scope-based Resource Management). I don’t like any of those names because they are so cryptic. Inspired by the term smart pointers, I like to call RAII smart resources. RAII is a programming idiom for exception-safe resource management. Acquisition of resources is done in the constructor and the release in the destructor, thus avoiding resource leaks. It is a generalization of smart pointers, where the resource is memory. In the case of RAII it can be anything: a system handle, a stream, a database connection, etc.

Using smart pointers is not enough if you do not take the extra step and use smart resources too. Consider the following example where we write to a file:
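Something along these lines, using raw Win32 APIs (a sketch; the file name and data are illustrative):

#include <windows.h>

void write_to_file()
{
   HANDLE file = CreateFile(L"sample.txt", GENERIC_WRITE, 0, nullptr,
                            CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
   if (file != INVALID_HANDLE_VALUE)
   {
      char const data[] = "sample data";
      DWORD written = 0;
      WriteFile(file, data, sizeof(data) - 1, &written, nullptr);

      // if an exception is thrown before the next line, the handle leaks
      CloseHandle(file);
   }
}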

This code has several issues. It is possible to forget to close the file handle (especially in a larger code base). Even if you close the handle, the code is not exception safe, and the handle will not be closed if an exception occurs between opening the file and closing it.

These problems can be avoided by using a smart handle resource. The implementation below is the bare minimum; a real implementation may be more elaborate.
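(The class name smart_handle is illustrative.)

class smart_handle
{
   HANDLE handle_;
public:
   explicit smart_handle(HANDLE handle) : handle_(handle) {}
   ~smart_handle()
   {
      if (handle_ != INVALID_HANDLE_VALUE)
         CloseHandle(handle_);
   }

   // non-copyable: the handle has a single owner
   smart_handle(smart_handle const &) = delete;
   smart_handle & operator=(smart_handle const &) = delete;

   HANDLE get() const { return handle_; }
   explicit operator bool() const { return handle_ != INVALID_HANDLE_VALUE; }
};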

The previous code can now change to:
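void write_to_file()
{
   smart_handle file(CreateFile(L"sample.txt", GENERIC_WRITE, 0, nullptr,
                                CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr));
   if (file)
   {
      char const data[] = "sample data";
      DWORD written = 0;
      WriteFile(file.get(), data, sizeof(data) - 1, &written, nullptr);
   }
   // the handle is closed by smart_handle's destructor, even during stack unwinding
}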

Not only has the client code become simpler, it is also safer. The file handle is closed in the smart handle’s destructor. That means you cannot forget to close it, and also, in case of an exception, it is guaranteed to be closed, because the destructor of the smart handle will be called during stack unwinding.

Smart pointers and smart resources (RAII) enable you to write exception-safe, leak-free code, with deterministic release of resources.

Thou shalt use std::string

std::string (and its wide-character counterpart std::wstring) should be the default and de facto type for strings in C++. Using char* like in C has many drawbacks: you must allocate memory dynamically and make sure you release it correctly, you must have arrays large enough to accommodate actual values (what if you declared an array of 50 chars and you read 60?), declarations are prone to being ill-formed (char* name = "marius"; is incorrect and triggers a runtime error if you attempt to change the data) and they are not exception safe. The string classes from the standard library avoid all these problems: they handle memory automatically, can be modified, can be resized, they work with the standard algorithms, and if an exception occurs the internal buffer is automatically freed when the object is destroyed during stack unwinding.

Thou shalt use standard containers

std::string is not a built-in type, but a special container for characters. The standard library provides other general-purpose containers, including std::vector, std::list, std::array, std::map, std::set and std::queue. You should use them according to your needs. std::vector should be the default container (if the size is fixed and known at compile time you should consider using std::array instead). These containers, used appropriately, provide great performance and can be used uniformly with the standard algorithms. In practice it is rare that these containers do not suit your needs and you have to rely on other special implementations for better performance.

Thou shalt use standard algorithms and utilities

The C++ standard library provides many general-purpose algorithms that you can use in your code. Don’t reinvent the wheel. If you need to count, search, aggregate, transform, generate, sort or perform many other operations, you’ll find something already available in the standard library. Most algorithms are available in the <algorithm> header, but some of them can be found in the <numeric> header. Also many utility functions are available in the standard, such as functions to convert between string and numeric types. See the <cstdlib> header for such utilities.

Thou shalt use namespaces

Unfortunately, namespaces are a C++ feature that is not used as much as it should be. Like in any other language that supports them, namespaces provide a way to logically group functionality into units, but they also help you avoid name collisions (because you cannot have two symbols with the same name in the same namespace, but you can in two different namespaces).

Though library implementers do use namespaces (for the reason mentioned above), I’ve seen little use of them in line-of-business applications. A reason may be that IDEs like Visual Studio do not promote namespaces. No project or item templates for C++ in Visual Studio use namespaces, and no code generated by a C++ wizard is placed inside a namespace. In fact, if you put MFC code into namespaces the Visual Studio wizards will no longer work with your code.

Do use namespaces. They help you group your code logically, and they help you avoid name collisions.

Thou shalt use const

The const keyword can be used on variables and function parameters to indicate they are immutable, but also on non-static member functions to indicate that a function cannot alter member variables of a class, nor can it call any non-const member of the class.

The const keyword should be used on all variables that do not change their value and on all member functions that do not alter the state of the object. This not only documents your code better, but also allows the compiler to immediately flag incorrect use of immutable variables or functions, and gives it a chance to better optimize your code.

Let’s consider the following (dummy) example of a function:
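int foo(int a)
{
   int x = a + 42;
   return x * 2;
}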

Neither the parameter a nor the variable x changes its value, so both should be declared as const.

It is very easy to omit the const keyword, and in practice I have seen little use of it. I strongly recommend making the effort to put const wherever possible, to ensure the const correctness of your programs.

Thou shalt use virtual and override (and final)

This may seem of little importance compared to the other practices in this list, but I personally find it important, especially for code readability and maintainability. Unfortunately, C++ does not force you to specify the virtual keyword on derived classes in a hierarchy to indicate that a function is overriding a base class implementation. Having virtual in the class where the function is first declared is enough. Many developers tend to omit the virtual keyword on derived classes, and that makes it hard to figure out, especially in large code bases or large hierarchies, which function is virtual and actually overrides a base implementation.

C++11 added two new contextual keywords, override and final, to explicitly indicate that a virtual function overrides another implementation, or that a virtual function can no longer be overridden. These should be used on all virtual methods accordingly.


I am investigating using PhoneGap as a platform for building mobile apps for several operating systems, including Android. While doing so I have run into various issues, and this article is intended to help others avoid the same headaches (with a focus on Android).

PhoneGap vs Cordova

The PhoneGap installation documentation says you can install PhoneGap using this command (I will come back to that later):
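npm install -g phonegap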

But if you check the documentation or follow various online tutorials you’ll notice they say you should install and use Cordova. So what is PhoneGap, what is Cordova, and which one should you use?

The best explanation comes from PhoneGap itself:

PhoneGap is a distribution of Apache Cordova. You can think of Apache Cordova as the engine that powers PhoneGap, similar to how WebKit is the engine that powers Chrome or Safari.

Over time, the PhoneGap distribution may contain additional tools that tie into other Adobe services, which would not be appropriate for an Apache project. For example, PhoneGap Build and Adobe Shadow together make a whole lot of strategic sense. PhoneGap will always remain free, open source software and will always be a free distribution of Apache Cordova.

Currently, the only difference is in the name of the download package and will remain so for some time.

In summary, PhoneGap is a distribution of Cordova and for the time being there is no other difference than the name of the package.

(Bonus read: PhoneGap or Cordova? Don’t confuse and tell me the differences!)

The question is then, which one should you use? If you plan to use Adobe’s utilities and services such as PhoneGap Build, then you should use PhoneGap. Otherwise you could pick Cordova. That is what I picked.

(Bonus read: Apache Cordova vs Adobe PhoneGap: the differences and which one to use)

Pre-requisites

In order to install either PhoneGap or Cordova you need to first install Node.js. Node.js is a cross-platform runtime environment and a library for running applications written in JavaScript outside the browser.

  1. download and install Node.js
  2. add Node.js installation folder (e.g. C:\Program Files\nodejs) to the PATH environment variable

Installing Cordova

To install Cordova run this command in a console:
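npm install -g cordova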

The version I have installed is 3.5.0. The rest of the article relates to this version of Cordova.

Dependencies for Android

In order to be able to create and build for Android you need additional components.

  • Java
  • Apache Ant, a Java library and command-line tool for building software
  • Android SDK that provides libraries and tools for building, testing and debugging apps for Android

However, just installing these components is not enough. You need to set up several environment variables, as they are required by Cordova/PhoneGap.

  • create a variable called JAVA_HOME and set it to the Java installation folder (e.g. C:\Program Files\Java\jdk)
  • create a variable called ANT_HOME and set it to the Apache Ant installation folder
  • create a variable called ANDROID_HOME and set it to the Android SDK installation folder

After you have created these environment variables you also need to update the PATH variable to enable all these command line tools to be run from a console. Add the following paths to PATH:
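The additions would look like this (the exact folders depend on your installation):

%JAVA_HOME%\bin
%ANT_HOME%\bin
%ANDROID_HOME%\tools
%ANDROID_HOME%\platform-tools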

Just having the Android SDK installed is not enough. You need to use the Android SDK Manager to install the appropriate tools for your Cordova version. For 3.5.0 the target API is 19 (or Android 4.4.2). When you install the tools make sure you also install a system image so you can create an Android emulator later. (Note that the installation may take a while.)
The next step is creating a virtual device for Android. You can run the following command to open up the Android Virtual Device Manager.
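android avd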


Having done all this you can go ahead and create a cordova/phonegap application.

Create a HelloWorld application

To create, build and run a first Cordova application, execute the following commands in a console (collected in the sketch after this list):

  1. create an application called HelloWorld
  2. edit the application index.html page, replacing the text of the “event received” class paragraph:

    with

  3. add the Android platform to the application

  4. build the application for Android:

  5. run the application in an Android emulator
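The commands for steps 1 and 3-5 could look like this (the package id com.example.helloworld is illustrative):

cordova create HelloWorld com.example.helloworld HelloWorld
cd HelloWorld
cordova platform add android
cordova build android
cordova emulate android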

Here is a screenshot of the Android application running in the emulator.

Troubleshooting

If you missed any of the steps described in this article you will run into various errors when you try to create, edit, build or emulate your application. Below are several possible problems you may encounter and their solution.

Missing Apache Ant

When you try to add a platform to your application you may get this error message:

Error : Executing command ‘ant’, make sure you have ant installed and added to your path.

This is either because you did not install Apache Ant or because you have not setup the environment variables as described above.

Error querying Android targets

When you try to add a platform to your application you may get this error message:

Checking Android requirements…
(Error: An error occurred while listing Android targets)

This is either because you did not install the Android SDK or because you have not setup the environment variables as described above.

Missing Android target

When you try to add a platform to your application you may get this error message:

Error: Please install Android target 19 (the Android newest SDK). Make sure you have the latest Android tools installed as well. Run “android” from your command-line to install/update any missing SDKs or tools.

This is because you do not have the appropriate Android tools required by the installed cordova/phonegap version. For Cordova 3.5.0 the required platform tools are version 19 or Android 4.4.2. You can use the SDK Manager to install the appropriate tools.

No emulator available

When you try to emulate your application you may get this error message:

ERROR : No emulator images (avds) found, if you would like to create an avd follow the instructions provided here: http://developer.android.com/tools/devices/index.html
Or run ‘android create avd –name –target ‘ in on the command line.

This is because no emulator is defined. You need to use the Android Virtual Device Manager to create one or more emulators.

Using Eclipse

You probably do not want to do everything from the command line and plan to use an IDE. Eclipse comes bundled with everything you need for developing for Android. See the following references for using Android with Cordova.


Visual Studio “14” CTP ships with a refactored C Runtime. The first thing you’ll notice is that msvcrXX.dll has been replaced by three new DLLs: appcrtXX.dll, desktopcrtXX.dll and vcruntimeXX.dll (where XX stands for the version number, so in this version it’s appcrt140.dll, desktopcrt140.dll and vcruntime140.dll).

You can see in this image that both desktopcrt140.dll and vcruntime140.dll depend on appcrt140.dll.

These three new DLLs export run-time routines in different categories, with some of them overlapping, as shown by the table below (assembled by directly analyzing the exports of the three modules).


[Table: function categories exported by Appcrt140.dll, Desktopcrt140.dll and Vcruntime140.dll; the per-DLL check marks are not reproduced here. The categories were: Buffer Manipulation, Byte Classification, Character Classification, Console and Port I/O, Data Alignment, Data Conversion, Debug Routines, Directory Control, Error Handling, Exception Handling, File Handling, Floating-Point Support, Low-Level I/O, Process and Environment Control, Robustness, Searching and Sorting, Stream I/O, String Manipulation, System Calls, Time Management.]

Breaking the CRT routines into several DLLs is not the only change. The CRT has been rewritten for safety and const correctness. Many of the routines have been rewritten in C++. Here is a random example: the _open function, which lived in open.c, was implemented like this in Visual Studio 2013:

In Visual Studio “14” CTP it is available in appcrt\open.cpp and looks like this:

UPDATE

To read more about the refactoring see the VC++ team’s blog posts:


Visual Studio 2012 introduced a new framework for writing debugger visualizers for C++ types, which replaced the old autoexp.dat file. The new framework offers XML syntax, better diagnostics, versioning and multiple-file support.

Visualizers are defined in XML files with the extension .natvis. These visualizers are loaded each time the debugger starts. That means that if you make a change to the visualizers, it is not necessary to restart Visual Studio; just restart the debugger (for instance, detach and re-attach the debugger to the process you debug).

These files can be located under one of these locations:

  • %VSINSTALLDIR%\Common7\Packages\Debugger\Visualizers (requires admin access)
  • %USERPROFILE%\My Documents\Visual Studio 2012\Visualizers\
  • VS extension folders

In Visual Studio “14” CTP (in response to a UserVoice request) these files can also be added to a Visual C++ project for easier management and source control integration. All you have to do is add the .natvis file to your .vcxproj file.

Here is an example. Suppose we have the following code:
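A minimal stand-in for that code (the member names x and y are assumptions that the visualizer below relies on):

struct point
{
   int x;
   int y;
};

int main()
{
   point p{ 1, 2 };
   return 0;
}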

If you run this under the debugger you can inspect the value of p, and it looks like this:

To change the way the point objects are visualized create a file called point.natvis with the following content:
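A minimal visualizer for the point class above might look like this:

<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="point">
    <DisplayString>({x}, {y})</DisplayString>
    <Expand>
      <Item Name="x">x</Item>
      <Item Name="y">y</Item>
    </Expand>
  </Type>
</AutoVisualizer>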

Add this file to the project.
When you run the application under the debugger again, the point object is visualized according to the per-project .natvis file.

UPDATE
There are two things to note:

  • changes in the natvis files are now picked up automatically by the debugger; you no longer need to stop the debugging session and then start again if you make changes to a natvis file
  • natvis files from the project are evaluated after all the other files from the other possible locations; that means you can override existing (general) visualizers with project-specific visualizers

For more see Project Support for Natvis.


Visual Studio 2012 provides support for new features, such as code review and feedback through the use of the Work Item tracking system in TFS. (You can read more about it in this article New Code Review feature in Visual Studio 2012).

However, to be able to use these features you must use a process template that supports them. If you try to access these features without upgrading the process template you get errors.

This feature can’t be used until your Team Foundation administrator has enabled it on the team project.

To use My Work to multi-task and manage your changes, you must enable the feature on the server.
Click here to enable the new features


The error in the verification step for configuring the team project happens because an old process template is in use.

If your current process template is MSF for Agile Software Development version 4.x then you need to follow the steps in this article: Update a Team Project Based on an MSF v4.2 Process Template.

To be able to update, you first have to download the latest version of the process template. You can do this from Visual Studio: go to Team > Team Project Collection Settings > Process Template Manager and download the template.

After you have the process template files, go ahead and update according to the steps defined in the aforementioned article. However, there is a missing command in the first step of the process. You need to change an additional system field besides those mentioned in the article:

If you need to do this for multiple projects then you’ll have to run most of these steps for each project. So here are two batch files with commands you need to run:

  • update the Team Project collection (run only once):

    Note: Make sure you set the correct URL to your collection and replace TemplateDir with the actual path of the process template that you downloaded.

  • Update the Team Project (run once for each project)

    You execute the batch file passing the name of the project (in quotes if it contains spaces).

After these commands have executed successfully you can go ahead and use the new features.


My first Windows Store app (for Windows 8.1) is now available in the Windows Store. It’s called Your Chemical Name and shows names (and text) using chemical element symbols, in the Breaking Bad style.


The application allows you to:

  • customize the appearance of the text, colors and background
  • customize the position of the text on the background
  • save the image to a file
  • post the image to a Facebook album
  • share the image with other apps


You can save the images to disk, or share them on Facebook or with apps supporting the Windows Share charm.


Here are a few screenshots:

More about the application here.

Download Your Chemical Name from Windows Store.


Windows 8 features a Settings charm that displays both application (the top part) and system (the bottom part) settings (you get it by swiping from the edge of the screen). The system provides two entry points, Permissions and Rate and Review, the latter only for applications installed through the store.

You can customize the Settings charm by adding new entry points. For instance, you may want to add an About pane. If your application uses network capabilities then you have to add a privacy policy, otherwise your application will not pass Windows Store certification.


In this post I will show how you can add new entries to the settings charm for Windows 8.1 applications (this won’t work for Windows 8 applications). We have to use two classes:

  • SettingsPane: enables the app to control the Settings Charm pane. The app can add or remove commands, receive a notification when the user opens the pane, or open the pane programmatically.
  • SettingsFlyout: represents a control that provides in-context access to settings that affect the current app. This class is new in Windows 8.1.

The following code adds a new entry to the settings pane, called Privacy policy, and provides a handler for the command. In the handler we create a new instance of a SettingsFlyout and show it.
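A minimal C# sketch of that wiring (the command id, file name and flyout contents are assumptions):

// using Windows.UI.ApplicationSettings, Windows.UI.Xaml, Windows.UI.Xaml.Controls,
// Windows.ApplicationModel, Windows.Storage
SettingsPane.GetForCurrentView().CommandsRequested += (s, e) =>
{
   e.Request.ApplicationCommands.Add(new SettingsCommand(
      "privacyPolicy", "Privacy policy", async command =>
      {
         // read the policy text from a file shipped with the package
         var file = await Package.Current.InstalledLocation
                                 .GetFileAsync(@"Settings\PrivacyPolicy.txt");
         var text = await FileIO.ReadTextAsync(file);

         var flyout = new SettingsFlyout
         {
            Title = "Privacy policy",
            Content = new TextBlock { Text = text, TextWrapping = TextWrapping.Wrap }
         };
         flyout.Show();
      }));
};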

The text of the privacy policy is kept in a text file under the Settings folder. We asynchronously open and read the content of the file and when the text is available we create a new TextBlock control and use it as the content of the flyout content control.

Then we have to initialize the settings pane when the application starts.

When you start the application and swipe from the right edge of the screen, the charms bar shows up. Opening the Settings charm will now show two entries for the application: Privacy Policy and Permissions.

The next sample shows how to add an About page. It’s actually very similar.

Notice that the entries in the settings charm appear in the order they were added.

The content of the flyout can be any visual object (the simple TextBlock is used only for demo purposes). It is also possible to customize the flyout header, icon, background, etc. Here is the same About page with additional flyout settings.


Here is some additional reading: Guidelines for app settings (Windows Store apps).


In WPF, Silverlight and Windows Phone it is possible to render a visual object into a bitmap using the RenderTargetBitmap class. This functionality, which I find pretty basic, was not available for Windows Store applications. Fortunately, Windows 8.1 provides it for Windows Store applications too, through the same RenderTargetBitmap class.

There are some limitations though:

  • it should be used in the code behind (not declared in XAML) because you have to call RenderAsync
  • collapsed visual objects are not rendered (only visible ones)
  • in rare circumstances the content can be lost due to the interaction with lower level systems; in this case a specific exception is triggered
  • the rendered target bitmap does not automatically scale when the current DPI settings change
  • the maximum rendered size of a XAML visual tree is restricted by the maximum dimensions of a DirectX texture

Here is a demo Windows Store application that has several controls and a button that, when pressed, takes a screenshot of the area shown in red (it’s a grid). The bitmap is saved on disk, but also displayed as the source of the image control shown in the preview area.


The handler for the button’s Click event looks like this:
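A C# sketch (the handler name is illustrative; controlsGrid and imagePreview are the controls described below):

private async void OnSaveScreenshot(object sender, RoutedEventArgs e)
{
   // render the grid to a bitmap, save it, then show it in the preview image
   var bitmap = await SaveScreenshotAsync(controlsGrid);
   if (bitmap != null)
   {
      imagePreview.Source = bitmap;
   }
}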

SaveScreenshotAsync is an async method that takes a reference to the FrameworkElement to be rendered to a bitmap (in this case the controlsGrid) and returns a Task<RenderTargetBitmap> that can be awaited. As soon as we have the bitmap we set it as the source for the image control (imagePreview).


SaveScreenshotAsync first prompts the user to select a destination file for the rendered bitmap. When the file is available, it calls SaveToFileAsync to render the bitmap and write it to the file.

SaveToFileAsync is an async method that takes the FrameworkElement to be rendered to a bitmap and the StorageFile where the bitmap is to be saved, and returns a Task<RenderTargetBitmap> that can be awaited. The file is opened asynchronously for read-write access and the returned IRandomAccessStream is passed further, together with the framework element and the bitmap encoder id (which specifies how the bitmap should be encoded, i.e. BMP, JPEG, PNG, GIF, etc.), to CaptureToStreamAsync.

CaptureToStreamAsync creates a new RenderTargetBitmap object and calls RenderAsync to render the visual tree of the framework element to a bitmap. After the bitmap is rendered it retrieves the image as a buffer of bytes in the BGRA8 format. It then asynchronously creates a BitmapEncoder for the IRandomAccessStream stream it received as an argument, calls SetPixelData to set the pixel data (notice the BitmapPixelFormat.Bgra8 parameter that matches the pixel format returned by GetPixelsAsync) and finally asynchronously flushes all the image data, basically writing it to the file. It then returns the RenderTargetBitmap object it created, which is eventually used as the source for the image control.
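Pieced together from that description, a C# sketch of CaptureToStreamAsync (the exact signature is an assumption):

// using System; using System.Threading.Tasks;
// using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.ToArray()
// using Windows.Graphics.Display; using Windows.Graphics.Imaging;
// using Windows.Storage.Streams; using Windows.UI.Xaml;
// using Windows.UI.Xaml.Media.Imaging;
private static async Task<RenderTargetBitmap> CaptureToStreamAsync(
   FrameworkElement element, IRandomAccessStream stream, Guid encoderId)
{
   // render the visual tree of the element to a bitmap
   var bitmap = new RenderTargetBitmap();
   await bitmap.RenderAsync(element);

   // retrieve the rendered image as a buffer of bytes in the BGRA8 format
   var pixels = await bitmap.GetPixelsAsync();

   // encode the pixel data and write it to the stream
   var dpi = DisplayInformation.GetForCurrentView().LogicalDpi;
   var encoder = await BitmapEncoder.CreateAsync(encoderId, stream);
   encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore,
                        (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight,
                        dpi, dpi, pixels.ToArray());
   await encoder.FlushAsync();

   return bitmap;
}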

Here is how the saved JPEG image (also seen in the preview screenshot above) looks:

You can check the source code of the attached WinRT Screenshot demo. It requires Visual Studio 2013 and Windows 8.1.
