In my previous post I showed how to build a C++ application with the C++ REST SDK that fetches search results from a search engine. In this post I will go a step further and develop a client-server application from scratch, using version 1.1.0 of the SDK. This version features an HTTP listener implementation (still in an experimental phase). Notice that, for the time being, the 1.1.0 SDK release does not work with Visual Studio 2013 Preview. The samples in this post are built with Visual Studio 2012.

Overview of the problem to solve

The server manages a dictionary of key-value pairs (both strings) and supports several HTTP request methods:

  • GET: retrieves all the key-value pairs from the dictionary.
    The response is a JSON object representing the key-value pairs (e.g. {"one" : "100", "two" : "200"}).
  • POST: retrieves the values of the specified keys from the dictionary.
    The request is a JSON array of strings (e.g. ["one", "two", "three"]).
    The response is similar to that of the GET method, except that only the requested keys are returned.
  • PUT: inserts new key-value pairs into the dictionary. If a key already exists, its value is updated.
    The request is a JSON object representing pairs of keys and values (e.g. {"one" : "100", "two" : "200"}).
    The response is a JSON object pairing each key with the result of the action, such as addition or update (e.g. {"one" : "<put>", "two" : "<updated>"}).
  • DEL: deletes the specified keys from the dictionary.
    The request is a JSON array of strings (e.g. ["one", "two", "three"]).
    The response is a JSON object pairing each key with the result of the action, such as success or failure (e.g. {"one" : "<deleted>", "two" : "<failed>"}).

Notice that the server implements both GET and POST. The GET method is supposed to request a representation of the specified URI. Though it is theoretically possible for a GET request to carry a body, in practice it should be ignored; the C++ REST library actually throws an exception if you make a GET request with a body. Therefore GET is used to return the entire content of the dictionary, while POST, which supports a body, returns only the requested key-value pairs.

The client can make HTTP requests to the server to add or update key-value pairs, and to fetch or delete existing ones.

All communication, both requests and responses, is done with JSON values.

The server implementation

On the server side we have to do the following:

  • instantiate an http_listener object, specifying the URI where it should listen for requests.
  • provide handlers for the HTTP request methods for the listener.
  • open the listener and loop to wait for messages.

The core of the server application is shown below (except for the request handlers).
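A minimal sketch of such a server core, written against a recent cpprestsdk API (the 1.1.0 experimental listener differs in some details; the URI is an arbitrary choice), could look like this:

```cpp
#include <cpprest/http_listener.h>
#include <cpprest/json.h>
#include <iostream>
#include <map>
#include <string>

using namespace web;
using namespace web::http;
using namespace web::http::experimental::listener;

// The in-memory dictionary; its content lives only as long as the server runs.
std::map<utility::string_t, utility::string_t> dictionary;

// Placeholder handlers; the GET handler is discussed in the next section,
// and the others follow the same shape.
void handle_get(http_request request)  { request.reply(status_codes::OK); }
void handle_post(http_request request) { request.reply(status_codes::OK); }
void handle_put(http_request request)  { request.reply(status_codes::OK); }
void handle_del(http_request request)  { request.reply(status_codes::OK); }

int main()
{
    http_listener listener(U("http://localhost:34568/restdemo"));

    // register a handler for each supported HTTP method
    listener.support(methods::GET,  handle_get);
    listener.support(methods::POST, handle_post);
    listener.support(methods::PUT,  handle_put);
    listener.support(methods::DEL,  handle_del);

    listener.open().wait();
    std::cout << "Listening; press Enter to stop." << std::endl;
    std::string line;
    std::getline(std::cin, line);
    listener.close().wait();
}
```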

In this simple implementation the dictionary is a std::map. Its content is not persisted to disk, so it starts empty each time the server runs.

Let’s now look at the handlers. As mentioned earlier, the GET method is a bit different from the others: a GET request should return all the key-value pairs in the server’s dictionary. Its implementation looks like this:

It iterates through the dictionary and puts its key-value pairs into a json::value::field_map. That object is then sent back to the client.
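SDK 1.1.0 assembles the response as a json::value::field_map; in later cpprestsdk releases the same handler is written with json::value objects, as in this sketch:

```cpp
#include <cpprest/http_listener.h>
#include <cpprest/json.h>
#include <map>

using namespace web;
using namespace web::http;

// the server's dictionary (a std::map, as described above)
extern std::map<utility::string_t, utility::string_t> dictionary;

void handle_get(http_request request)
{
    // build a JSON object { "key" : "value", ... } from the whole dictionary
    auto response = json::value::object();
    for (auto const & kv : dictionary)
        response[kv.first] = json::value::string(kv.second);

    request.reply(status_codes::OK, response);
}
```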

The POST, PUT and DEL methods are a bit more complicated, because they all receive a JSON value specifying either the keys to fetch or delete, or the key-value pairs to add or update in the dictionary. Since some code would otherwise be duplicated several times, I have created a generic method for handling requests; it takes a function that evaluates the JSON request value and builds the JSON response value.

The handlers for POST, PUT and DEL then call this generic method, providing a lambda with the actual core implementation of each request.

And that is all for the server.

The client implementation

On the client side we need an http_client object to make HTTP requests to the server. It has an overloaded method request() that allows specifying the request method, a path and, for instance, a JSON value. No JSON value is sent if the method is GET (or HEAD). Since the answer to each request is a JSON value, I have created a method called make_request() that dispatches the request and, when the response arrives, fetches the JSON value and displays it in the console.

The core of the client code looks like this:
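A sketch along those lines, again written against a recent cpprestsdk API (the base URI and path are assumptions matching the server sketch):

```cpp
#include <cpprest/http_client.h>
#include <cpprest/json.h>

using namespace web;
using namespace web::http;
using namespace web::http::client;

// Dispatch a request and print the JSON answer once it arrives.
// No body is attached for GET/HEAD, since the library rejects that.
void make_request(http_client & client, method mtd, json::value const & value)
{
    auto request = (mtd == methods::GET || mtd == methods::HEAD)
        ? client.request(mtd, U("/restdemo"))
        : client.request(mtd, U("/restdemo"), value);

    request
        .then([](http_response response) { return response.extract_json(); })
        .then([](json::value const & answer) { ucout << answer.serialize() << std::endl; })
        .wait();
}

int main()
{
    http_client client(U("http://localhost:34568"));

    auto putvalue = json::value::object();
    putvalue[U("one")] = json::value::string(U("100"));
    putvalue[U("two")] = json::value::string(U("200"));
    make_request(client, methods::PUT, putvalue);             // add some pairs

    make_request(client, methods::GET, json::value::null());  // fetch everything

    auto delvalue = json::value::array();
    delvalue[0] = json::value::string(U("one"));
    make_request(client, methods::DEL, delvalue);             // delete one key
}
```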

In the main() function I then just make a series of requests to the server, putting, fetching and deleting key-value pairs from the server’s dictionary.

The client and server in action

You need to start the server first and then run the client. The output from running the client is:

On the server console the output is:


CPtrArray is a nasty MFC container that should not be used. However, if you deal with legacy code you may not have a choice and have to work with it. Unfortunately, the Visual Studio debugger is not able to display its elements, since they are pointers to void and can point to anything. In this post I will explain how to write a visualizer for Visual Studio 2012 to address this problem.


Visual Studio 2012 has introduced a new framework for writing debugger visualizers for C++ types, replacing the old autoexp.dat (which you might be familiar with). The new framework offers XML syntax, better diagnostics, versioning and multiple-file support.

Visualizers are defined in XML files with the extension .natvis. These files are loaded each time the debugger starts. That means that if you change a visualizer, it is not necessary to restart Visual Studio, just restart the debugger (for instance, detach and re-attach the debugger to the process you are debugging). The files can be located in one of these locations:

  • %VSINSTALLDIR%\Common7\Packages\Debugger\Visualizers (requires admin access)
  • %USERPROFILE%\My Documents\Visual Studio 2012\Visualizers\
  • VS extension folders

You can read how to write visualizers in these articles:

Writing the visualizer

There are two things you must do to enable the Visual Studio debugger to display CPtrArrays in a nice way.

The first step is to define a type derived from CPtrArray. This type will not be used in code, but will allow the visualizer to figure out the type of the array elements.

The second step is to create a .natvis file (let’s call it mfc.natvis) under %USERPROFILE%\My Documents\Visual Studio 2012\Visualizers\ with the following content:
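Based on CPtrArray's members m_pData (a void**) and m_nSize, and assuming a helper template along the lines of `template <typename T> struct CPtrArray : public ::CPtrArray {};` (never instantiated in code, only used in watch expressions), a sketch of such a visualizer could be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="CPtrArray&lt;*&gt;">
    <DisplayString>{{size = {m_nSize}}}</DisplayString>
    <Expand>
      <IndexListItems>
        <Size>m_nSize</Size>
        <!-- m_pData holds void* slots; reinterpret each slot as a pointer to
             the template argument and show the pointed-to object -->
        <ValueNode>*(($T1 *)m_pData[$i])</ValueNode>
      </IndexListItems>
    </Expand>
  </Type>
</AutoVisualizer>
```

The exact expressions may need adjusting for your MFC version.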

And that’s all. Let’s see how this works. Assume we have the following code:

In the Watch window, cast the pointer to the array object to CPtrArray&lt;T&gt;. This is where the template type defined above comes in. Even though your array is not an instance of CPtrArray&lt;T&gt;, it will still work, since the derived type does not add anything.

As you can see in the screenshot, the content of the CPtrArray is nicely expanded.

You probably noticed the nd specifier in the watch expression. This specifier (probably standing for “no derived”) displays only the base-class info of the object, not the derived parts (see Format Specifiers in C++). Without this specifier, when the MFC symbols are loaded (for mfc110xx.dll) the array is not visualized correctly.

A simpler hard-coded solution

If you don’t want to (or cannot) add the generic type CPtrArray&lt;T&gt;, you can still accomplish the same thing, but with some drawbacks.

In this case the .natvis file should look like this:
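A sketch of this hard-coded variant (again assuming the m_pData/m_nSize members of CPtrArray):

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="CPtrArray">
    <DisplayString>{{size = {m_nSize}}}</DisplayString>
    <Expand>
      <IndexListItems>
        <Size>m_nSize</Size>
        <!-- TYPE is a placeholder for the element type stored in the array -->
        <ValueNode>*((TYPE *)m_pData[$i])</ValueNode>
      </IndexListItems>
    </Expand>
  </Type>
</AutoVisualizer>
```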

You must replace the placeholder TYPE with the actual type of the array elements in ValueNode. Then all you have to do is add the object to the watch window.

However, the big disadvantage of this solution is that all the CPtrArrays in your code are treated the same way, as storing elements of type TYPE*. That means that if you want to watch arrays of different types, you have to stop the debugger, change the visualizer and attach again. It is impossible to watch arrays that store different element types in the same debugging session.


NuGet has recently added support for native projects. This greatly simplifies the deployment of native libraries. Even though cpplinq is not a big library (in fact, it is just a header file), I have created a NuGet package so that you can automatically add it to your project.

Here is what you have to do.

  1. Make sure you have NuGet 2.5 or newer, otherwise the NuGet package manager won’t show up in your VC++ projects.
  2. In the context menu for your project choose Manage NuGet Packages…
  3. Search for cpplinq and install the package.
  4. Include the cpplinq.hpp header and start using the library. Here is a sample to test that everything is all right.
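A small sanity check along these lines (the query itself is just an illustration, not the post's original sample) should compile and run once the package is installed:

```cpp
#include <iostream>
#include "cpplinq.hpp"

int main()
{
    using namespace cpplinq;

    int numbers[] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    // sum of the squares of the even numbers
    auto result =
        from_array(numbers)
        >> where([](int n) { return n % 2 == 0; })
        >> select([](int n) { return n * n; })
        >> sum();

    std::cout << result << std::endl;   // 4 + 16 + 36 + 64 = 120
}
```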

Notice that all the settings for the library (such as adding the proper entry to the include directories, or defining NOMINMAX so that the min and max macros are not defined for the project) are performed automatically, so you can focus on coding.


I was writing recently about an MFC bug in the CDatabase class in Visual Studio 2012. In the meanwhile I have stumbled upon an even greater bug (check this report on Connect for the details): when the OpenEx function fails (wrong credentials, service not reachable, machine is shut down, etc.) it corrupts memory. What happens next is a matter of chance. I don’t know when a fix will be available, but until that happens there is a workaround. Just like for the previous bug, you have to derive from CDatabase and override the OpenEx method, copying the original implementation from the MFC sources and adding one more line to null the pointer to the already-released memory. So I’ll extend my previous CDatabaseEx class to the following implementation:



I have recently migrated a C# 2.0 project registered for COM interop to .NET 4.5, and when I imported the type library in a C++ project with no_registry, I suddenly got errors because the type library could not be imported. Here are the steps to reproduce:

  • create a .NET Class Library project and set the platform target to .NET Framework 4.5
  • check Register for COM interop
  • build the project
  • import the type library in a C++ project:
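The import step (with a hypothetical file name) might look like:

```cpp
// no_registry tells the compiler to resolve the type library from the
// given file instead of looking it up in the Windows Registry
#import "MyComLibrary.tlb" no_registry
```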

The result is the following error:


Searching for a solution, I found that this was a known issue when both CLR 2.0 and 4.0 are installed on the same machine. See this KB article: VC++ 2010 #import with no_registry fails with error C1083 or C3510. Unfortunately, I was unable to fix the problem with the solution indicated there.

There are two tools that can generate a type library from an assembly:

  • tlbexp.exe: generates a type library from a specified .NET assembly
  • regasm.exe: registers metadata from an assembly in the Windows Registry; in addition, it can create a type library for the input assembly when the /tlb switch is used.

When a project is set to register for COM interop, what Visual Studio does is similar to calling regasm.exe with the /codebase switch. Since I had previously had problems with interop assemblies automatically generated by Visual Studio (with tlbimp.exe), I thought it would be the same issue (only the other way around). Therefore I unchecked “Register for COM interop” and added the registration with regasm.exe as a custom build step, like this:
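A post-build command along these lines (the framework directory and output paths are illustrative; adjust them for your project):

```
"%windir%\Microsoft.NET\Framework\v4.0.30319\regasm.exe" /codebase /tlb:"$(TargetDir)$(TargetName).tlb" "$(TargetPath)"
```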

Not very surprisingly, the generated file was different and the #import command executed without problems.

Problem solved!


The question that arises is: why are the two files, generated with Visual Studio and with regasm.exe, different? You can see they are different if you open them in a hex editor. But if you just use oleview.exe, the disassembled type libraries look identical.

The obvious answer that occurred to me, but eventually proved wrong, was that Visual Studio is not actually using regasm.exe to register the assembly and generate the type library. It actually uses an MSBuild task for that.

When the RegisterForComInterop property is set in a .csproj, an MSBuild task is executed.

The task can be found in Microsoft.Common.targets (in c:\Windows\Microsoft.NET\Framework\v4.0.30319\)

To check whether I could reproduce this, I created an MSBuild file (explicitreg.xml) with some hard-coded values that only runs the registration task.
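Such a file might look like this (the assembly and type library names are placeholders; RegisterAssembly is the standard MSBuild task that Microsoft.Common.targets invokes):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- explicitreg.xml : runs only the COM registration task -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Register">
  <Target Name="Register">
    <RegisterAssembly Assemblies="MyComLibrary.dll"
                      TypeLibFiles="MyComLibrary.tlb"
                      CreateCodeBase="true" />
  </Target>
</Project>
```

It can be executed with `msbuild explicitreg.xml`.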

But surprise: this produced exactly the same output as the regasm.exe command. Comparing the diagnostic logs from MSBuild (for the build of the .csproj and for my custom file) I couldn’t spot any difference in the execution of the task. Also, using Process Monitor (procmon.exe from Sysinternals) to check access to the TLB file, I could clearly see the difference when the file was written, because the Visual Studio build and the explicit MSBuild run produced different lengths, though again I could not see any difference in the call stacks.

So the actual cause of this behavior is still unknown to me, and I would appreciate it if anyone who knows the answer could clarify it.


CDatabase bug in MFC in VS2012

After migrating an MFC application from Visual Studio 2008 to Visual Studio 2012, I ran into an unexpected error: the application had problems fetching data from its SQL Server database. After debugging, it turned out that the function CDatabase::GetConnect, which I was using to retrieve the connection string after opening the database (for various purposes), was suddenly returning an empty string. This is a known MFC bug, reported on Microsoft Connect. The cause of the problem is that CDatabase encrypts the connection string and then empties it. Here is a snippet of OpenEx:

Though Microsoft promised to solve the bug in the next major version, I needed a fix now. So here is my fix.

CDatabase has a protected member m_strConnect, which is supposed to keep the connection string in plain text, and one called m_blobConnect, which represents the encrypted connection string. However, there is no method in CDatabase that returns this blob. Getting the blob would allow you to decrypt it and obtain the actual connection string. So the solution is to derive from CDatabase, provide a method that returns the connection string, and in your code replace CDatabase with this derived class and the calls to GetConnect() with calls to this new method.
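A sketch of that derived class, relying only on the protected members mentioned above (the accessor name is hypothetical, and the decryption of the blob is MFC-version specific, so it is not shown here):

```cpp
// Sketch only: expose the encrypted connection-string blob that the base
// class keeps after emptying m_strConnect.
class CDatabaseEx : public CDatabase
{
public:
    // decltype avoids naming the exact type of m_blobConnect,
    // which varies between MFC versions
    decltype(m_blobConnect)& GetConnectBlob()
    {
        return m_blobConnect;
    }
};
```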


Microsoft has announced that Visual Studio 2012 Update 2 will bring support for Git in Visual Studio and TFS. CodePlex already supports Git, and this move shows how popular Git has become. I don’t work much with Git, but as I said, CodePlex uses it, and I used it while working on cpplinq. However, the experience with the Git Source Control Provider extension for Visual Studio was not the most pleasant. Fortunately, Visual Studio Tools for Git is a different story, allowing you to work exclusively from Visual Studio, without the need for additional tools. After giving it a try, I must say that even though it’s a CTP and still has some issues, it’s a totally different story than the previous extension or the external tools I used. Note, however, that support for Git will not be added to previous versions of Visual Studio.

Here is more about the support for git:

Before you can start working with git, you need two things:

  1. Visual Studio 2012 Update 2 CTP 2
  2. Visual Studio Tools for Git

Extensions manager

In this post I’ll show what it takes to clone a git repository, work on the project, commit and push the changes on the remote branch.

To clone a repository, go to Connect to Team Projects in Team Explorer and, under Local Git Repositories, use the Clone pane to enter the URL of the server repository and the local destination. After the repository has been cloned, it shows up in the list, as shown below.

Clone a git repository

You can change the settings, such as your credentials, from the Settings pane.

Git settings

To view your changes you can use the Git Changes > Commits pane. You can see included and excluded changes, as well as untracked files. To commit your changes, provide a description (mandatory) and hit the Commit button. The page updates after the commit finishes.

Committing changes

It is possible to view the changes on each file with the built-in tools:

source file diff

After you have committed all your changes and you’re ready to publish them to the server, you can push the outgoing commits. If there are changes on the server, you must first pull those changes, merge locally, commit again and then push.

Push outgoing commits

You can view the history for an entire branch from Team Explorer > Git Changes > Branches

Branch history

or for a single file from Solution Explorer (it is also possible to compare two versions).

File history


Microsoft has made available the release candidates for the new Windows and the new Visual Studio tool set. Some of the products have been renamed:

  • Windows Server 2012 is the new name for Windows 8 “Server”
  • Visual Studio 2012 is the new name for Visual Studio 11

MSDN subscribers can download them from their account. If you’re not a subscriber you can use the following links:

You can read more about these releases here:

The only thing I’d like to mention at this point is that after butchering the Windows logo (no, I don’t believe in the ingenuity of the light-blue rectangle), someone felt the need to do the same with the Visual Studio logo. And while I could live with the new Windows logo (the color is awful), the new Visual Studio logo is a huge failure, in my opinion.

Yes, that’s right, the one on the right is the new VS2012 logo.

After seeing the things Microsoft has done to the Windows and Visual Studio UI (especially the colors in Windows that knock you in the head and, at the opposite extreme, the lack of colors in VS), it looks like all the UI experts left for Apple.
