Attend Meeting C++ 2013

Boost Dependency Analyzer

I have something special to announce today: a tool I’ve built over the last two weeks, which lets you analyze the dependencies within Boost. When Boost 1.53 came out this spring, I had the idea to build this, but not the time, as I was busy writing a series on the papers for Bristol. Back then I realized how easy it would be to build such a tool, as the dependencies can be read and listed with Boost’s bcp tool. I already had a prototype for the graph part from 2010. But let’s have a look at the tool:

The tool is very easy to handle; it is based on the output of bcp, a tool that comes with Boost. bcp can help you rip libraries out of Boost, so that you don’t have to add all of Boost to your repository when you only want to use the smart pointers. But bcp also has a listing mode, in which it only shows the dependencies, and that is what my tool builds upon. Let’s have a short look at the results, the dependencies of Boost 1.54:

A few words on how to read this graph. The libraries in the middle of the “star shape” are the ones with the most dependencies; each line between the nodes is a dependency. A dependency can be one or multiple files. The graph layout is not weighted.

How to

A short introduction to what you need to get this tool to run. First, Boost, as this tool is built to analyze Boost. I’ve tested it with a few versions (1.49 – 1.54). You also need a version of bcp, which is quite easy to build (b2 tools/bcp). Then you simply start the tool: if BOOST_ROOT is set, the tool will try to read it; otherwise you will be asked to choose the location of Boost when clicking on “Read dependencies”. The next step is selecting the location of bcp. That is the setup, and the tool will now run for some time. On my machine the analysis takes 90 seconds to 2 minutes; it might take a lot longer on yours, depending on how many cores you have. The tool will spawn a bcp process for each Boost library (~112) and analyze the output in a thread pool. After this is done, the data is loaded into the tool and then saved to a SQLite database, which will be used if you start the tool a second time and select the same version of Boost. Loading from the database is far faster.

A screenshot to illustrate this:

tl_files/blog/bda/bda.png

To the left are all the Boost libraries; the number of dependencies is shown in the braces. To the right is a tab widget showing all the dependencies; the graph is laid out with Boost.Graph. When you click on “show all” you’ll get the full view of all dependencies in Boost. The layout is computed in the background, so it will take some time to calculate, and is animated when it’s done. The results of the layout are good, but not perfect, so you might have to move some nodes. Exporting supports images, which are transparent PNGs; not all services/tools are happy with that (for example, neither Facebook, Twitter nor G+ could handle the perfectly fine images), but this can be fixed by postprocessing the images and adding a white background.

Inner workings

I’ve already written a little about the tool’s internals. It is built with Qt 5.1 and Boost, where Boost is mostly used for the graph layout. As I chose to work with Qt5, it has a few more dependencies; for Windows this sums up to an 18 MB download, which you’ll find at the end. The tool depends on 3 libraries from my company Code Node: ProcessingSink, a small wrapper around QProcess that lets you start a bunch of processes and connect to their finished and error signals. This was necessary, as I could only spawn 62 parallel processes under Windows, so this library now takes care of spawning the parallel processes, currently 50 at a time. GraphLayout is the code that wraps the inner workings of boost::graph; it’s a bit dirty, but lets me easily run the graph layout. The 3rd library is NodeGraph, which is the graph UI, based on Qt’s Graphics View framework.
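To illustrate the idea behind spawning one bcp process per library and collecting its output asynchronously, here is a minimal, self-contained sketch (not the actual ProcessingSink code). The bcp path, the Boost location and the library names are placeholders, and bcp’s --list/--boost options are assumed to behave as its documentation describes:

// Sketch: spawn bcp asynchronously for a handful of libraries and count
// the files each one depends on. Paths and library names are placeholders.
#include <QCoreApplication>
#include <QProcess>
#include <QStringList>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    const QString bcp = "/path/to/bcp";           // assumed location of bcp
    const QString boostRoot = "/path/to/boost";   // assumed BOOST_ROOT
    const QStringList libraries = QStringList() << "smart_ptr" << "graph" << "thread";

    int pending = libraries.size();
    for (const QString &lib : libraries) {
        QProcess *proc = new QProcess(&app);
        QObject::connect(proc,
            static_cast<void (QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
            [&app, &pending, proc, lib](int, QProcess::ExitStatus) {
                // bcp --list prints every file the library pulls in; the real
                // tool parses this output into per-library dependency sets.
                const int files = QString::fromLocal8Bit(proc->readAllStandardOutput()).count('\n');
                qDebug() << lib << "depends on" << files << "files";
                proc->deleteLater();
                if (--pending == 0)
                    app.quit();
            });
        proc->start(bcp, QStringList() << "--list" << ("--boost=" + boostRoot) << lib);
    }
    return app.exec();
}

In the real tool, the number of simultaneously running processes is capped (currently 50) instead of starting them all at once as this sketch does.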
I plan to release the tool and its libraries under the GPL later on GitHub; for now I don’t have the time to polish everything.

Problems

One of the earliest questions I had when thinking about building such a tool was: where do I get a list of the Boost libraries? This sounds easy, but I need it to be machine-readable, not just human-readable, and the list on the website is HTML, for which I have so far refused to write a parser. I talked to some people about this at C++Now, and most agreed that the second option would be best: maintainers.txt. That is what the tool currently reads to find the Boost libraries. Unfortunately, at least lexical_cast is missing from this list. So the tool isn’t perfect yet; while lexical_cast is already patched, I’m not sure if anything else is missing. A candidate could be signals, as it’s not maintained anymore. Currently the tool analyzes 112 libraries for 1.54.

Boost dependencies

Working for 2 weeks on this tool has given me some insight into the dependencies in Boost. First, what the tool shows is bcp’s view of the dependencies. Some dependencies will not affect the user, as they are internal; for example, a lot of libraries have a dependency on Boost.Test simply because they ship their tests with it. The bcp tool really gets you ALL the dependencies. Also, most (or was it all?) libraries depend on boost::config. I plan to add filtering later, so that the user has the ability to filter out some of the libraries in the graph view.

The tool

Here is how to get the tool for now: there are downloads of the binaries for Windows and Linux. I’ll try to get you a deb package as soon as I have time, but for now it’s only the binaries for Linux, and you’ll have to make sure you have Qt 5.1 etc. on Linux too, as I do not provide them. For Windows, there are 2 archives you’ll need to download: the program itself, and the needed DLLs for Qt 5.1 if you don’t have the SDK installed (in that case you could also copy them from its bin directory).

Note on Linux: this is a one-day-old beta version. I will update it later.

The Evolution of Direct3D

* UPDATE: Be sure to read the comment thread at the end of this blog, the discussion got interesting.

It’s been many years since I worked on Direct3D, and over the years the technology has evolved dramatically. Modern GPU hardware has changed tremendously, achieving processing power and capabilities way beyond anything I dreamed of having access to in my lifetime. The evolution of the modern GPU is the result of many fascinating market forces, but the one I know best and find most interesting was the influence that Direct3D had on the new generation of GPU’s that support thousands of processing cores, have billions of transistors more than the host CPU, and are many times faster at most applications. I’ve told a lot of funny stories about how political and chaotic the creation of Direct3D was, but here I would like to document some of the history of how the Direct3D architecture came about, and the architectural decisions that had a profound influence on modern consumer GPU’s.

Published here with this article is the original documentation for Direct3D (DirectX 2) from when it was first introduced in 1995. Contained in this document is an architectural vision for 3D hardware acceleration that was largely responsible for shaping the modern GPU into the incredibly powerful, increasingly ubiquitous consumer general purpose supercomputers we see today.

D3DOVER
The reason I got into computer graphics was NOT an interest in gaming; it was an interest in computational simulation of physics. I studied 3D at Siggraph conferences in the late 1980’s because I wanted to understand how to approach simulating quantum mechanics, chemistry and biological systems computationally. Simulating light interactions with materials was all the rage at Siggraph back then, so I learned 3D. Understanding light, 3D mathematics and physics made me a graphics and color expert, which got me a career in the publishing industry early on creating PostScript RIP’s (Raster Image Processors). I worked with a team of engineers in Cambridge, England creating software solutions for printing screened color graphics before the invention of continuous tone printing. That expertise got me recruited by Microsoft in the early 1990’s to re-design the Windows 95 and Windows NT print architecture to be more competitive with Apple’s superior capabilities at that time. My career came full circle back to 3D when an initiative I started with a few friends to re-design the Windows graphics and media architecture (DirectX) to support real-time gaming and video applications resulted in gaming becoming hugely strategic to Microsoft. Sony had introduced a consumer 3D game console (the PlayStation 1), and being responsible for DirectX it was incumbent on us to find a 3D solution for Windows as well.

For me, the challenge in formulating a strategy for consumer 3D gaming for Microsoft was an economic one. What approach to consumer 3D should Microsoft take to create a vibrant competitive market for consumer 3D hardware that was both affordable to consumers AND future proof? The complexity of realistically simulating 3D graphics in real time was so far beyond our capabilities in that era that there was NO hope of choosing a solution that was anything short of an ugly hack that would produce “good enough” 3D for games, while being very far removed from the mathematically ideal solutions we had little hope of seeing implemented in the real world during our careers.

Up until that point the only commercial solutions for 3D hardware were for CAD (Computer Aided Design) applications. These solutions worked fine for people who could afford hundred-thousand-dollar workstations. Although the OpenGL API was the only “standard” for 3D API’s that the market had, it had not been designed with video game applications in mind. For example, texture mapping, an essential technique for producing realistic graphics, was not a priority for CAD models, which needed to be functional, not look cool. Rich dynamic lighting was also important to games but not as important to CAD applications. High precision was far more important to CAD applications than to gaming. Most importantly, OpenGL was not designed for highly interactive real-time graphics that used off-screen video page buffering to avoid tearing artifacts during rendering. It was not that the OpenGL API could not be adapted to handle these features for gaming, simply that its actual market implementation on expensive workstations did not suggest any elegant path to a $200 consumer gaming card.

In the early 1990’s computer RAM was very expensive; as such, early 3D consumer hardware designs optimized for minimal RAM requirements. The Sony PlayStation 1 optimized for this problem by using a 3D hardware solution that did not rely on a memory-intensive data structure called a Z-buffer; instead it used a polygon-level sorting algorithm that produced ugly intersections between moving joints. This “Painter’s Algorithm” approach to 3D was very fast and required little RAM. It was an ugly but pragmatic approach for gaming that would have been utterly unacceptable for CAD applications.

In formulating the architecture for Direct3D we were faced with innumerable difficult choices. We wanted the leading Windows graphics vendors of the time (ATI, Cirrus, Trident, S3, Matrox and many others) to be able to compete with one another for rapid innovation in the 3D hardware market without creating utter chaos. The technical solution that Microsoft’s OpenGL team espoused via Michael Abrash was a driver model called 3DDDI (3D Device Driver Interface). 3DDDI was a very simple, flat driver model that just supported hardware acceleration of 3D rasterization. The complex mathematics associated with transforming and lighting a 3D scene were left to the CPU. 3DDDI used “capability bits” to specify additional hardware rendering features (like filtering) that consumer graphics card makers could optionally implement. The problem with 3DDDI was that it invited problems for game developers out of the gate. There were so many cap-bits that every game would either have to support an innumerable number of unspecified hardware feature combinations, to take advantage of every possible way that hardware vendors might choose to design their chips, producing an untestable number of possible hardware configurations and a huge amount of redundant art assets that the games would have to lug around to look good on any given device; OR games would revert to using a simple set of common 3D features supported by everyone, and there would be NO competitive advantage for companies to support new hardware 3D capabilities that did not have instant market penetration. The OpenGL crowd at Microsoft did not see this as a big problem in their world, because everyone there just bought a $100,000 workstation that supported everything they needed.

The realization that we could not get what we needed from the OpenGL team was one of the primary reasons we decided to create a NEW 3D API just for gaming. It had nothing to do with the API itself, but with the driver architecture underneath, because we needed to create a competitive market that did not result in chaos. In this respect the Direct3D API was not an alternative to the OpenGL API; it was a driver API designed for the sole economic purpose of creating a competitive market for 3D consumer hardware. In other words, the Direct3D API was not shaped by “technical” requirements so much as economic ones. In this respect the Direct3D API was revolutionary in several interesting ways that had nothing to do with the API itself, but rather the driver architecture it would rely on.

When we decided to acquire a 3D team to build Direct3D with, I was chartered with surveying the market for candidate companies with the right expertise to help us build the API we needed. As I have previously recounted, we looked at Epic Games (creators of the Unreal engine), Criterion (later acquired by EA), Argonaut and finally Rendermorphics. We chose Rendermorphics (based in London) because of the large number of quality 3D engineers the company employed, and because the founder, Servan Kiondijian, had a very clear vision of how consumer 3D drivers should be designed for maximum future compatibility and innovation. The first implementation of the Direct3D API was rudimentary, but it quickly evolved towards something with much greater future potential.

D3DOVER lhanded
Whoops!

My principal memory from that period was a meeting in which I, as the resident expert on the DirectX 3D team, was asked to choose a handedness for the Direct3D API. I chose a left-handed coordinate system, in part out of personal preference. I remember it now only because it was an arbitrary choice that caused no end of grief for years afterwards, as all other graphics authoring tools adopted the right-handed coordinate system of the OpenGL standard. At the time nobody knew or believed that a CAD tool like Autodesk’s would evolve to become the standard tool for authoring game graphics. Microsoft had acquired Softimage with the intention of displacing Autodesk and Maya anyway. Whoops …

The early Direct3D HAL (Hardware Abstraction Layer) was designed in an interesting way. It was structured vertically into three stages.

DX 2 HAL

The highest layer was the most abstract: the transformation layer. The middle layer was dedicated to lighting calculations, and the bottom layer was for rasterization of the finally transformed and lit polygons into depth-sorted pixels. The idea behind this vertical driver structure was to provide a relatively rigid feature path for hardware vendors to innovate along. They could differentiate their products from one another by designing hardware that accelerated increasingly higher layers of the 3D pipeline, resulting in greater performance and realism without incompatibilities, a sprawling matrix of configurations for games to test against, or redundant art assets. Since the Direct3D API created by Rendermorphics provided a “pretty fast” software implementation of any functionality not accelerated by the hardware, game developers could focus on the Direct3D API without worrying about myriad permutations of incompatible 3D hardware capabilities. At least that was the theory. Unfortunately, like the 3DDDI driver specification, Direct3D still included capability bits designed to enable hardware features that were not part of the vertical acceleration path. Although I actively objected to Direct3D’s tendency to accumulate capability bits, the team felt extraordinary competitive pressure from Microsoft’s own OpenGL group and from the hardware vendors to support them.

The hardware companies, seeking a competitive advantage for their own products, would threaten to support and promote OpenGL to game developers, because the OpenGL driver model supported capability bits that enabled them to create features for their hardware that nobody else supported. It was common (and still is) for the hardware OEM’s to pay game developers to adopt features of their hardware that were unique to their products but incompatible with the installed base of gaming hardware, forcing consumers to constantly upgrade their graphics cards to play the latest PC games. Game developers alternately hated capability bits because of their complexity and incompatibilities, but wanted to take the marketing dollars from the hardware OEM’s to support “non-standard” 3D features.

Overall I viewed this dynamic as destructive to a healthy PC gaming economy and advocated resisting the trend regardless of what the OpenGL people or the OEM’s wanted. I believed that creating a consistent, stable consumer market for PC games was more important than appeasing the hardware OEM’s. As such I was a strong advocate of the relatively rigid vertical Direct3D pipeline and a proponent of introducing only API features that we expected to become universal over time. I freely confess that this view implied significant constraints on innovation in other areas and placed a high burden of market prescience on the Direct3D team.

The result, in my estimation, was pretty good. The Direct3D fixed function pipeline, as it was known, produced a very rich and growing PC gaming market with many healthy competitors through to DirectX 7.0 and the early 2000’s. The PC gaming market boomed and grew to be the largest gaming market on Earth. It also resulted in a very interesting change in the GPU hardware architecture over time.

Had the Direct3D HAL been a flat driver model with just capability bits for rasterization, as the OpenGL team at Microsoft had advocated, 3D hardware makers would have competed by accelerating just the bottom layer of the 3D rendering pipeline and adding differentiating features to their hardware via capability bits that were incompatible with their competitors. The result of introducing the vertical layered architecture was that 3D hardware vendors were all encouraged to add features to their GPU’s that were more consistent with general purpose CPU architectures, namely very fast floating point operations, in a consistent way. Thus consumer GPU’s evolved over the years to increasingly resemble general purpose CPU’s … with one major difference. Because the 3D fixed function pipeline was rigid, the Direct3D architecture afforded very little opportunity for the frequent code branching that CPU’s are designed to optimize for. GPU’s achieved their amazing performance and parallelism in part by being free to assume that little or no branching code would ever occur inside a Direct3D graphics pipeline. Thus instead of evolving into one giant monolithic CPU core with massive numbers of transistors dedicated to efficient branch prediction, as an Intel CPU has, a Direct3D GPU has hundreds to thousands of simple CPU-like cores that have no branch prediction. They can chew through a calculation at incredible speed, confident in the knowledge that they will not be interrupted by code branching or random memory accesses to slow them down.

Up through DirectX 7.0 the underlying parallelism of the GPU was hidden from the game. As far as the game was concerned, some hardware was just faster than other hardware, but the game should not have to worry about how or why. The early DirectX fixed function pipeline architecture had done a brilliant job of enabling dozens of disparate competing hardware vendors to all take different approaches to achieving superior cost and performance in consumer 3D without making a total mess of the PC gaming market for the game developers and consumers. It was not pretty and was not entirely executed with flawless precision, but it worked well enough to create an extremely vibrant PC gaming market through to the early 2000’s.

Before I move on to discussing the more modern evolution of Direct3D, I would like to highlight a few other important ideas that influenced the architecture of early modern GPU’s. Recall that in the early to mid 1990’s RAM was relatively expensive, so there was a lot of emphasis on consumer 3D techniques that conserved RAM usage. The Talisman architecture, which I have told many (well-deserved) derogatory stories about, was highly influenced by this observation.

Talisman
Search this blog for tags “Talisman” and “OpenGL” for many stories about the internal political battles over these technologies within Microsoft

Talisman relied on a grab bag of graphics “tricks” to minimize GPU RAM usage that were not very generalized. The Direct3D team, heavily influenced by the Rendermorphics founders, had made a difficult choice in philosophical approach to creating a mass market for consumer 3D graphics. We had decided to go with a simpler, more general purpose approach to 3D that relied on a very memory-intensive data structure called a Z-buffer to achieve great looking results. Rendermorphics had managed to achieve very good 3D performance in pure software with a software Z-buffer in their engine, which had given us the confidence to take the bet, go with the simpler, more general purpose 3D API and driver model, and trust that the hardware RAM market and prices would eventually catch up. Note, however, that at the time we were designing Direct3D we did not know about the Microsoft Research Group’s “secret” Talisman project, nor did they expect that a small group of evangelists would cook up a new 3D API standard for gaming and launch it before their own wacky initiative could be deployed. In short, one of the big bets that Direct3D made was that the simplicity and elegance of Z-buffers for game development were worth the risk that consumer 3D hardware would struggle to affordably support them early on.

Despite the big bet on Z-buffer support, we were intimately aware of two major limitations of the consumer PC architecture that needed to be addressed. The first was that the PC bus was generally very slow, and the second was that it was much slower to copy data from a graphics card than it was to copy data to a graphics card. What that generally meant was that our API design had to be biased towards sending data to the GPU in the largest, most compact packages possible for processing, and absolutely minimizing any need to copy data back from the GPU for further processing on the CPU. This generally meant that the Direct3D API was optimized to package the data up and send it on a one-way trip. This was of course an unfortunate constraint, because there were many brilliant 3D effects that could best be accomplished by mixing the CPU’s efficient branch prediction and robust floating point support with the GPU’s incredible parallel rendering performance.

One of the fascinating consequences of that constraint was that it forced the GPU’s to become even more general purpose to compensate for the inability to share data with the CPU efficiently. This was possibly the opposite of what Intel intended to happen with its limited bus performance, because Intel was threatened by the idea that auxiliary cards would offload more processing from their CPU’s, thereby reducing the Intel CPU’s value and central role in PC computing. It was reasonably believed at that time that Intel deliberately dragged their feet on improving PC bus performance to deter a market for alternatives to their CPU’s for consumer media processing applications. Recall from my earlier blogs that the main REASON for creating DirectX was to prevent Intel from trying to virtualize all of the Windows media support on the CPU. Had Intel adopted a PC bus architecture that enabled extremely fast access to system RAM shared by auxiliary devices, it is less likely that GPU’s would have evolved the relatively rich set of branching and floating point operations they support today.

To overcome the fairly stringent performance limitations of the PC bus, a great deal of thought was put into techniques for compressing and streamlining DirectX assets being sent to the GPU, to minimize bus bandwidth limitations and the need for round trips from the GPU back to the CPU. The early need for the rigid 3D pipeline had interesting consequences later on when we began to explore streaming 3D assets over the Internet via modems.

We recognized early on that support for compressed texture maps would dramatically improve bus performance and reduce the amount of onboard RAM consumer GPU’s needed. The problem was that no standards existed for 3D texture formats at the time, and knowing how fast image compression technologies were evolving, I was loath to impose a Microsoft-specified one “prematurely” on the industry. To overcome this problem we came up with the idea of “blind compression formats”. The idea, which I believe was captured in one of the many DirectX patents that we filed, was that a GPU could encode and decode image textures in an unspecified format, but the DirectX API’s would allow the application to read and write from them as though they were always raw bitmaps. The Direct3D driver would encode and decode the image data as necessary under the hood, without the application needing to know how it was actually being encoded on the hardware.

By 1998 3D chip makers had begun to devise good quality 3D texture formats, such that by DirectX 6.0 we were able to license one of them (from S3) for inclusion with Direct3D.

http://www.microsoft.com/en-us/news/press/1998/mar98/s3pr.aspx

DirectX 6.0 was actually the first version of DirectX that was included in a consumer OS release (Windows 98). Until that time, DirectX was actually just a family of libraries that were shipped by the Windows games that used them. DirectX was not actually a Windows API until five generations after its first release.

DirectX 7.0 was the last generation of DirectX that relied on the fixed function pipeline we had laid out in DirectX 2.0 with the first introduction of the Direct3D API. This was a very interesting transition period for Direct3D for several reasons:

1) The original DirectX team founders had all moved on,

2) Microsoft’s internal reasons for supporting Talisman and OpenGL had all passed,

3) Microsoft had brought game industry veterans like Seamus Blackley, Kevin Bacchus, Stuart Moulder and others into the company in senior roles,

4) Gaming had become a strategic focus for the company.

DirectX 8.0 marked a fascinating transition for Direct3D because, with the death of Talisman and the loss of strategic interest in OpenGL 3D support, many of the people from these groups came to work on Direct3D. Talisman, OpenGL and game industry veterans all came together to work on Direct3D 8.0. The result was very interesting. Looking back, I freely concede that I would not have made the same set of choices that this group made for DirectX 8.0, but it seems to me that everything worked out for the best anyway.

Direct3D 8.0 was influenced in several interesting ways by the market forces of the late 20th century. Microsoft had largely unified against OpenGL and found itself competing with the Khronos Group standards committee to advance Direct3D faster than OpenGL. With the death of SGI, control of the OpenGL standard fell into the hands of the 3D hardware OEM’s, who of course wanted to use the standard to enable them to create differentiating hardware features from their competitors and to force Microsoft to support 3D features they wanted to promote. The result was that Direct3D and OpenGL became much more complex, and they tended to converge during this period. There was a stagnation in 3D feature adoption by game developers from DirectX 8.0 through DirectX 11.0 as a result of these changes. Creating game engines became so complex that the market also converged around a few leading providers, including Epic’s Unreal Engine and the Quake engine from id Software.

Had I been working on Direct3D at the time, I would have stridently resisted letting the 3D chip OEM’s lead Microsoft around by the nose chasing OpenGL features instead of focusing on enabling game developers and a consistent, quality consumer experience. I would have opposed introducing shader support in favor of trying to keep the Direct3D driver layer as vertically integrated as possible to ensure feature conformity among hardware vendors. I also would have strongly opposed abandoning DirectDraw support as was done in Direct3D 8.0. The 3D guys got out of control and decided that nobody should need pure 2D API’s once developers adopted 3D, failing to recognize that simple 2D API’s enabled a tremendous range of features and an ease of programming that the majority of developers who were not 3D geniuses could easily understand and use. Forcing the market to learn 3D dramatically constrained the set of people with the expertise to adopt it. Microsoft later discovered the error in this decision and re-introduced DirectDraw as the Direct2D API. Basically, letting the 3D design geniuses loose on Direct3D 8.0 made it brilliant, powerful and useless to average developers.

At the time DirectX 8.0 was being made I was starting my first company, WildTangent Inc., and ceased to be closely involved with what was going on with DirectX features; however, years later I was able to get back to my 3D roots and took the time to learn Direct3D programming in DirectX 11.1. Looking back, it’s interesting to see how the major architectural changes that were made in DirectX 8 resulted in the massively convoluted and nearly incomprehensible Direct3D API we see today. Remember the 3-stage DirectX 2 pipeline that separated transformation, lighting and rendering into three basic stages? Here is a diagram of the modern DirectX 11.1 3D pipeline.

DX 11 Pipeline

Yes, it grew to 9 stages, or 13 stages when arguably some of the optional sub-stages, like the compute shader, are included. Speaking as somebody with an extremely lengthy background in very low-level 3D graphics programming, I’m embarrassed to confess that I struggled mightily to learn Direct3D 11.1 programming. The API had become very nearly incomprehensible and unlearnable. I have no idea how somebody without my extensive background in 3D and graphics could ever begin to learn how to program a modern 3D pipeline. As amazingly powerful and featureful as this pipeline is, it is also damn near unusable by any but a handful of the brightest minds in 3D graphics. In the course of catching up on my Direct3D I found myself simultaneously in awe of the astounding power of modern GPU’s and where they were going, and in shocked disgust at the absolute mess the 3D pipeline had become. It was as though the Direct3D API had become a dumping ground for every 3D feature that OEM’s had demanded over the years.

Had I not enjoyed the benefit of the decade-long break from Direct3D involvement, I would undoubtedly have a long history of bitter blogs written about what a mess my successors had made of a great and elegant vision for consumer 3D graphics. Weirdly, however, leaping forward in time to the present day, I am forced to admit that I’m not sure it was such a bad thing after all. The result of the stagnation of gaming on the PC, caused by the mess Microsoft and the OEM’s made of the Direct3D API, was a successful XBOX. Having a massively fragmented 3D API is not such a problem if there is only one hardware configuration that game developers have to support, as is the case with a game console. Direct3D 8.0, with its early primitive shader support, was the basis for the first XBOX’s graphics API. For the first XBOX Microsoft selected an NVIDIA chip, giving NVIDIA a huge advantage in the 3D PC chip market. DirectX 9.0, with more advanced shader support, was the basis for the XBOX 360, for which Microsoft selected ATI to provide the 3D chip, this time handing AMD a huge advantage in the PC graphics market. In a sense the OEM’s had screwed themselves. By successfully influencing Microsoft and the OpenGL standards groups to adopt highly convoluted graphics pipelines to support all of their feature sets, they had forced themselves to generalize their GPU architectures, and the 3D chip market consolidated around a single 3D chip architecture … whatever Microsoft selected for its consoles.

The net result was that the retail PC game market largely died. It was simply too costly, too insecure and too unstable a platform for publishing high production value games on any longer, with the partial exception of MMOG’s. Microsoft and the OEM’s had conspired together to kill the proverbial golden goose. No biggie for Microsoft as they were happy to gain complete control of the former PC gaming business by virtue of controlling the XBOX.

From the standpoint of the early DirectX vision, I would have said that this outcome was a foolish, shortsighted disaster. Had Microsoft maintained a little discipline and strategic focus on the Direct3D API, they could have ensured that there were NO other consoles in existence within a single generation, by using the XBOX to strengthen the PC gaming market rather than inadvertently destroying it. While Microsoft congratulates itself for the first successful U.S. launch of a console, I would count all the gaming dollars collected by Sony, Nintendo and mobile gaming platforms over the years that might have remained on Microsoft-controlled platforms had Microsoft maintained a cohesive strategy across its media platforms. I say all of this from a past-tense perspective because, today, I’m not so sure that I’m really all that unhappy with the result.

The new generation of consoles from Sony AND Microsoft have reverted to a PC architecture! The next-generation GPU’s are massively parallel, general-purpose processors with intimate access to memory shared with the CPU. In fact, the GPU architecture became so generalized that a new pipeline stage called DirectCompute was added in DirectX 11 that simply allows the application to bypass the entire convoluted Direct3D graphics pipeline in favor of programming the GPU directly. With the introduction of DirectCompute the promise of simple 3D programming returned in an unexpected form. Modern GPU’s have become so powerful and flexible that the possibility of writing cross-GPU 3D engines directly for the GPU, without making any use of the traditional 3D pipeline, is an increasingly practical and appealing programming option. From my perspective here in the present day, I would anticipate that within a few short generations the need for the traditional Direct3D and OpenGL APIs will vanish in favor of new game engines with much richer and more diverse feature sets that are written entirely in device-independent shader languages like Nvidia’s CUDA and Microsoft’s AMP API’s.

Today, as a 3D physics engine developer, I have never been so excited about GPU programming, because of the sheer power and relative ease of programming directly to the modern GPU without needing to master the enormously convoluted 3D pipelines associated with the Direct3D and OpenGL API’s. If I were responsible for Direct3D strategy today, I would be advocating dumping the investment in the traditional 3D pipeline in favor of rapidly opening direct access to a rich GPU programming environment. I personally never imagined that my early work on Direct3D would, within a couple of decades, contribute to the evolution of a new kind of ubiquitous processor that enabled the kind of incredibly realistic and general modeling of light and physics that I had learned about in the 1980’s but never believed I would see computers powerful enough to model in real time during my active career.

Hacking with a Hacker

What is it like to hack with one of the original hackers? It is certainly much different from what appears to be the modern rendition of hacking. My experience was not about getting really drunk with tons of junk food. It was not working on “beautiful” designs or “authentic” typography. It was not so much about sharing with the world as it was sharing with your peers. It had a very different feel to it than the “hacker culture” promoted by some of the top technical Silicon Valley companies. It felt more “at home”, less dreamy, and more memorable.

I meet with Bill Gosper every so often; I had the pleasure of giving him a tour of Facebook when I worked there. (He was so surprised that they had Coke in glass bottles there, just like the old days.)

He is still very much a hacker, a thinker, a tinkerer, and a wonderer. Every time I meet up with him, he has a new puzzle for me, or someone around him, to solve, whether it’s really clever compass constructions, circle packing, block building, Game of Life automata solving, or even something more tangible like a homemade buttonhole trap (which was affixed to my shirt for no less than two weeks!). He is also the bearer of interesting items, such as a belt buckle he gave me which depicts, in aluminum, a particular loose circle packing.
Gosper succeeding in tricking me with the Buttonhole Trap
When we meet up, all we do is hack. Along with him and one of his talented young students, we all work on something. Anything interesting, really. Last time we met up, we resurrected an old Lisp machine and did some software archeology. I brought over some of the manuals I own, like the great Chinual, and he brought over a dusty old 1U rackmount Alpha machine with OpenGenera installed. After passing a number of hurdles, such as the keyboard not interfacing with the computer correctly, we finally got it to boot up. Now I got to see, with my own eyes, a time capsule containing a lot of Bill’s work from the 70s, 80s, and 90s, which could only be commanded and examined through Zmacs, Dired, and Symbolics Common Lisp. Our next goal was to get Symbolics Macsyma fired up on the old machine.

There was trouble with starting it up. License issues were one problem; finding and loading all of the compiled files was another. Running applications on a Lisp machine is very different from what we do today on modern machines, Windows or UNIX. There’s no .exe file to click, or .app bundle to start up, or even a single ./file to execute. Usually it’s a collection of compiled “fast loading” or “fasl” files that get loaded side-by-side with the operating system. The application, in essence, becomes a part of the OS.

In hacker tradition, we were able to bypass the license issues by modifying the binary directly in Lisp. Fortunately, Lisp makes things such as disassembly easy. But how do we load the damn thing? Bill frustratedly muttered, “It’s been at least 20 years since I’ve done it. I just do not remember.” I, being an owner of MacIvory Symbolics Lisp machines, fortunately did remember how to load programs. “Bill, how about LOAD SYSTEM Macsyma?” He typed it into the native Lisp “Listener 2” window (we kept “Listener 1” for debugging), sometimes making a few typing mistakes, but finally succeeding, and then we saw the stream of files loading. We all shouted in joy that progress was being made. I recall Bill was especially astounded at how fast everything was loading. This was on a fast Alpha machine with gobs of memory. It must have been much slower on the old 3600s they used back in the day.
The Lisp Machine Manual, or Chinual
It was all done after a few minutes, and Macsyma was loaded. To me, this was a sort of holy grail. I personally have Macsyma for Windows (which he uses in a VirtualBox machine on his 17" MacBook), and I’ve definitely used Maxima. But this Macsyma is something I’d never seen. It was something that seems to have disappeared with history, something I have not been able to find a copy of in the last decade.

Bill said, “Let’s see if it works.” And he typed 1+1; in, and sure enough, the result was 2. He saw I was bubbling with excitement and asked me if I’d like to try anything. “I’d love to,” and he handed the keyboard over to me and I typed in my canonical computer algebra test: integrate(sqrt(tan(x)), x);, which computes the indefinite integral
∫ √(tan θ) dθ
Out came the four-term typeset result of logarithms and arctangents, plus a fifth term I’d never seen before. “I’ve never seen any computer algebra system add that fifth term,” I said, “but it does not look incorrect.” The fifth term was a floored expression whose value increased with the period of the function preceding it. “Let’s plot it,” Bill said. He plotted it using Macsyma’s menu interface, and it was what appeared to be an increasing, non-periodic function. I think we determined it was because Macsyma properly handled branch cuts, which other systems have been known to be unorthodox about. It definitely had a pragmatic feel to it.

Now, Bill wanted to show us some interesting things; however, all of Bill’s recent Macsyma work was on his laptop. How do we connect this ancient hardware to a modern Macintosh? We needed to get the machine onto the network, and networking with old machines is not my forte.

Fortunately, Stephen Jones, a friend of Bill’s and seemingly an expert at a rare combination of technical tasks, showed up. He was able to do things that neither Bill nor I could do on such an old machine. In only a few moments, he was able to get Bill’s Mac talking to the Alpha, which shared a portion of its file system with Genera. “Will there be enough space on the Alpha for my Macsyma files?” Bill asked Stephen. “Of course, there’s tons of space.” We finally got Bill’s recent work transferred onto the machine.
Bill hacking in Macsyma in OpenGenera (Image courtesy of Stephen M. Jones)
We spent the rest of the night hacking on math. He demonstrated to us what it was like to do a real mathematician’s work at the machine. He debuted some of his recent work. He went through a long chain of reasoning, showing it to us line after line in Macsyma, doing amazing number-theoretic things I’ve never seen before.

I did ask Bill why he does not publish more often. His previous publications have been landmarks: his summation algorithm for hypergeometric series and his algorithm for playing the Game of Life at light speed. He responded, “When there’s something interesting to publish, it’ll be published.” He seemed to have a sort of disdain for “salami science”, where scientific and mathematical papers present the thinnest possible “slice” of a result.

Bill is certainly a man that thinks in a different way than most of us do. He is still hacking at mathematics, and still as impressive as before. I’m very fortunate to have met him, and I was absolutely delighted to see that even at 70 years old, his mind is still as sharp as can be, and it’s still being used to do interesting, Gosper-like mathematics.

And you would not believe it. We all were ready to head home at around 9 PM.

Official feedback on OpenGL 4.4 thread

SIGGRAPH – Anaheim, CA – The Khronos™ Group today announced the immediate release of the OpenGL® 4.4 specification, bringing the very latest graphics functionality to the most advanced and widely adopted cross-platform 2D and 3D graphics API (application programming interface). OpenGL 4.4 unlocks capabilities of today’s leading-edge graphics hardware while maintaining full backwards compatibility, enabling applications to incrementally use new features while portably accessing state-of-the-art graphics processing units (GPUs) across diverse operating systems and platforms. Also, OpenGL 4.4 defines new functionality to streamline the porting of applications and titles from other platforms and APIs. The full specification and reference materials are available for immediate download at http://www.opengl.org/registry.

In addition to the OpenGL 4.4 specification, the OpenGL ARB (Architecture Review Board) Working Group at Khronos has created the first set of formal OpenGL conformance tests since OpenGL 2.0. Khronos will offer certification of drivers from version 3.3, and full certification is mandatory for OpenGL 4.4 and onwards. This will help reduce differences between multiple vendors’ OpenGL drivers, resulting in enhanced portability for developers.

New functionality in the OpenGL 4.4 specification includes:

Buffer Placement Control (GL_ARB_buffer_storage)
Significantly enhances memory flexibility and efficiency through explicit control over the position of buffers in the graphics and system memory, together with cache behavior control – including the ability of the CPU to map a buffer for direct use by a GPU.
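As an illustration (not part of the press release), here is a minimal sketch of how the new buffer storage entry point might be used for a persistently mapped buffer. It assumes an active OpenGL 4.4 context and an extension loader such as GLEW already initialized; the buffer contents and size are placeholders.

// Sketch: create an immutable buffer whose mapping stays valid while the GPU uses it.
#include <GL/glew.h>
#include <cstring>

GLuint createPersistentVertexBuffer(const float *vertices, GLsizeiptr size)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

    // Immutable storage: size, placement and usage flags are fixed up front,
    // which is what lets the driver keep the mapping live while rendering continues.
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);

    // The returned pointer remains valid for the lifetime of the buffer.
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    if (ptr)
        std::memcpy(ptr, vertices, static_cast<size_t>(size));
    return vbo;
}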

Efficient Asynchronous Queries
(GL_ARB_query_buffer_object)
Buffer objects can be the direct target of a query to avoid the CPU waiting for the result and stalling the graphics pipeline. This provides significantly boosted performance for applications that intend to subsequently use the results of queries on the GPU, such as dynamic quality reduction strategies based on performance metrics.
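A hedged sketch of what that might look like in code, assuming an OpenGL 4.4 context with GLEW initialized; the draw calls are placeholders and the occlusion query is just an example of a query type:

// Sketch: write an occlusion query result into a buffer object instead of reading it back.
#include <GL/glew.h>

void occlusionQueryToBuffer()
{
    GLuint query = 0, resultBuf = 0;
    glGenQueries(1, &query);
    glGenBuffers(1, &resultBuf);

    glBindBuffer(GL_QUERY_BUFFER, resultBuf);
    glBufferData(GL_QUERY_BUFFER, sizeof(GLuint), nullptr, GL_DYNAMIC_COPY);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    // ... issue draw calls here ...
    glEndQuery(GL_SAMPLES_PASSED);

    // With a buffer bound to GL_QUERY_BUFFER the "pointer" argument is an offset
    // into that buffer, so the CPU never stalls waiting for the result; a later
    // shader or indirect draw can consume the value directly on the GPU.
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, reinterpret_cast<GLuint *>(0));
}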

Shader Variable Layout (GL_ARB_enhanced_layouts)
Detailed control over placement of shader interface variables, including the ability to pack vectors efficiently with scalar types. Includes full control over variable layout inside uniform blocks and enables shaders to specify transform feedback variables and buffer layout.

Efficient Multiple Object Binding (GL_ARB_multi_bind)
New commands which enable an application to bind or unbind sets of objects with one API call instead of separate commands for each bind operation, amortizing the function call, name space lookup, and potential locking overhead. The core rendering loop of many graphics applications frequently binds different sets of textures, samplers, images, vertex buffers, and uniform buffers, so this can significantly reduce CPU overhead and improve performance.
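As a small illustrative sketch (assuming an OpenGL 4.4 context, GLEW, and texture/sampler names created elsewhere), the multi-bind calls replace a loop of per-unit binds:

// Sketch: bind four textures to units 0..3 and their samplers with one call each,
// instead of a glActiveTexture/glBindTexture pair per unit.
#include <GL/glew.h>

void bindMaterialTextures(const GLuint textures[4], const GLuint samplers[4])
{
    glBindTextures(0, 4, textures);
    glBindSamplers(0, 4, samplers);
}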

Streamlined Porting of Direct3D applications

A number of core functions contribute to easier porting of applications and games written in Direct3D, including GL_ARB_buffer_storage for buffer placement control; GL_ARB_vertex_type_10f_11f_11f_rev, which creates a vertex data type that packs three components into a 32-bit value, providing a performance improvement for lower-precision vertices in a format also used by Direct3D; and GL_ARB_texture_mirror_clamp_to_edge, which provides a texture clamping mode also used by Direct3D. Extensions released alongside the OpenGL 4.4 specification include:

Bindless Texture Extension (GL_ARB_bindless_texture)
Shaders can now access an effectively unlimited number of texture and image resources directly by virtual addresses. This bindless texture approach avoids the application overhead due to explicitly binding a small window of accessible textures. Ray tracing and global illumination algorithms are faster and simpler with unfettered access to a virtual world’s entire texture set.
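A hedged sketch of the host-side calls involved, assuming a context exposing GL_ARB_bindless_texture and GLEW initialized; the texture name and uniform location are assumed to come from elsewhere in the application:

// Sketch: obtain a 64-bit handle for a texture, make it resident, and pass it to a shader.
#include <GL/glew.h>

void makeTextureBindless(GLuint tex, GLint location)
{
    GLuint64 handle = glGetTextureHandleARB(tex);
    glMakeTextureHandleResidentARB(handle);     // must stay resident while shaders use it
    glUniformHandleui64ARB(location, handle);   // the shader samples it without any bind call
}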

Sparse Texture Extension (GL_ARB_sparse_texture)
Enables handling of huge textures that are much larger than the GPU’s physical memory by allowing an application to select which regions of the texture are resident, for ‘mega-texture’ algorithms and very large data-set visualizations.

OpenGL BOF at SIGGRAPH, Anaheim, CA July 24th 2013
There is an OpenGL BOF “Birds of a Feather” Meeting on Wednesday July 24th at 7-8PM at the Hilton Anaheim, California Ballroom A & B, where attendees are invited to meet OpenGL implementers and developers and learn more about the new OpenGL 4.4 specification.

5 Coding Hacks to Reduce GC Overhead

In this post we’ll look at five ways in which efficient coding can help our garbage collector spend less CPU time allocating and freeing memory, and reduce GC overhead. Long GCs can often lead to our code being stopped while memory is reclaimed (AKA “stop the world”).

Some background

The GC is built to handle large amounts of allocations of short-lived objects (think of something like rendering a web page, where most of the objects allocated become obsolete once the page is served).

The GC does this using what’s called a “young generation” – a heap segment where new objects are allocated. Each object has an “age” (placed in the object’s header bits) that defines how many collections it has “survived” without being reclaimed. Once a certain age is reached, the object is copied into another section of the heap called a “survivor” or “old” generation.

The process, while efficient, still comes at a cost. Being able to reduce the number of temporary allocations can really help us increase throughput, especially in high-scale applications.

Below are five everyday ways we can write code that is more memory efficient, without having to spend a lot of time on it or reducing code readability.

1. Avoid implicit Strings

Strings are an integral part of almost every data structure we manage. Being much heavier than other primitive values, they have a much stronger impact on memory usage.

One of the most important things to note is that Strings are immutable. They cannot be modified after allocation. Operators such as “+” for concatenation actually allocate a new String containing the contents of the strings being joined. What’s worse, there’s an implicit StringBuilder object that is allocated to actually do the work of combining them.

For example –

a = a + b; // a and b are Strings

The compiler generates comparable code behind the scenes:

StringBuilder temp = new StringBuilder(a);
temp.append(b);
a = temp.toString(); // a new String is allocated here.
                     // The previous "a" is now garbage.
But it gets worse.

Let’s look at this example –

String result = foo() + arg;
result += boo();
System.out.println("result = " + result);

In this example we have 3 StringBuilders allocated in the background – one for each plus operation – and two additional Strings, one to hold the result of the second assignment and another to hold the string passed into the print method. That’s 5 additional objects in what would otherwise appear to be a pretty trivial statement.

Think about what happens in real-world scenarios such as generating a web page, working with XML, or reading text from a file. Within nested loop structures, you could be looking at hundreds or thousands of objects that are implicitly allocated. While the VM has mechanisms to deal with this, it comes at a cost – one paid by your users.

The solution: One way of reducing this is being proactive with StringBuilder allocations. The example below achieves the same result as the code above while allocating only one StringBuilder and one String to hold the final result, instead of the original five objects.

StringBuilder value = new StringBuilder("result = ");
value.append(foo()).append(arg).append(boo());
System.out.println(value);
By being mindful of the way Strings and StringBuilders are implicitly allocated, you can materially reduce the amount of short-term allocations in high-scale code locations.

2. Plan list capacities

Dynamic collections such as ArrayLists are among the most basic structures used to hold data of dynamic length. ArrayLists and other collections such as HashMaps and TreeMaps are implemented using underlying Object[] arrays. Like Strings (themselves wrappers over char[] arrays), arrays are also immutable in size. The obvious question then becomes – how can we add/put items into collections if the underlying array’s size is immutable? The answer is obvious as well – by allocating more arrays.

Let’s look at this example –

List<Item> items = new ArrayList<Item>();

for (int i = 0; i < len; i++)
{
    Item item = readNextItem();
    items.add(item);
}

The value of len determines the ultimate length of items once the loop finishes. This value, however, is unknown to the constructor of the ArrayList, which allocates a new Object array with a default size. Whenever the internal capacity of the array is exceeded, it’s replaced with a new array of sufficient length, making the previous array garbage.

If you’re executing the loop thousands of times you may be forcing a new array to be allocated and a previous one to be collected multiple times. For code running in a high-scale environment, these allocations and deallocations are all deducted from your machine’s CPU cycles.
The solution: whenever possible, pass the expected size to the collection’s constructor (for example, new ArrayList<Item>(len)), so the backing array is allocated once with sufficient capacity.

Lambda Expressions Backported to Java 7, 6 and 5

Do you want to use lambda expressions already today, but you are forced to use an older Java and a stable JRE in production? Now that’s possible with Retrolambda, which will take bytecode compiled with Java 8 and convert it to run on Java 7, 6 and 5 runtimes, letting you use lambda expressions and method references on those platforms. It won’t give you the improved Java 8 Collections API, but fortunately there are multiple alternative libraries which will benefit from lambda expressions.

Behind the Scenes

A couple of days ago in a café it popped into my head to find out whether somebody had made this already, but after speaking into the air, I did it myself over a weekend.

The original plan of copying the classes from OpenJDK didn’t work (LambdaMetafactory depends on some package-private classes and would have required modifications), but I figured out a better way to do it without additional runtime dependencies.

Retrolambda uses a Java agent to find out what bytecode LambdaMetafactory generates dynamically, and saves it as class files, after which it replaces the invokedynamic instructions to instantiate those classes directly. It also changes some private synthetic methods to be package-private, so that normal bytecode can access them without method handles.

After the conversion you’ll have just a bunch of normal .class files – but with less typing.

P.S. If you hear about experiences of using Retrolambda for Android development, please leave a comment.

Parallel and Concurrent Programming in Haskell

As one of the developers of the Glasgow Haskell Compiler (GHC) for almost 15 years, I have seen Haskell grow from a niche research language into a rich and thriving ecosystem. I spent a lot of that time working on GHC’s support for parallelism and concurrency. One of the first things I did to GHC in 1997 was to rewrite its runtime system, and a key decision we made at that time was to build concurrency right into the core of the system rather than making it an optional extra or an add-on library. I like to think this decision was founded upon shrewd foresight, but in reality it had as much to do with the fact that we found a way to reduce the overhead of concurrency to near zero (previously it had been on the order of 2%; we’ve always been performance-obsessed). Nevertheless, having concurrency be non-optional meant that it was always a first-class part of the implementation, and I’m sure that this decision was instrumental in bringing about GHC’s solid and lightning-fast concurrency support.

Haskell has a long tradition of being associated with parallelism. To name just a few of the projects, there was the pH variant of Haskell derived from the Id language, which was designed for parallelism, the GUM system for running parallel Haskell programs on multiple machines in a cluster, and the GRiP system: a complete computer architecture designed for running parallel functional programs. All of these happened well before the current multicore revolution, and the problem was that this was the time when Moore’s law was still giving us ever-faster computers. Parallelism was difficult to achieve, and didn’t seem worth the effort when ordinary computers were getting exponentially faster.

Around 2004, we decided to build a parallel implementation of the GHC runtime system for running on shared memory multiprocessors, something that had not been done before. This was just before the multicore revolution. Multiprocessor machines were fairly common, but multicores were still around the corner. Again, I’d like to think the decision to tackle parallelism at this point was enlightened foresight, but it had more to do with the fact that building a shared-memory parallel implementation was an interesting research problem and sounded like fun. Haskell’s purity was essential—it meant that we could avoid some of the overheads of locking in the runtime system and garbage collector, which in turn meant that we could reduce the overhead of using parallelism to a low-single-digit percentage. Nevertheless, it took more research, a rewrite of the scheduler, and a new parallel garbage collector before the implementation was really usable and able to speed up a wide range of programs. The paper I presented at the International Conference on Functional Programming (ICFP) in 2009 marked the turning point from an interesting prototype into a usable tool.

All of this research and implementation was great fun, but good-quality resources for teaching programmers how to use parallelism and concurrency in Haskell were conspicuously absent. Over the last couple of years, I was fortunate to have had the opportunity to teach two summer school courses on parallel and concurrent programming in Haskell: one at the Central European Functional Programming (CEFP) 2011 summer school in Budapest, and the other at the CEA/EDF/INRIA 2012 Summer School at Cadarache in the south of France. In preparing the materials for these courses, I had an excuse to write some in-depth tutorial matter for the first time, and to start collecting good illustrative examples. After the 2012 summer school I had about 100 pages of tutorial, and thanks to prodding from one or two people (see the Acknowledgments), I decided to turn it into a book. At the time, I thought I was about 50% done, but in fact it was probably closer to 25%. There’s a lot to say! I hope you enjoy the results.

Audience

You will need a working knowledge of Haskell, which is not covered in this book. For that, a good place to start is an introductory book such as Real World Haskell (O’Reilly), Programming in Haskell (Cambridge University Press), Learn You a Haskell for Great Good! (No Starch Press), or Haskell: The Craft of Functional Programming (Addison-Wesley).

How to Read This Book

The main goal of the book is to get you programming competently with Parallel and Concurrent Haskell. However, as you probably know by now, learning about programming is not something you can do by reading a book alone. This is why the book is deliberately practical: There are lots of examples that you can run, play with, and extend. Some of the chapters have suggestions for exercises you can try out to get familiar with the topics covered in that chapter, and I strongly recommend that you either try a few of these, or code up some of your own ideas.

As we explore the topics in the book, I won’t shy away from pointing out pitfalls and parts of the system that aren’t perfect. Haskell has been evolving for over 20 years but is moving faster today than at any point in the past. So we’ll encounter inconsistencies and parts that are less polished than others. Some of the topics covered by the book are very recent developments: Chapters 4, 5, 6, and 14 cover frameworks that were developed in the last few years.

The book consists of two mostly independent parts: Part I and Part II. You should feel free to start with either part, or to flip between them (i.e., read them concurrently!). There is only one dependency between the two parts: Chapter 13 will make more sense if you have read Part I first, and in particular before reading “The ParIO monad”, you should have read Chapter 4.

While the two parts are mostly independent from each other, the chapters should be read sequentially within each part. This isn’t a reference book; it contains running examples and themes that are developed across multiple chapters.

OpenMP 4.0 Specifications Released

The OpenMP 4.0 API Specification is released with Significant New Standard Features

The OpenMP 4.0 API supports the programming of accelerators, SIMD programming, and better optimization using thread affinity

The OpenMP Consortium has released OpenMP API 4.0, a major upgrade of the OpenMP API standard language specifications. Besides several major enhancements, this release provides a new mechanism to describe regions of code where data and/or computation should be moved to another computing device.

Bronis R. de Supinski, Chair of the OpenMP Language Committee, stated that “OpenMP 4.0 API is a major advance that adds two new forms of parallelism in the form of device constructs and SIMD constructs. It also includes several significant extensions for the loop-based and task-based forms of parallelism already supported in the OpenMP 3.1 API.”

The 4.0 specification is now available on the OpenMP website.

Standard for parallel programming extends its reach

With this release, the OpenMP API specifications, the de-facto standard for parallel programming on shared memory systems, continues to extend its reach beyond pure HPC to include DSPs, real time systems, and accelerators. The OpenMP API aims to provide high-level parallel language support for a wide range of applications, from automotive and aeronautics to biotech, automation, robotics and financial analysis.

New features in the OpenMP 4.0 API include:

· Support for accelerators. The OpenMP 4.0 API specification effort included significant participation by all the major vendors in order to support a wide variety of compute devices. OpenMP API provides mechanisms to describe regions of code where data and/or computation should be moved to another computing device. Several prototypes for the accelerator proposal have already been implemented (several of the new constructs are illustrated in the short code sketch after this list).

· SIMD constructs to vectorize both serial as well as parallelized loops. With the advent of SIMD units in all major processor chips, portable support for accessing them is essential. OpenMP 4.0 API provides mechanisms to describe when multiple iterations of the loop can be executed concurrently using SIMD instructions and to describe how to create versions of functions that can be invoked across SIMD lanes.

· Error handling. OpenMP 4.0 API defines error handling capabilities to improve the resiliency and stability of OpenMP applications in the presence of system-level, runtime-level, and user-defined errors. Features to abort parallel OpenMP execution cleanly have been defined, based on conditional cancellation and user-defined cancellation points.

· Thread affinity. OpenMP 4.0 API provides mechanisms to define where to execute OpenMP threads. Platform-specific data and algorithm-specific properties are separated, offering a deterministic behavior and simplicity in use. The advantages for the user are better locality, less false sharing and more memory bandwidth.

· Tasking extensions. OpenMP 4.0 API provides several extensions to its task-based parallelism support. Tasks can be grouped to support deep task synchronization and task groups can be aborted to reflect completion of cooperative tasking activities such as search. Task-to-task synchronization is now supported through the specification of task dependency.

· Support for Fortran 2003. The Fortran 2003 standard adds many modern computer language features. Having these features in the specification allows users to parallelize Fortran 2003 compliant programs. This includes interoperability of Fortran and C, which is one of the most popular features in Fortran 2003.

· User-defined reductions. Previously, OpenMP API only supported reductions with base language operators and intrinsic procedures. With OpenMP 4.0 API, user-defined reductions are now also supported.

· Sequentially consistent atomics. A clause has been added to allow a programmer to enforce sequential consistency when a specific storage location is accessed atomically.
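
To make the list above more concrete, here is a minimal C example (also valid C++, not taken from the announcement) that exercises a few of the new 4.0 constructs: a target region for offloading work to a device, a SIMD reduction loop, and task dependences. Compiler and device support varies, so treat it as an illustration of the syntax rather than a tuned program.

#include <stdio.h>

#define N 1024

int main(void)
{
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = i; b[i] = 2 * i; }

    // Device construct: offload the loop, mapping arrays to and from
    // device memory (the region runs on the host if no device exists).
    #pragma omp target map(to: a, b) map(from: c)
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        c[i] = a[i] + b[i];

    // SIMD construct: ask the compiler to vectorize this reduction loop.
    float sum = 0.0f;
    #pragma omp simd reduction(+: sum)
    for (int i = 0; i < N; ++i)
        sum += c[i];

    // Tasking extension: the second task waits for the first through
    // the depend() clause instead of an explicit taskwait.
    int x = 0;
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(out: x)
        x = 42;

        #pragma omp task depend(in: x)
        printf("x = %d\n", x);
    }

    printf("sum = %f\n", sum);
    return 0;
}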

“This represents collaborative work by many of the brightest in industry, research, and academia, building on the consensus of 26 members. We strive to deliver high-level parallelism that is portable across 3 widely-implemented common General Purpose languages, productive for HPC and consumers, and delivers highly competitive performance. I want to congratulate all the members for coming together to create such a momentous advancement in parallel programming, under such tight constraints and industry challenges.
With this release, the OpenMP API will move immediately forward to the next release to bring even more usable parallelism to everyone.”
 – Michael Wong, CEO OpenMP ARB.

Integrating C++ with QML

Introduction

Qt Quick’s QML language makes it easy to do many things, especially fancy animated user interfaces. However, some things either can’t be done or are not suitable for implementing in QML, such as:

  1. Getting access to functionality outside of the QML/JavaScript environment.
  2. Implementing performance critical functions where native code is desired for efficiency.
  3. Large and/or complex non-declarative code that would be tedious to implement in JavaScript.

As we’ll see, Qt makes it quite easy to expose C++ code to QML. In this blog post I will show an example of doing this with a small but functional application.

The example is written for Qt 5 and uses the Qt Quick Controls, so you will need at least Qt version 5.1.0 to run it.

Overview

To expose a C++ type having properties, methods, signals, and/or slots to the QML environment, the basic steps are:

  1. Define a new class derived from QObject.
  2. Put the Q_OBJECT macro in the class declaration to support signals and slots and other services of the Qt meta-object system.
  3. Declare any properties using the Q_PROPERTY macro.
  4. Call qmlRegisterType() in your C++ main program to register the type with the Qt Quick engine.

For all the details I refer you to the Qt documentation section Exposing Attributes of C++ Types to QML and the Writing QML Extensions with C++ tutorial.

Ssh Key Generator

For our code example, we want a small application that will generate ssh public/private key pairs using a GUI. It will present the user with controls for the appropriate options and then run the program ssh-keygen to generate the key pair.

I implemented the user interface using the new Qt Quick Controls since it was intended as a desktop application with a desktop look and feel. I initially developed the UX entirely by running the qmlscene program directly on the QML source.

The UI prompts the user for the key type, the file name of the private key to generate and an optional pass phrase, which needs to be confirmed.

The C++ Class

Now that we have the UI, we will want to implement the back end functionality. You can’t invoke an external program directly from QML, so we have to write it in C++ (which is the whole point of this example application).

First, we define a class that encapsulates the key generation functionality. It will be exposed as a new class KeyGenerator in QML. This is done in the header file KeyGenerator.h below.

#ifndef KEYGENERATOR_H
#define KEYGENERATOR_H

#include <QObject>
#include <QString>
#include <QStringList>

// Simple QML object to generate SSH key pairs by calling ssh-keygen.

class KeyGenerator : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString type READ type WRITE setType NOTIFY typeChanged)
    Q_PROPERTY(QStringList types READ types NOTIFY typesChanged)
    Q_PROPERTY(QString filename READ filename WRITE setFilename NOTIFY filenameChanged)
    Q_PROPERTY(QString passphrase READ passphrase WRITE setPassphrase NOTIFY passphraseChanged)

public:
    KeyGenerator();
    ~KeyGenerator();

    QString type();
    void setType(const QString &t);

    QStringList types();

    QString filename();
    void setFilename(const QString &f);

    QString passphrase();
    void setPassphrase(const QString &p);

public slots:
    void generateKey();

signals:
    void typeChanged();
    void typesChanged();
    void filenameChanged();
    void passphraseChanged();
    void keyGenerated(bool success);

private:
    QString _type;
    QString _filename;
    QString _passphrase;
    QStringList _types;
};
#endif

Next, we need to derive our class from QObject. We declare any properties that we want and the associated methods. Notify methods become signals. In our case, we want to have properties for the selected key type, the list of all valid ssh key types, file name and pass phrase. I arbitrarily made the key type a string. It could have been an enumerated type but it would have made the example more complicated.

Incidentally, a new feature of the Q_PROPERTY macro in Qt 5.1.0 is the MEMBER argument. It allows specifying a class member variable that will be bound to a property without the need to implement the setter or getter functions. That feature was not used here.
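
As a quick illustration (this class is hypothetical and not part of the example application), a MEMBER-based property might look like this:

#include <QObject>
#include <QString>

// Sketch only: the property is bound directly to the member variable
// m_theme, so no getter or setter has to be written. Assignments made
// through the property still emit the NOTIFY signal.
class Settings : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString theme MEMBER m_theme NOTIFY themeChanged)

signals:
    void themeChanged();

private:
    QString m_theme;
};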

We declare methods for the setters and getters, and for the signals. We also declare one slot called generateKey(). These will all be available to QML. If we wanted to export a regular method to QML, we could mark it with Q_INVOKABLE. In this case I decided to make generateKey() a slot since it might be useful in the future, but it could just as easily have been an invokable method.
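
For comparison, the invokable variant would look roughly like this (a sketch only; the example keeps the slot version):

#include <QObject>

// Sketch only: exporting the method with Q_INVOKABLE instead of declaring
// it as a slot. QML can call either form as keygen.generateKey().
class KeyGeneratorAlt : public QObject
{
    Q_OBJECT
public:
    Q_INVOKABLE void generateKey();
};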

Finally, we declare any private member variables we will need.

C++ Implementation

Now let’s look at the implementation in KeyGenerator.cpp. Here is the source code:

#include <QFile>
#include <QProcess>
#include "KeyGenerator.h"

KeyGenerator::KeyGenerator()
    : _type("rsa"), _types{"dsa", "ecdsa", "rsa", "rsa1"}
{
}

KeyGenerator::~KeyGenerator()
{
}

QString KeyGenerator::type()
{
    return _type;
}

void KeyGenerator::setType(const QString &t)
{
    // Check for valid type.
    if (!_types.contains(t))
        return;

    if (t != _type) {
        _type = t;
        emit typeChanged();
    }
}

QStringList KeyGenerator::types()
{
    return _types;
}

QString KeyGenerator::filename()
{
    return _filename;
}

void KeyGenerator::setFilename(const QString &f)
{
    if (f != _filename) {
        _filename = f;
        emit filenameChanged();
    }
}

QString KeyGenerator::passphrase()
{
    return _passphrase;
}

void KeyGenerator::setPassphrase(const QString &p)
{
    if (p != _passphrase) {
        _passphrase = p;
        emit passphraseChanged();
    }
}

void KeyGenerator::generateKey()
{
    // Sanity check on arguments
    if (_type.isEmpty() or _filename.isEmpty() or
        (_passphrase.length() > 0 and _passphrase.length() < 5)) {
        emit keyGenerated(false);
        return;
    }

    // Remove key file if it already exists
    if (QFile::exists(_filename)) {
        QFile::remove(_filename);
    }

    // Execute ssh-keygen -t type -N passphrase -f keyfile
    QProcess *proc = new QProcess;
    QString prog = "ssh-keygen";
    QStringList args{"-t", _type, "-N", _passphrase, "-f", _filename};
    proc->start(prog, args);
    proc->waitForFinished();
    emit keyGenerated(proc->exitCode() == 0);
    delete proc;
}

The constructor initializes some of the member variables. For fun, I used the new initializer list feature of C++11 to initialize the _types member variable which is of type QStringList. The destructor does nothing, at least for now, but is there for completeness and future expansion.

Getter functions like type() simply return the appropriate private member variable. Setters set the appropriate variables, taking care to check that the new value is different from the old one and if so, emitting the appropriate signal. As always, please note that signals are created by the Meta Object Compiler and do not need to be implemented, only emitted at the appropriate times.

The only non-trivial method is the slot generateKey(). It does some checking of arguments and then creates a QProcess to run the external ssh-keygen program. For simplicity, and because it typically executes quickly, I do this synchronously and block until it completes. When done, we emit a signal with a boolean argument indicating whether key generation succeeded.
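
If blocking ever became a problem, the same work could be done in a non-blocking style by connecting to the process’s finished() signal instead of calling waitForFinished(). Here is a rough sketch; the slot onKeygenFinished() is hypothetical and would also have to be declared in the header:

#include <QProcess>
#include "KeyGenerator.h"

// Non-blocking variant (sketch): start ssh-keygen and report the result
// from a slot connected to QProcess::finished() instead of blocking.
void KeyGenerator::generateKey()
{
    QProcess *proc = new QProcess(this);
    connect(proc, SIGNAL(finished(int,QProcess::ExitStatus)),
            this, SLOT(onKeygenFinished(int)));
    proc->start("ssh-keygen",
                QStringList{"-t", _type, "-N", _passphrase, "-f", _filename});
}

void KeyGenerator::onKeygenFinished(int exitCode)
{
    emit keyGenerated(exitCode == 0);
    sender()->deleteLater();
}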

QML Code

Now let’s look at the QML code in main.qml:

// SSH key generator UI

import QtQuick 2.1
import QtQuick.Controls 1.0
import QtQuick.Layouts 1.0
import QtQuick.Dialogs 1.0
import com.ics.demo 1.0

ApplicationWindow {
    title: qsTr("SSH Key Generator")

    statusBar: StatusBar {
    RowLayout {
        Label {
            id: status
            }
        }
    }

    width: 369
    height: 166

    ColumnLayout {
        x: 10
        y: 10

        // Key type
        RowLayout {
            Label {
                text: qsTr("Key type:")
            }
            ComboBox {
                id: combobox
                Layout.fillWidth: true
                model: keygen.types
                currentIndex: 2
            }
        }

        // Filename
        RowLayout {
            Label {
                text: qsTr("Filename:")
            }
            TextField {
                id: filename
                implicitWidth: 200
                onTextChanged: updateStatusBar()
            }
            Button {
                text: qsTr("&Browse...")
                onClicked: filedialog.visible = true
            }
        }

        // Passphrase
        RowLayout {
            Label {
                text: qsTr("Pass phrase:")
            }
            TextField {
                id: passphrase
                Layout.fillWidth: true
                echoMode: TextInput.Password
                onTextChanged: updateStatusBar()
            }

        }

        // Confirm Passphrase
        RowLayout {
            Label {
                text: qsTr("Confirm pass phrase:")
            }
            TextField {
                id: confirm
                Layout.fillWidth: true
                echoMode: TextInput.Password
                onTextChanged: updateStatusBar()
            }
        }

        // Buttons: Generate, Quit
        RowLayout {
            Button {
                id: generate
                text: qsTr("&Generate")
                onClicked: keygen.generateKey()
            }
            Button {
                text: qsTr("&Quit")
                onClicked: Qt.quit()
            }
        }

    }

    FileDialog {
        id: filedialog
        title: qsTr("Select a file")
        selectMultiple: false
        selectFolder: false
        nameFilters: [ "All files (*)" ]
        selectedNameFilter: "All files (*)"
        onAccepted: {
            filename.text = fileUrl.toString().replace("file://", "")
        }
    }

    KeyGenerator {
        id: keygen
        filename: filename.text
        passphrase: passphrase.text
        type: combobox.currentText
        onKeyGenerated: {
            if (success) {
                status.text = qsTr('<font color="green">Key generation succeeded.</font>')
            } else {
                status.text = qsTr('<font color="red">Key generation failed.</font>')
            }
        }
    }

    function updateStatusBar() {
        if (passphrase.text != confirm.text) {
            status.text = qsTr('<font color="red">Pass phrase does not match.</font>')
            generate.enabled = false
        } else if (passphrase.text.length > 0 && passphrase.text.length < 5) {
            status.text = qsTr('<font color="red">Pass phrase too short.</font>')
            generate.enabled = false
        } else if (filename.text == "") {
            status.text = qsTr('<font color="red">Enter a filename.</font>')
            generate.enabled = false
        } else {
            status.text = ""
            generate.enabled = true
        }
    }

    Component.onCompleted: updateStatusBar()
}

The preceding code is a little long; however, much of the work is laying out the GUI components. The code should be straightforward to follow.

Note that we import com.ics.demo version 1.0. We’ll see where this module name comes from shortly. This makes a new QML type KeyGenerator available, so we declare one. We have access to its C++ properties as QML properties, can call its methods, and can act on its signals, as we do with onKeyGenerated.

A more complete program should probably do a little more error checking and report meaningful error messages if key generation fails (we could easily add a new method or property for this). The UI layout could also be improved to make it properly resizable.

Our main program is essentially a wrapper like qmlscene. All we need to do to register our type with the QML engine is to call:

    qmlRegisterType<KeyGenerator>("com.ics.demo", 1, 0, "KeyGenerator");

This makes the C++ type KeyGenerator available as the QML type KeyGenerator in the module com.ics.demo version 1.0 when it is imported.

Typically, to run QML code from an executable, in the main program you would create a QGuiApplication and a QQuickView. Currently, to use the Qt Quick Controls there is some additional work needed if the top-level element is an ApplicationWindow or Window. You can look at the source code to see how I implemented this. I basically stripped down the code from qmlscene to the minimum of what was needed for this example.

Here is the full listing for the main program, main.cpp:

#include <QApplication>
#include <QObject>
#include <QQmlComponent>
#include <QQmlEngine>
#include <QQuickWindow>
#include <QSurfaceFormat>
#include "KeyGenerator.h"

// Main wrapper program.
// Special handling is needed when using Qt Quick Controls for the top window.
// The code here is based on what qmlscene does.

int main(int argc, char ** argv)
{
    QApplication app(argc, argv);

    // Register our component type with QML.
    qmlRegisterType<KeyGenerator>("com.ics.demo", 1, 0, "KeyGenerator");

    int rc = 0;

    QQmlEngine engine;
    QQmlComponent *component = new QQmlComponent(&engine);

    QObject::connect(&engine, SIGNAL(quit()), QCoreApplication::instance(), SLOT(quit()));

    component->loadUrl(QUrl("main.qml"));

    if (!component->isReady() ) {
        qWarning("%s", qPrintable(component->errorString()));
        return -1;
    }

    QObject *topLevel = component->create();
    QQuickWindow *window = qobject_cast<QQuickWindow *>(topLevel);

    QSurfaceFormat surfaceFormat = window->requestedFormat();
    window->setFormat(surfaceFormat);
    window->show();

    rc = app.exec();

    delete component;
    return rc;
}

In case it is not obvious, when using a module written in C++ with QML you cannot use the qmlscene program to execute your QML code because the C++ code for the module will not be linked in. If you try to do this you will get an error message that the module is not installed.

Why I designed a front-end programming language from scratch

Today’s programming languages have traditionally been created by the tech giants. These languages are made up of millions of lines of code, so the tech giants only invest in incremental, non-breaking changes that address their business concerns. This is why innovation in popular languages like C, Java, and JavaScript is depressingly slow.

Open-source languages like Python and Ruby gained widespread industrial use by solving backend problems at startup scale. Without the constraints of legacy code and committee politics, language designers are free to explore meaningful language innovation. And with compile-to-VM languages, it has become cheap enough for individuals and startups to create the future of programming languages themselves.

Open-source language innovation has not yet disrupted front-end programming. We still use the same object-oriented model that took over the industry in the 1980s. The tech giants are heavily committed to this approach, but open-source has made it possible to pursue drastically different methods.

Two years ago, I began to rethink front-end programming from scratch. I quickly found myself refining a then-obscure academic idea called Functional Reactive Programming. This developed into Elm, a language that compiles to JavaScript and makes it much easier to create highly interactive programs.

Since the advent of Elm, a lively and friendly community has sprung up, made up of everyone from professional developers to academics to beginners who have never tried functional programming before. This diversity of voices and experiences has been a huge help in guiding Elm towards viability as a production-ready language.

The community has already created a bunch of high quality contributions that are shaping the future of Elm and are aiming to shape the future of front-end programming.

Dev tools

Early on, I made it a priority to let people write, compile, and use Elm programs directly from their browser. No install, no downloads. This interactive editor made it easy for beginners and experts alike to learn Elm and start using it immediately.

In-browser compilation triggered lots of discussion, ideas, and ultimately contributions. Mads Flensted-Urech added in-line documentation for all standard libraries. Put your cursor over a function, and you get the type, prose explanation, and link to the library it comes from. Laszlo Pandy took charge of debugging tools. He is focusing on visualizing the state of an Elm program as time passes, even going so far as pausing, rewinding, and replaying events.

Runtime

I designed Elm to work nicely with concurrency. Unfortunately, JavaScript’s concurrency support is quite poor with questionable prospects for improvement. I decided to save the apparent implementation quagmire for later, but John P. Mayer decided to make it happen. He now has a version of the runtime that can automatically multiplex tasks across many threads, all implemented in JavaScript.

Common to all of these cases are driven individuals who knew they could do it better. This is how Elm got started and how it caught the attention of Prezi, a company also not content to accept JavaScript as the one and only answer for front-end development. I have since joined the company for the express purpose of furthering work on Elm.

We do not need to sit and hope that the tech giants will someday do an okay job. We can create the future of front-end programming ourselves, and we can do it now.