It's done: Rules 2 is out!

Submitted by fago on Mon, 10/10/2011 - 19:13
Finally, slightly more than two years after I started the initial development, I'm happy to announce the release of Rules 2.0 for Drupal 7!

So what's new compared to Rules 1.x?

While the fundamental concepts of "Event-Condition-Action rules" and parametrized actions remain, Rules 2.x is a complete rewrite, and quite a few things have changed. It now builds upon the Entity API module to fully leverage the power of entities and fields in Drupal 7. Change a taxonomy term? No problem. Moreover, Rules 2 now allows you to select any entity property or field via its so-called "data selection" widget:

Data selection

The Rules data selection widget shows all matching data properties when configuring an action or condition argument. Say you configure an action to send a mail: by using the data selector comment:node:author:mail you can easily send mail to the author of the commented node. The data selection auto-complete helps you find suitable data selectors.

You might note that data selectors like node:title look like token replacements. But as actions need more than just textual data, the data selector hands them the raw data, e.g. full entity objects or whatever fits the data type. Thus, data selectors are not implemented via token replacements, but via the entity.module's Entity property info system. Still, the Entity Tokens module (which comes with Entity API) makes sure token replacements are available for all the data selectors too.

In the very same way one can naturally access fields, e.g. node:field-tags gets you all the tags of your article node. However, as only articles have tags, Rules first needs to know that the variable node is an article for that to work. Thus, make sure you've used the "Content is of type" or the "Data comparison" condition to check that it's an article. Analogously, if you have an "entity" data item, you can use the "Entity is of type" condition to make sure it's a node and access node-specific properties afterwards! Read more about data selection in the handbooks.
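Under the hood, any module can make its data available to data selection via the Entity API's property info hooks. A minimal sketch of exposing an extra node property - the module name, property name and getter are hypothetical examples:

```php
<?php
/**
 * Implements hook_entity_property_info_alter().
 *
 * Exposes a hypothetical 'mymodule_rating' property on nodes, making it
 * available to Rules' data selection as node:mymodule-rating.
 */
function mymodule_entity_property_info_alter(&$info) {
  $info['node']['properties']['mymodule_rating'] = array(
    'label' => t('Rating'),
    'type' => 'integer',
    'getter callback' => 'mymodule_rating_get',
  );
}
```

Because Entity Tokens builds on the same information, the property automatically shows up as a token replacement too.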

Switching parameter input modes

Relatedly, Rules 2 allows you to switch input modes while configuring the argument for an action parameter. Say you have an action that works with a vocabulary. Usually people might select the vocabulary to work with from the list of available vocabularies, but sometimes one wants the action to use the vocabulary of a specific taxonomy term. This is where switching parameter input modes comes into play: it allows you to switch from the fixed input mode (= configuring a specific vocabulary) to the data selection input mode, so you could just configure term:vocabulary as an argument using the data selection widget. Rules 2 provides small buttons below each parameter's configuration form which allow you to switch the input mode:


Components

Components are standalone Rules configurations that can be re-used from your reaction rules or from code. In Rules 1.x, "rule sets" were already available as components - but with Rules 2.x there are multiple component types: rule sets, action sets, rules, and "AND"/"OR" condition sets. Rule sets offer maximum flexibility, but if the extra layer of having multiple rules is unnecessary for your use case, you can now go with the simpler action set or a single "rule" component! Next, the condition sets make it possible to define re-usable condition components. Components work upon a set of pre-defined variables (e.g. a node), just as in Rules 1.x. However, with Rules 2.x it's now possible to provide new variables back to the caller, too. Read more about components in the handbooks.
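Invoking a component from code is a one-liner. A sketch assuming a condition-set component with the machine name node_is_promoted that takes a node as its single parameter (the component name is a hypothetical example):

```php
<?php
// Invoke a Rules component from custom code. The component machine name
// and its parameter are hypothetical examples; rules_invoke_component()
// passes the arguments as the component's pre-defined variables.
$node = node_load(1);
if (rules_invoke_component('node_is_promoted', $node)) {
  drupal_set_message(t('This node is promoted.'));
}
```

The same call works for action sets and rule sets; for components that provide variables back, the return value contains the provided variables.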

Loops and lists

Rules 2 is finally able to properly deal with loops and lists! That means you can now access all your multi-valued fields, e.g. the tags of an article node, so you can easily loop over the list of tags and apply an action to each tag. That's also very handy in combination with node-reference or user-reference fields. Send a notification mail to all the referenced users? No problem. Furthermore, one can access individual list items directly using the data selector - just use node:field-tags:0:name to access the name of the first tag. If you do so, you might want to check whether a tag has actually been specified by using the "Data value is empty" condition, though. Read more about loops in the handbooks.

Improved debug log

Fortunately, there has been another Rules-related Google Summer of Code project this year: Sebastian Gilits worked on improving the Rules debug log as part of his project! The debug log now makes use of some JavaScript and appears fully collapsed by default, so it's much easier to see which events have been triggered and which rules have fired in the first place. We've also included edit links so you can easily jump from the debug log to the Rules UI in order to fix up a misconfigured rule.

Publishing Linked Open Data of Austria at the Vienna Create Camp 11

Submitted by fago on Sun, 10/09/2011 - 18:03
Last weekend I attended the Vienna Create Camp 11, a great event for collaboratively creating applications related to open data and accessibility. Six members of Drupal Austria formed a team to make use of open data published by the cities of Vienna and Linz.

Unfortunately, it turned out that the data is published in various different data structures and formats, so re-using the data in an efficient manner is hard. It's nice to see that the city of Linz is using CKAN to publish the data, so some basic information about the data sources (format, URL, ..) is available. However, each data set is still published using different data structures, so making use of a data source requires writing or configuring a specific adapter. So we've started "drupalizing" the data using feed importers, configuring one content type and one feeds importer per data source.

Fortunately, once the data is in Drupal we can use it with all of Drupal's tools. Publishing the data as Linked Open Data is as easy as enabling Drupal's RDF module and providing some meaningful mappings, for which we've made use of vocabularies as far as possible. Now all imported data items are available via RDFa, RDF, JSON or XML. But most convenient is probably the SPARQL endpoint, which enables one to directly query the published datasets. So finally, we have real Linked Open Data of Austria - yeah!

We've also made use of OpenLayers, with some nice base layers powered by TileStream, to create nice looking maps - check out the demonstration site. Next, we've published our open data features, so everyone can easily make use of our work and quickly use all that data with Drupal.

Unfortunately, the data of the city of Linz required some custom massaging in order to get proper geo coordinates out of the projection they used. Thus, we were not able to create easy-to-use feeds configurations for Linz as we did for Vienna. Maybe the city of Linz improves that in the future...
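To give an idea of what the SPARQL endpoint enables, here is a sketch of a query listing imported items by title. The actual predicates depend on the configured RDF mappings; dc:title is just an assumption:

```sparql
# List the first ten imported data items and their titles.
PREFIX dc: <http://purl.org/dc/terms/>
SELECT ?item ?title
WHERE {
  ?item dc:title ?title .
}
LIMIT 10
```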

Speeding-up Drupal test-runs on Ubuntu 11.04

Submitted by fago on Fri, 07/15/2011 - 14:30
Using a ramdisk for MySQL helps *a lot* to speed up simpletests in Drupal, as does disabling InnoDB as described on the corresponding drupal.org documentation page. While that documentation provides an init.d script, Ubuntu 11.04 ships an upstart script for MySQL. So I modified the instructions and came up with the following in order to put MySQL on a ramdisk on Ubuntu 11.04 (and later):
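The gist of the approach is to stop MySQL, mount a tmpfs over its data directory, and re-initialize it - a sketch assuming the default Ubuntu data directory (/var/lib/mysql) and upstart job name (mysql); the tmpfs size is an arbitrary example:

```
# Stop MySQL via its upstart job, then put a ramdisk over its data directory.
sudo service mysql stop
sudo mount -t tmpfs -o size=512M tmpfs /var/lib/mysql
# Re-create the system tables on the empty ramdisk, then start MySQL again.
sudo mysql_install_db --user=mysql
sudo service mysql start
```

Everything on the tmpfs is gone after a reboot, which is fine for throwaway test databases; disabling InnoDB is then just a matter of the my.cnf settings from the drupal.org page.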

Nginx clean-URLs for Drupal installed in various sub-directories

Submitted by fago on Fri, 07/15/2011 - 11:29

klausi has already posted a nice, actually working nginx configuration for Drupal on his blog. That configuration is intended for Drupal installations on separate (sub)domains. However, I recently needed multiple Drupal installations in sub-directories for my development environment. To achieve that, I've created the following location directive:

        location ~ ^/([^/]*)/(.*)$ {
                try_files $uri /$1/index.php?q=$2&$args;
        }

That way, you can just add as many Drupal installations as you want in sub-folders, while they are automatically clean-URL enabled. :)
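For reference, here's a minimal server block such a directive could live in - the server name, root path and PHP-FPM socket are assumptions for a local development setup:

```nginx
server {
    listen 80;
    server_name dev.local;
    # Each Drupal installation lives in its own sub-directory below this root.
    root /var/www;
    index index.php;

    location ~ ^/([^/]*)/(.*)$ {
        try_files $uri /$1/index.php?q=$2&$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Assumed PHP-FPM socket path; adjust to your setup.
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
```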

Drupal 8: Entities and data integration

Submitted by fago on Fri, 03/11/2011 - 19:41
As a follow-up to my previous blog post Drupal 8: Approaching Content and Configuration Management, I'm going to shortly cover how the Entity API could help us with two more of Dries' points: web services and information integration. First off, for getting RESTful web services into core, having a unified API to build upon makes a lot of sense. That way we make sure that locally we have the same uniform CRUD interface as the one we expose to the web. Moreover, the possibility of having remote entities can help us a lot with integrating with remote systems. In a way, we'll get that anyway once we implement pluggable entity storage controllers (and you can even do so already in D7). But for that to be really useful, we need to know the data we want to work with. This is why I came up with hook_entity_property_info() in the Entity API module for D7. While in D7 it is built on top of the stuff that is there anyway, I think it should play a much more central role in D8, for various reasons:
  • A description of all the data properties of an entity enables modules to deal with any entity, regardless of the entity type, based purely on the available data properties (and their data types). That way, modules can seamlessly continue to work even with entities stemming from remote systems. This is how the RestWS, Search API and Rules modules already work in D7.
  • With pluggable storage backends, I see no point in SQL-centric schema information unless we are actually using SQL-based storage. By defining the property info, storage backends can work based on that instead, i.e. the SQL backend can generate its schema out of the property information.
  • When working with entities, what matters is the data actually available in the object. To a module, the internal storage format doesn't matter. In a way, the property information defines the "contract" of what the entity has to look like. Given that, all APIs should be built based on it, i.e. EntityFieldQuery (efq) should accept the value to query for exactly the same way it appears on the entity, and not in a storage-backend-dependent way (which no one can predict once storage is pluggable). In the very same way, display formatters and form widgets could just rely on the described data too.
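For illustration, this is how D7's EntityFieldQuery already lets modules query by values as they appear on the entity, independent of the storage backend - the field name here is a hypothetical example:

```php
<?php
// Query published article nodes by a field value, without knowing how the
// field is stored. 'field_rating' is a hypothetical field.
$query = new EntityFieldQuery();
$result = $query
  ->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'article')
  ->propertyCondition('status', 1)
  ->fieldCondition('field_rating', 'value', 3, '>=')
  ->execute();
```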
As we discussed in the entity API BoF, it might make sense to build it around "field types" evolving into "data types" - thus decoupling them from being a "field". In the very same way we can start building the display components around entity properties, not necessarily basing them on fields (bye bye Field API "extra fields"). Whatever the implementation ends up looking like, let's let the Entity API become our long missing data API!

Drupal 8: Approaching Content and Configuration Management

Submitted by fago on Wed, 03/09/2011 - 19:02
As a follow-up to heyrocker's great core conversations talk, I'd like to share my thoughts on that topic. So is X content? Or configuration? As heyrocker pointed out: we shouldn't have to care. So yes, the line between content and configuration is blurry - what counts as content on one site would fit the configuration bill better on another. Still, this doesn't mean we have to treat content and configuration exactly the same way. We can and need to build separate systems for configuration management and content deployment, but the point is: we need one and the same type of object to be able to behave as content or as configuration, depending on the actual requirements of the site. Not necessarily out of the box, but it should be doable such that people can implement their use cases.

To facilitate that, we need a foundational unified API - an API that deals with objects the same way regardless of whether they're configuration or content, an API that allows us to fetch any object, to export it, and to import and save it on another site. As said in the core conversation, I think the Entity API fits that perfectly. That way, an entity would basically be any data object integrated into Drupal, such that we have a unified CRUD API - and a unified way to import and export that data. It should be able to deal with machine names or auto-incremented numeric IDs, but we might also want it to deal with UUIDs out of the box. So we'd finally get a unified data API and exportability, the very foundation for solving the configuration management and content deployment problems.

But for that to take off, we need to keep the entity concept slim and not bake too many assumptions into it. Is it content or configuration? Is it user-facing? Is it fieldable or viewable/renderable? Well, maybe, maybe not. So while it makes a lot of sense to build more APIs around entities, we should never actually require them.
Instead, we could just provide the APIs such that any entity type is able to opt in if it fits the bill. In addition, I'd like to share my thoughts on how the Entity API could help us cover two more points Dries mentioned in his keynote: web services and information integration. I'll come back to that in a follow-up post.