
Listening to the interwebs

The other day I saw a tweet that pointed me towards @SavvyCitizens and their Top Ten online resources for getting savvier ahead of the general election. It really is a top list of resources, and all but one of them is fully featured for free.

Council Monitor is the one you have to pay for to get all its features. Now, for a basic ability to look around the country, see your council’s sentiment rating and compare it with others, this is excellent. It can give you that sort of information at a glance. However, to find out anything more you need to pay. And, for me, the costs they’re suggesting make this a little bit more than just Freemium.

The basic subscription of £99 a month gives you insight into your own organisation and allows you to actually see the mentions being made about you. For an extra £100 a month you can add 5 keywords and for the princely sum of £299 a month you can have up to 10. And they’re the special introductory prices.

OK, so that’s only about £1,200 a year and no more than £4,000 tops, so what’s the big deal? After all, that’s nothing for organisations with budgets in the multi-millions. But it’s this seemingly common attitude to the pennies that leaves me thinking it’s no wonder we have a problem with the pounds. So perhaps what follows is not worth your effort, but I end up feeling a bit disappointed that we’d rather pay for a service than shape the tools that are out there ourselves*.

Council Monitor is an aggregator of content that can be found ‘out there’ but happens to be housed within a shiny package that allows for comparison across the national picture. I have nothing at all against the shiny but what is most important is actually hearing what is being said, listening to it and then responding. It’s gratifying to know that ours is the council with the most positive mentions in the country but it’s not what’s important.

What’s important is recognising that people are saying things and we need to hear them because our service delivery can be improved by responding to the comments being offered in cyberspace. Not in a terrifying Big Brotheresque fashion but in the way that we’re coming to expect of organisations and companies that are important to us. The way that sees a need and then fills it or hears a criticism and fixes it so that not only are people valued for their contribution but the next person benefits from such a proactive response.

So, on that basis it’s content which is key. It’s not the overview of sentiment, which can be picked up for free; it’s the actual information itself, the stuff that sets you back £99 a month for a single keyword, that an organisation wants to hear. Set against Radian6 or one of the other very impressive, and fairly expensive, reputation management/social media monitoring services, that seems good value.

But we don’t have the budget even for the good value. Dave Briggs flagged up this list of Social Media Monitoring resources, which are free and sometimes have similarly shiny interfaces. But we’ve been thinking about how we make something that we can control and that can pull a variety of different sources into one place. It is still at a prototype stage but the idea of doing something ourselves, whilst it might seem daunting, may actually be preferable.
But first, 2 caveats and 3 tools.

The caveats:
  • These are not pretty solutions
  • There is a lot of potential to improve them

The tools:
  • A variety of search engines
  • Yahoo Pipes
  • Netvibes

There is an impressive array of tools with which to interrogate the internet. We identified 15 services as being useful for different reasons, but the list is by no means exhaustive.

Almost all of these use APIs to enable the interrogation of their results from afar. What this means is that we can enter a search term away from the site in question and have the response delivered straight to us, whether as an RSS feed, an email or a widget.
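
To make that concrete, here is a minimal sketch of the idea in Python, assuming a service that publishes its search results as an RSS feed; the endpoint URL and the search term are placeholders rather than any particular service’s real API.

```python
# Illustrative only: query a search service remotely for a term and read the
# results back as RSS. The endpoint below is a placeholder, not a real API.
import urllib.parse
import feedparser  # third-party library: pip install feedparser

SEARCH_ENDPOINT = "https://example.org/search.rss?q="  # hypothetical endpoint

def search_mentions(term):
    """Return (title, link) pairs for a search term exposed as an RSS feed."""
    feed = feedparser.parse(SEARCH_ENDPOINT + urllib.parse.quote(term))
    return [(entry.get("title", ""), entry.get("link", "")) for entry in feed.entries]

for title, link in search_mentions("Anytown Council"):  # placeholder search term
    print(title, link)
```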

In order to do this I use Yahoo Pipes. Now, I’ve blogged here and here about how to do this for wildly different purposes and I like Yahoo Pipes. It is certainly quite daunting to begin with and it can be quite temperamental but on the whole it is a very clever environment in which to build tools that can search for information, connect it together and then filter it as necessary. We’ve used it to make pipes for the search engines listed above.

So, using that lovely technology we’ve put together a pipe that looks at SocialMention and, crucially for the point I’m hoping to make, returns sentiment too. At the moment it pulls two pipes, the Social Mention search and the Sentiment tracker, into one.
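
For anyone who would rather see the logic than the Pipes canvas, here is a rough Python equivalent of what the combined pipe does, assuming the two pipe outputs are available as RSS; both feed URLs are placeholders standing in for your own pipes.

```python
# Sketch of the combined pipe: pull the 'mentions' feed and the 'sentiment'
# feed and union them into one list, newest first. Both URLs are placeholders.
import time
import feedparser  # third-party library: pip install feedparser

MENTIONS_FEED = "https://example.org/socialmention-mentions.rss"    # placeholder
SENTIMENT_FEED = "https://example.org/socialmention-sentiment.rss"  # placeholder

def combined_feed(urls):
    items = []
    for url in urls:
        items.extend(feedparser.parse(url).entries)
    # Sort newest first where a published date is available.
    items.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0), reverse=True)
    return items

for entry in combined_feed([MENTIONS_FEED, SENTIMENT_FEED]):
    print(entry.get("title", "(no title)"))
```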

If it doesn’t work in the page then take a look at one of the following:

  • SocialMention (all mentions)
  • SocialMention (sentiment)
  • SocialMention (mentions and sentiment)

We’ve got pipes for all the search engines we listed above and had wanted to make a single feed from these individual elements, but we find Pipes cannot cope with this; although, if there were a recognition of what we could do and a commitment to resourcing it, I think we could probably identify some other solutions too. And the beauty of it? Once you have it set up and outputting for one search term you can set up more using the same infrastructure (if you want it all lumped together it will accept multiple search terms separated by commas).

We have a dump of all our pipes onto one page. Some of them do not contribute anything to us and will be rooted out; some of them duplicate content and that may mean those two feeds could be merged into one; and being a public page makes tracking and storing activity impossible. In practice this would be a private page accessible to whoever has the responsibility to keep track of the content that had been seen and/or dismissed and that which was still of interest or had not yet been looked at. The Comms & Marketing team are going to be testing it out and exploring how best to use the information and how to process it for the benefit of the organisation.

It is true that we might not be getting the same results as Council Monitor and we might not be able to gauge sentiment elsewhere (although with the right commitment to developing this we could certainly get there). It’s also true that we can’t get trends but that’s just a metric that means nothing if we’re not hearing or responding to the people who are talking about, or to, us.

We’re exploring how we make this better and more useable. I’m moving placements but I’ll look to blog through the technical aspects of making these things happen so that you can make your own sentiment monitoring tools. But, in the meantime, feel free to test ours and see whether they’re useful as an alternative to spending money.

🙂

*I feel the same about GovDelivery. What they offer is much more technical and would require more effort to duplicate but, nevertheless, it is essentially publishing content in ways that would not be difficult to fashion yourselves. At least that’s my take on it.

How to get BBC Travel updates via RSS using Yahoo Pipes

Here’s a bit of a departure from my normal blogging content, sporadic though it is.
I’ve just been at university, and while I was there I got an email from a colleague asking about good examples of transport content for local government websites. I didn’t throw the query out to Twitter particularly well, as the responses I got were about examples of dynamic travel news, such as the Highways Agency Clearspring/GovDelivery widget or Godalming’s repurposing of the same content to give geographical proximity.
Yesterday I was looking at how I might get back to York because of the weather. During the afternoon’s lecture, cursing my stupidity at not leaving at lunchtime, I visited the BBC and discovered, to my surprise, that their traffic details offer nothing in the way of subscription.
With plenty of time on trains, platforms and coaches to tinker, I thought I’d see if I could do something about that. The terms of the BBC’s travel feed are that it is for personal and non-commercial use, so if you want to be able to get the latest information for yourselves, here’s how to do it very simply.
Visit this Yahoo Pipe, enter the relevant locality or service and click Run. The wonderful thing about Yahoo Pipes is that it will then give you an RSS feed, or with a quick click of a button a badge you can put onto your own blog.
But maybe you want to get to grips with what’s going on behind the scenes, so here’s a quick introduction to the world of Yahoo Pipes.
Now, I love technology and have some basic knowledge of PHP, HTML and CSS that’s faded over time, but I’ve found Pipes to be a brilliant tool for doing a whole host of things. This might not be perfect but it does work! Obviously the BBC don’t want this stuff being used commercially, because they pay for it, but if you’d like to build this pipe yourself, here’s how I did it.
Step 1
The first thing to do is extract the data from the BBC. The irony is that the BBC actually use RSS to populate the page but don’t expose it for syndication. The format of the page URL is http://www.bbc.co.uk/travelnews/local/york.shtml; ‘york’ is the only part that changes for each locality.
Step 2
So we need to build that URL. To do that I created a new pipe and selected User inputs > Text input. The ‘name’ field designates what these things will be called; the ‘prompt’ is the text displayed alongside the empty entry boxes when you run the pipe; the ‘position’ determines where the input is displayed; the ‘default’ is what the field contains automatically; and ‘debug’ is the content used by the pipe in its design state when you’re testing it.
Step 3
This provides the area information, in my case york. The BBC URL needs to have .shtml added to the end, so we use String > String Builder (the Highways Agency feed requires .xml and follows the same principles). We need to connect the String Builder to the Text input box, and this is where the name ‘pipes’ comes from: by clicking on the circular connector and dragging it to another, the two modules are connected, or wired, together. Having done that, click the ‘+’ to add another part to the string, the .shtml or .xml.
Step 4
Now we can finish the URL itself. To do that we need URL > URL Builder.
The ‘base’ is the URL we’re tacking the string we’ve created onto. For us this is:
http://www.bbc.co.uk/travelnews/local and
http://www.highways.gov.uk/rssfeed
The ‘path elements’ is what we’ve just made, so wire them together. In this instance there are no ‘Query parameters’ so just ignore that part.
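If it helps to see the same chain outside Pipes, here is a minimal Python sketch of Steps 2 to 4, with the locality value standing in for the Text input; the base URLs are the ones above, and the suffix handling simply follows the description in this post.

```python
# Rough equivalent of Steps 2-4: take the user's locality/service name,
# append the right suffix (String Builder) and join it to the base (URL Builder).
BBC_BASE = "http://www.bbc.co.uk/travelnews/local"
HIGHWAYS_BASE = "http://www.highways.gov.uk/rssfeed"

def bbc_url(locality):
    # Text input ('york') plus the '.shtml' suffix, joined onto the BBC base URL.
    return "{}/{}.shtml".format(BBC_BASE, locality)

def highways_url(service):
    # Same idea for the Highways Agency feed, which takes '.xml' instead.
    return "{}/{}.xml".format(HIGHWAYS_BASE, service)

print(bbc_url("york"))  # -> http://www.bbc.co.uk/travelnews/local/york.shtml
```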
Step 5
Having got the source data URL we need to fetch it. The Highways Agency is already in the right format so we need only use Sources > Fetch Feed and wire it into the URL. For the time being nothing more needs to be done to the Highways Agency feed so we’ll come back to it.
The BBC content is more complicated. We need to fetch the page (Sources > Fetch Page) and then cut the information from within it. First of all we wire the URL Builder into the Fetch Page module. Having looked at the source of the page, the information we’re interested in sits between two pieces of html marking the start and end of the travel-news table, so we cut content from one to the other.
Because the page is structured using a table, each individual piece of information is within a table row, or <tr>, and so </tr> (the end of each table row) is our ‘delimiter’ (the term that separates one piece of content from another).
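As a rough stand-in for Fetch Page plus that cut-and-split, here is what the same step looks like in Python; the start and end markers are assumptions (take the real ones from the page source), and the Highways Agency feed would simply go through a feed parser since it is already RSS.

```python
# Sketch of Step 5 for the BBC page: fetch the html, keep only the chunk
# between a start and end marker, then split it into rows on </tr>.
# START_MARKER and END_MARKER are assumed; copy the real ones from the page source.
import urllib.request

START_MARKER = "<table"   # assumed: opening of the travel-news table
END_MARKER = "</table>"   # assumed: closing of the travel-news table

def fetch_rows(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    start = html.find(START_MARKER)
    end = html.find(END_MARKER, start)
    table = html[start:end]
    # Each travel item sits in its own table row, so </tr> is the delimiter.
    return [row for row in table.split("</tr>") if row.strip()]

rows = fetch_rows("http://www.bbc.co.uk/travelnews/local/york.shtml")
print(len(rows), "rows found")
```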
Step 6
This is where things become considerably more complicated but I’ll try to explain it as simply as possible. Add an Operators > Regex module (short for Regular Expression); this takes a piece of content and, according to what you tell it, will repurpose it. The data from the BBC is written to be displayed as a website, not as a feed, so it contains html and other formatting information. We want to get rid of that.
So, ‘item.content’ needs tidying up. This part of the Regex module removes formatting instructions such as bold, italic and font size. In all cases we want to remove every mention of them, so tick the ‘g’ for ‘global matching’.
The other thing we want to remove is the initial code that labels each table row with a unique reference ending name="3469238">. The regex '\">' removes everything until it comes across the exact combination of "> and so takes that initial code away.
In the image you’ll see checkboxes marked g, s, m and i. Most of the g boxes are ticked; this enables global matching, so all instances of a string are covered.
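If you would rather see the tidy-up as ordinary regex, here is a sketch of the same two substitutions in Python; the patterns are my own approximations of what the module does, not the exact expressions from the pipe.

```python
# Sketch of the Step 6 clean-up on one row's content: strip formatting tags
# globally ('g'), then drop the leading anchor code that ends with '">'.
# Both patterns are approximations; tune them against the real source.
import re

def tidy(row_html):
    # Remove bold/italic/font-style formatting tags wherever they appear.
    cleaned = re.sub(r"</?(b|i|strong|em|font)\b[^>]*>", "", row_html, flags=re.I)
    # Remove everything up to and including the unique reference, e.g. name="3469238">.
    cleaned = re.sub(r'^.*?">', "", cleaned, flags=re.S)
    return cleaned.strip()

print(tidy('<td><a name="3469238"><b>A64</b> York<BR>Heavy traffic eastbound'))
# -> A64 York<BR>Heavy traffic eastbound
```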
Step 7
Now the content needs to be made into a series of separate parts. We do that using the Operators > Rename. Doing this splits the content into ‘title’, ‘description’ and ‘time’ so that we can duplicate the information and build the final items for our feed.
Step 8
Having created those, some more regex is required to restrict the content of each part. You have to analyse the source to see where the breaks, or changes, need to be made. I was using the most basic regex, '.+': the full stop represents any character and the plus sign means one or more of the preceding character.
I decided to extract the severity image, Road Name and Location as the title. The way the source code was written meant this required removing everything after the first line break (<BR>). The regex to do this is <BR>.+.
The item.description was next up. The relevant data was separated in the code by a line break preceded by a space ( <BR>), the regex was consequently '.+ <BR>'. The second thing I did here was remove an errant comma.
I also wanted to pull out the time of the update so that the feed can be sorted at the end. This data could be separated from the remainder of the content by a double line break (<BR><BR>). The regex for this was .+<BR><BR>.
On this occasion, the s check boxes are marked. These allow the ‘.’ to match across newlines which is needed with this data because the html source code of the BBC page is split across a lot of them.
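Here is the same splitting expressed as Python regex, mirroring the three rules above; the sample row is invented for illustration and real rows will leave slightly different leftovers depending on how they are laid out.

```python
# Sketch of Steps 7-8: copy the cleaned content into three parts (the Rename
# step) and cut each one down with the <BR>-based rules described above.
import re

def split_row(content):
    parts = {"title": content, "description": content, "time": content}  # Rename/copy
    # Title: drop everything from the first <BR> onwards ('<BR>.+', with 's' set).
    parts["title"] = re.sub(r"<BR>.+", "", parts["title"], flags=re.S).strip()
    # Description: drop everything up to and including the ' <BR>' before it ('.+ <BR>').
    parts["description"] = re.sub(r".+ <BR>", "", parts["description"], flags=re.S).strip()
    # Time: keep whatever follows the double line break ('.+<BR><BR>').
    parts["time"] = re.sub(r".+<BR><BR>", "", parts["time"], flags=re.S).strip()
    return parts

parts = split_row("A64 York<BR>Heavy traffic eastbound <BR>near the A1237 junction<BR><BR>10:42")
print(parts["title"], "|", parts["time"])  # -> A64 York | 10:42
```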
Step 9
As far as I understand RSS, which isn’t particularly far technically, feeds need a pubDate in order to publish properly and to sort. At the moment the feed doesn’t have one. However, what was just done to item.time has put it into an acceptable format, so renaming item.time to item.pubDate will work and gives us a way of sorting the feed to make sure the most recent content is seen first.
Step 10
The module Operators > Union will bring multiple feeds together which means wiring up the original Highways Agency feed and the Rename module.
Step 11
Operators > Sort takes both those feeds and sorts them, in this instance by item.pubDate and in descending order (most recent first).
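Pulling Steps 9 to 11 together in the same rough Python terms, with made-up items standing in for the BBC and Highways Agency outputs (a real feed would carry full RFC 822 dates rather than bare times):

```python
# Sketch of Steps 9-11: rename time to pubDate, union the two sets of items,
# and sort most recent first. The items below are invented examples.
def publish(bbc_items, highways_items):
    for item in bbc_items:
        item["pubDate"] = item.pop("time")                         # Step 9: rename item.time
    merged = bbc_items + highways_items                            # Step 10: Union
    merged.sort(key=lambda i: i.get("pubDate", ""), reverse=True)  # Step 11: Sort, descending
    return merged

bbc = [{"title": "A64 York", "description": "Heavy traffic eastbound", "time": "10:42"}]
highways = [{"title": "M1 J34", "description": "Lane closure", "pubDate": "09:15"}]
for item in publish(bbc, highways):
    print(item["pubDate"], item["title"])
# -> 10:42 A64 York
# -> 09:15 M1 J34
```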
Step 12
And that’s it, you can wire it all together and publish the pipe. Click save and then run it.
Hope that’s useful to someone! While it works, there may be better ways of doing it, so if you can help me learn how to do that I’d be very interested.