Tuesday, March 01, 2011

Social TV poster #1: PeoplePixPlaces

(Update: This concept has evolved further and turned into a final project called WorldTV, complete with a software demo and video.) From the Social TV class I'm taking this semester at the MIT Media Lab: a social TV application based on news. I came up with PeoplePixPlaces, a Web-based application that gives a window into local news using geocoded video, pictures, and tweets, as well as individual users’ own social lenses. The poster explains the concept in more detail:

[Poster: PeoplePixPlaces social TV concept]

The genesis of the idea predates MAS 571. Last semester in 6.898 (Linked Data Ventures), I proposed a similar project, PixPplPlaces. The one-sheet vision:


“People want to know a lot about their own neighborhoods.”

- Rensselaer Polytechnic Institute Professor Jim Hendler, discussing Semantic Web-based services in Britain, 10/18/2010

While superficial mashups that plot crime reports, celebrity sightings, or restaurant data on street maps have been around for years, no service takes geotagged tweets, photos, and videos, along with their associated semantic context, and plots them on a map according to when the information was created. The idea behind PixPplPlaces (a sketch of one indexed item follows this list):

• Index some publicly available location-based social media data in a Semantic Web-compatible form
• Plot the data by time (12:25 pm on 10/24/2010) and location (Lat 42.33565, Long -71.13366) on existing Linked Data geo resources
• Bring in other existing Linked Data resources (DBPedia, rdfabout U.S. Census, etc.) that can help describe the area or other aspects of what's going on, based on the indexed social media data
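
As a rough sketch of what one indexed item might look like, here is a minimal example using Python's rdflib with the W3C Basic Geo (WGS84) and Dublin Core vocabularies. The item URI is a hypothetical placeholder; the timestamp and coordinates are the sample values from the list above.

```python
# Minimal sketch: index one geotagged social media item as RDF.
# The item URI is a hypothetical placeholder.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
DCTERMS = Namespace("http://purl.org/dc/terms/")
SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
g.bind("geo", GEO)
g.bind("dcterms", DCTERMS)
g.bind("sioc", SIOC)

item = URIRef("http://pixpplplaces.example/item/1")  # hypothetical
g.add((item, RDF.type, SIOC.Post))
g.add((item, DCTERMS.created,
       Literal("2010-10-24T12:25:00-04:00", datatype=XSD.dateTime)))
g.add((item, GEO.lat, Literal("42.33565", datatype=XSD.decimal)))
g.add((item, GEO.long, Literal("-71.13366", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Once items carry dcterms:created and geo:lat/geo:long triples, plotting them by time and place, and joining them against DBPedia or Census resources, reduces to standard SPARQL queries over the index.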

Potential business models:

• Professional services: News organizations can embed PPP mashups of specific neighborhoods on their websites, add location-based businesses that are their ad clients, or use the tool as an information resource for journalists -- what was the scene at the site of a fire on Monday evening, just before it broke out? Lawyers, insurance companies, and others might be interested in using this for investigations.
• Advertising services: A suggestion from Reed - "a source of ads/offers in Linked Data format - for the sustainability argument as a business. Maybe in the project you can develop an open definition that would let multiple providers publish ads in the right format that you could scrape/aggregate and then present to end users? If you demonstrate a click-wrap CPC concept you might be able to mock it up by scraping ads from Google Maps or just fake it." (A sketch of one such offer record follows.)
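
Reed's "ads/offers in Linked Data format" idea implies a small open vocabulary. Here is a minimal sketch of what one geo-targeted offer record might look like, in the same rdflib style as the index sketch above; the "ads" vocabulary and every value are invented for illustration.

```python
# Hypothetical open ad/offer record; the "ads" vocabulary is invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import XSD

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
ADS = Namespace("http://pixpplplaces.example/ads#")  # invented vocabulary

g = Graph()
offer = URIRef("http://pixpplplaces.example/ads/offer/42")
g.add((offer, ADS.headline, Literal("2-for-1 burritos until 9 pm")))
g.add((offer, ADS.clickUrl, URIRef("http://burrito.example/coupon")))
g.add((offer, ADS.costPerClick, Literal("0.25", datatype=XSD.decimal)))
g.add((offer, GEO.lat, Literal("42.3487", datatype=XSD.decimal)))
g.add((offer, GEO.long, Literal("-71.0829", datatype=XSD.decimal)))
```

Any provider publishing records in an agreed-upon shape like this could be scraped and aggregated -- which is exactly the openness-versus-competitive-advantage tension noted below.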

To be researched:
• Is social media geodata (geotagged Flickr photos, geolocated Tweets) precise enough to be plotted on a map?
• Should this be a platform or a service?
• How can the data be scraped, indexed, or made into "good" Semantic Web information?
• Would any professional organization -- news, legal, insurance -- pay for it?
• How viable is the advertising model in a crowded field chasing a (currently) small pool of clients?

The Semantic Web requirements for the 6.898 project and its emphasis on tweets and photos gave the tool a different flavor from the Social TV version; in addition, I didn't consider the possibility of using "social lenses" to filter the contributions of people in the user's social circle. But for both projects, I recognized that the business case is weak, not only in terms of revenue but also in terms of maintaining a competitive advantage if open platforms and standards are used.

Incidentally, I first had the idea for a geocode-based application for user-generated content back in 2005 or 2006. My essay Meeting The Second Wave explains the original idea:

In the second wave of new media evolution, content creators and other 'Net users will not be able to manually tag the billions of new images and video clips uploaded to the 'Net. New hardware and software technologies will need to automatically apply descriptive metadata and tags at the point of creation, or after the content is uploaded to the 'Net. For instance, GPS-enabled cameras that embed spatial metadata in digital images and video will help users find address- and time-specific content, once the content is made available on the 'Net. A user may instruct his news-fetching application to display all public photographs on the 'Net taken between 12 am and 12:01 am on January 1, 2017, in a one-block radius of Times Square, to get an idea of what the 2017 New Year's celebrations were like in that area. Manufacturers have already designed and brought to market cameras with GPS capabilities, but few people own them, and there are no news applications on the 'Net that can process and leverage location metadata — yet.
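
The query the essay imagines is straightforward to sketch. Below is an illustrative Python filter over hypothetical photo records, using a haversine great-circle distance to approximate the "one-block radius"; the Photo record and all function names are assumptions for illustration, not part of any real application.

```python
# Illustrative sketch of the essay's time-and-place query: find photos
# taken within a time window and a radius of a point. The record format
# is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Photo:
    url: str
    taken_at: datetime
    lat: float
    lon: float
    tags: list = field(default_factory=list)  # autotags, used below

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6,371 km

def photos_near(photos, lat, lon, radius_m, start, end):
    return [p for p in photos
            if start <= p.taken_at <= end
            and haversine_m(p.lat, p.lon, lat, lon) <= radius_m]

# Times Square, 12:00-12:01 am on January 1, 2017, ~one block (~80 m):
results = photos_near(photos=[], lat=40.758, lon=-73.9855, radius_m=80,
                      start=datetime(2017, 1, 1, 0, 0),
                      end=datetime(2017, 1, 1, 0, 1))
```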

Other types of descriptive tags may be applied after the content is uploaded to the 'Net, depending on the objects or scenes that appear in user-submitted video, photographs, or 3D simulations. Two Penn State researchers, Jia Li and James Wang, have developed software that performs limited autotagging of digital photographs through the Automatic Linguistic Indexing of Pictures project. In the years to come, autotagging technology will be developed to the point where powerful back-end processing resources will categorize massive amounts of user-generated content as it is uploaded to the 'Net. Programming logic might tag a video clip as "violence," "car," "Matt Damon," or all three. Using the New Year's example above, a reader may instruct his news-fetching application to narrow down the collection of Times Square photographs and video to display only those autotagged items that include people wearing party hats.
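
The party-hat refinement the essay describes then becomes a one-line filter on the same hypothetical records, assuming the back-end autotagger has populated each item's tags list:

```python
# Narrow the time-and-place results to items the (hypothetical)
# autotagger labeled "party hat":
with_hats = [p for p in results if "party hat" in p.tags]
```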

For the Social Television class, we have to submit two more ideas in poster sessions. I may end up posting some of them to this blog ...
