Rudy first introduced me to Nightwish, I think sometime last year, and I have been hooked ever since - their combination of opera and heavy rock/metal/instrumental was amazing, as was the way they fold other styles of music into their whole package. Since then some things have changed: their lead singer and operatic voice, Tarja Turunen, left, and a new singer, Anette Olzon, was selected after a worldwide audition.
They released their latest album last month, and I finally went out to get it this morning. The musical style has certainly changed - but not necessarily for the worse. Anette's style is a lot more poppy, but more Gwen Stefani than Britney ... so not bad at all.
What impressed me most about the album, though, is how it has been put together, featuring a very impressive mix of music and styles. It is still a rock album overall, but features a lot more string instruments, some Irish music, some choir and even a full orchestra. Most impressive is the first song, The Poet and The Pendulum, which is like a mini opera; as an album, it is the most impressive one I have come across for a long time. At 18 Euros it was certainly not cheap, but definitely well worth it.
-------------
On the note of pop stars, I found it very amusing that at Virtual Goods, everyone used Britney Spears as the artist of the song they were trying to protect. I have also found that people tend to use Titanic as the movie they want to protect. These are the works I would least like to protect with DRM, which is probably why no one takes DRM seriously.
19 October 2007
Reflections on Virtual Goods 2007
The Virtual Goods workshop series is an interesting gathering of different disciplines - IT, law and business - and is sponsored by IFIP TC 6.11. It was a comparatively small conference (considering the length of the program), with about 35 attendees, but featured participants from at least 11 countries and 5 continents (there was no South American attendee). This diversity certainly made the conference very interesting.
This is the first conference I have attended that officially started in the afternoon and then carried on to Saturday. I am not sure whether this is a good idea or a bad one, and I think the conference could have been accommodated in two days. But it did allow for two social events ... so I have no problem with the organisation! I am only reflecting on the papers I found interesting; the complete program, abstracts and presentations can be found here.
In the first (and only) paper session of the first day, Eetu Luoma's paper on copyright management was definitely the highlight. He is looking specifically at the requirements for electronic copyright management in universities. Universities are in a strange position in some ways - they need to encourage learning and publication, but at the same time have to control the copyrights of these publications, which are complex to manage because of the number of parties involved: publishers, the authors and the university. Add to this the cost of lawyers and administration, and copyright management is often just a mess ... and mostly not available in an electronic form.
The social event featured a keynote talk by Dr Susanne Guth, who discussed content protection in the mobile TV standard DVB-H, which is being rolled out in Germany. There are two profiles available for DVB-H: the smartcard profile (driven by smartcards such as the SIM) and the DRM profile. The talk was enlightening particularly because of the decision process and the factors that affected the decision. The DRM profile is cheaper and easier to implement, and arguably offers a more complete, open and flexible solution. The smartcard profile is more expensive, a lot more complicated to implement and features some proprietary technology. Yet it is the smartcard profile that is being deployed, for a simple reason: it allows operators to lock customers in for longer, which means fewer customers will switch networks. At the end of the day, that means a higher probability of breaking even, and so the economics dictated the choice of system.
Some of the issues raised by Eetu were addressed by my presentation, bright and early the next morning. My presentation on negotiations was really an advancement of my first ACM paper and the paper I contributed to the Digital Media Project last year. It is one of the cornerstones of my PhD, and it was nice to see that the paper following mine, on the use of ODRL to specify web service agreements, would be a great application of my protocols.
The second session of the second day was possibly one of the most interesting of the conference. Martin Springer gave a presentation on music sampling and an ontology that can represent sampling rights. The ultimate aim of the ontology is to create a mapping of copyright law. I have two reservations about this: I do not think it is possible to build such an ontology, and I do not think it is possible to technically enforce licenses that allow sampling. That does not mean such attempts should be ignored, though. The next paper was from Australia, looking at search engines and copyright infringement - and some famous cases were analysed. The last paper of the session was interesting to me for two different reasons. Firstly, the author presented an alternative rights model: instead of focusing on licensing, it focused on copy control - basically, if you have a copy, you can do what you want with it. The model is a very impressive representation of the analogue world - no doubt about that - but I think it is digitally irrelevant and not enforceable: digital goods exist and operate through copies - on the disk, on the network, in memory - and controlling this is infeasible. The second reason I found it interesting is that the author was an independent researcher, in fact not even involved in IT in his daily professional life. Since the growth of large universities and corporate research labs, private research has become almost non-existent, and this is the first time I have seen such a contribution (in IT at least). The author, Nicholas Bentley, told me that many conferences and journals have refused to even consider his work ... maybe we should get off our high horses. Surely public access to academic work is what academia is all about?
The next interesting session was the next day, on superdistribution, and featured two contrasting papers. The first paper presented an incentive scheme for superdistribution - a lot like Amway, but for digital goods. It sounded a bit like a pyramid scheme to me, and I do not think the business model can be supported, for music anyway. The next paper was on why superdistribution incentive schemes will fail. In their admittedly short study, the authors found that users are just not interested in superdistribution, and one of the key reasons was that users simply did not want to make money off friends.
The last session of the conference had two interesting papers: the first on user collaboration in Second Life. I had not thought a lot about virtual environments and their impact on virtual goods - but they represent some of the most interesting cases. If you think about it, the real market for World of Warcraft items exists because they are unique and cannot be replicated. I wonder if some of these models can be replicated outside the tight controls of the virtual worlds. The last paper was on the specific DRM needs of universities. It prompted a lively debate, and was a great finish to the conference.
18 October 2007
MP3, AAC, DRM and the Future of Music
One of the highlights of the 2007 Virtual Goods Workshop was the presentation by Prof. Dr. Karlheinz Brandenburg titled “From data compression to virtual goods - technical perspectives for the usage of digital music”. Prof. Brandenburg is one of the inventors of MP3, and has been involved in the audio field ever since. I should also point out that the head of the Multimedia Security department at Fraunhofer IIS (where I am currently interning, and thus my boss), Stephan Krägeloh, is also one of the co-inventors. However, Prof. Brandenburg is the main inventor of MP3, and can be regarded as the “Father of the MP3”.
The focus of the conference was virtual goods, and MP3 is perhaps the most significant virtual good. For the first part of his talk, Prof. Brandenburg focussed on the development of MP3, which, like many new technologies, was greeted with scepticism (why would anyone need audio compression?) and took a long time to get through the standardisation process.
Of course, MP3 really took off when the Internet took off; but even then, ironically, piracy was a big factor in its success. In the early 90s, MP3 decoders were available for free (i.e. without any patent costs), but encoders cost hundreds of US dollars. Somewhere along the line, a rogue employee was involved in releasing the encoder software for free (with a redesigned front end). Once it was on the Internet it was hard to remove, MP3 encoders became freely available to the public, and the rest is history …
AAC, first really thrown into the spotlight as the base format for Apple’s iTunes service, is the follow-up, providing better quality at the same compression ratio. AAC is also more flexible – according to Prof. Brandenburg, there is no improvement in MP3 quality above 192 kbit/s, even though the maximum bit rate is 320 kbit/s.
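As a rough back-of-the-envelope illustration of what those bit rates mean in practice (my own arithmetic, not a figure from the talk), here is a small Python sketch; the 4-minute track length is just an assumed example:

```python
# Rough file-size arithmetic for a constant-bitrate audio track (illustrative only).
# A stream of B kbit/s over T seconds occupies B * T / 8 kilobytes.

def file_size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate file size in megabytes for a constant-bitrate track."""
    kilobits = bitrate_kbps * duration_s
    return kilobits / 8 / 1000  # kbit -> kbyte -> MB (using 1 MB = 1000 kB)

track = 4 * 60  # a hypothetical 4-minute track, in seconds
for rate in (128, 192, 320):
    print(f"{rate} kbit/s -> ~{file_size_mb(rate, track):.1f} MB")
# 128 kbit/s -> ~3.8 MB, 192 kbit/s -> ~5.8 MB, 320 kbit/s -> ~9.6 MB
```

So, if the quality gain stops at 192 kbit/s, the jump to 320 kbit/s buys roughly two thirds more storage for nothing.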
Of course, no discussion of lossy compression is complete without a listening test comparing lossy (AAC) and lossless encoding. To make it harder, the test comprised three audio tracks, each with three samples, at least one of which was lossless and one lossy. I found it easy to distinguish between lossy and lossless for a classical music track, but could not hear any difference in the speech and pop music tracks. No one in the audience got all the answers right.
The last part of his talk was about DRM, and what he thinks of the future of music. In his opinion, DRM for audio will depend entirely on how much piracy occurs for non-protected files within the next year. If the record companies do not suffer significant losses, he thinks DRM will be dead within a year after that. He pointed out that other efforts at securing music distribution, such as SDMI, failed horribly, and that interoperability will remain the main factor in determining whether DRM will ultimately succeed.
But the future of audio is not only about DRM and compression; it is also about search and organisation. It is, after all, quite common to have gigabytes of music, and organising and using that information is now more important than the actual storage of the music. New ideas include automatic playlist generation (not from the tags but from the actual content of the music) and search by humming.
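As a toy illustration of what content-based playlist generation could look like (my own sketch, nothing to do with the Fraunhofer work): assume each track has already been reduced to a feature vector describing its audio content, and grow the playlist by nearest-neighbour similarity rather than by tags.

```python
# Toy sketch of content-based playlist generation (illustrative only).
# Each track is assumed to have been reduced to a small feature vector
# describing its audio content (tempo, timbre, energy, ...); the playlist is
# grown by repeatedly picking the most similar track not yet used.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

library = {                       # hypothetical pre-computed feature vectors
    "track_a": (0.9, 0.1, 0.3),
    "track_b": (0.8, 0.2, 0.4),
    "track_c": (0.1, 0.9, 0.7),
}

def playlist(seed, length=3):
    order, remaining = [seed], set(library) - {seed}
    while remaining and len(order) < length:
        nxt = max(remaining, key=lambda t: cosine(library[order[-1]], library[t]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(playlist("track_a"))  # ['track_a', 'track_b', 'track_c']
```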
Personally, I have my doubts about whether non-protected music distribution will work. As I commented in my last post, I have very good reasons to believe that there will be more pirated copies than legitimate copies after a year or so, and thus DRM will be needed, if protection is required.
16 October 2007
Rough Minutes of the Open ODRL WG Meeting
These are the rough minutes of the ODRL WG meeting at the 2007 Virtual Goods Workshop in Koblenz, Germany. I hope they are complete, but I could have unintentionally left things out ... they are reconstructed from notes I typed during the meeting.
There hasn’t been a face-to-face ODRL meeting since the last International ODRL Workshop in Lisbon in 2005. With ODRL joining forces with the Virtual Goods workshop, the conference also provided an opportunity for a face-to-face meeting of the ODRL v2 working group. Three regular contributors from the mailing list were at the meeting: the co-leaders, Dr Renato Iannella and Dr Susanne Guth, and myself. However, since it was an open meeting, a number of other interested parties were also present, which provided the discussions with some new positions and insights. The other attendees included (and this is not the complete list) Pramod Jamkhedkar (PhD student from the University of New Mexico), GR Gangadharan (PhD student from the University of Trento), Martin Springer (independent contributor to the DMP) and Dr Rüdiger Grimm (from our hosts at the University of Koblenz-Landau).
The main thrust of the meeting was a push to simplify the ODRL v2 model, in an attempt to create a simpler core language which could then be extended with different profiles, such as licensing and negotiation support. Susanne Guth and I promoted the use of access control as the base model for v2. Pramod Jamkhedkar, however, promoted a database-style definition, perhaps using tuple calculus and a sound logical (mathematical) structure. In the paper I am due to present at the ACM DRM Workshop in Washington in two weeks' time, I present something that bridges these two approaches, and it could lay the foundation for the v2 model. I will post a link to the paper on the WG mailing list after the presentation.
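To make the two positions a little more concrete, here is a minimal sketch of a rights expression reduced to access-control style tuples. This is purely my own illustration - the class and field names (Permission, Constraint, assignee and so on) are hypothetical and do not come from the ODRL v2 draft.

```python
# Illustrative sketch only: a rights expression reduced to access-control style
# tuples of (party, action, asset, constraints). Names are hypothetical and do
# not reflect the actual ODRL v2 model.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    name: str        # e.g. "count", "datetime"
    operator: str    # e.g. "lteq"
    value: str

@dataclass(frozen=True)
class Permission:
    assignee: str            # the party granted the permission
    action: str              # e.g. "play", "print"
    asset: str               # identifier of the protected work
    constraints: tuple = ()  # tuple of Constraint objects

def is_allowed(request, permissions):
    """Access-control style check: does any permission tuple match the request?
    (Constraint evaluation is omitted; it would need state such as play counts.)"""
    party, action, asset = request
    return any(p.assignee == party and p.action == action and p.asset == asset
               for p in permissions)

licence = [Permission("alice", "play", "urn:track:42",
                      (Constraint("count", "lteq", "5"),))]
print(is_allowed(("alice", "play", "urn:track:42"), licence))  # True
```

The tuple-calculus view Pramod advocated would treat such permissions as rows in a relation and the permission check as a query over it; the access-control view keeps the same data but centres the model on the evaluation function.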
Martin Springer raised the point that a model depends on what we need to describe, and that this requires detailed use cases. Susanne Guth countered that detailed use cases would lead to very specific models, which would not achieve the generality required for ODRL. In this respect, the current approach of stating general requirements (or goals) for the model is much better than specific use cases.
Rüdiger Grimm questioned the necessity of the duties element; after all, duties could be reworded as constraints. Pramod Jamkhedkar commented that everything could be modelled as rights and constraints – the use of duties and parties depends on the level of abstraction we want. It was felt that duties provide an additional level of expressiveness and should thus be retained.
Susanne Guth raised the issue of the container. The container, as defined in ODRL 1, was too complex and needed to be refined. Susanne proposed a narrower definition of the container, as defined in the current model document. She also suggested the use of XLink for the XML implementation of the concept.
Renato Iannella raised the issue of whether the exclusive attribute needs to be retained. It is a rarely used concept, and I commented that it could easily be expressed as a duty instead of an attribute. It was agreed that this may be the best approach, and an example of how it can be used could be discussed in the model.
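A minimal sketch of what that could look like - purely illustrative, with hypothetical field names rather than anything from the ODRL v2 draft - where exclusivity becomes a duty on the assigner not to grant the same permission to anyone else, instead of a boolean attribute:

```python
# Illustrative sketch only: expressing "exclusive" as a duty instead of an
# attribute. Field names are hypothetical, not from the ODRL v2 draft.

# As a boolean attribute on the permission:
permission_with_attribute = {
    "assignee": "alice", "action": "distribute", "asset": "urn:track:42",
    "exclusive": True,
}

# As a duty placed on the assigner instead, which keeps the core model smaller:
permission_with_duty = {
    "assignee": "alice", "action": "distribute", "asset": "urn:track:42",
    "duties": [
        {"party": "assigner",
         "duty": "not-grant",                                           # do not grant the same permission...
         "scope": {"action": "distribute", "asset": "urn:track:42"}},   # ...to anyone else
    ],
}
```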
Also with duties, the non-performed section was removed, as it can be expressed as another, separate duty. This approach could also have lower processing requirements than the current approach of a non-performed section.
In the discussion of the Assets element, it was decided to remove WEMI and metadata. Parts, which aimed to define collections of assets, is not strictly necessary and was thus removed. The inheritance model, however, needs to be revisited – OMA uses the inheritance model, but it does not strictly belong under the Asset element. Any changes to the inheritance model would require some clarification from OMA.
reType, which I introduced to simplify the agreement/offer model, was retained. It offers a high degree of flexibility, and it was decided that the vocabulary for reType will not be defined (apart from agreement, probably).
The tradable attribute was removed, as negotiation support will be a profile and not a core component of the model.
The following elements were removed: signature, encryption, legal and communication.