[AR Standards Discussion] Wrap up of Program Committee member meeting January 31
dirk at layar.com
Fri Feb 4 19:01:32 EST 2011
Great post about the two types of AR content you see. While I agree
that these are the two most obvious types, I'm not so sure they're the
only ones. Creativity will find its way and surprise us.
Regarding the first type: why wouldn't HTML be the most obvious way to
'program' these floating billboards? There is a lot of existing web
content that could fit into such a space (think of all the widgets),
and it is already in HTML.
We currently offer predefined spaces in the AR browsers where you only
need to define a few fields. But this clearly limits creativity, so
the quickest route to unlocking a lot of content is to go the HTML
route. I don't think that extending the Layar JSON API definition in a
proprietary (though open) way would make it that easy for content
publishers to create the 'flat' billboards. The HTML/CSS box model
fits such a scenario perfectly.
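To make this concrete, here is a minimal sketch of what such a 'flat'
billboard could look like as an ordinary HTML/CSS box that the AR
browser textures onto a plane anchored at a POI (the class name,
sizes, and content are my own illustration, not an existing Layar
format):

```html
<!-- Hypothetical AR billboard: a plain HTML/CSS box the browser
     would render to a texture and place at the POI's location. -->
<div class="ar-billboard" style="width: 320px; height: 180px;
     background: #fff; border-radius: 8px; padding: 12px;
     font-family: sans-serif;">
  <h1 style="font-size: 18px; margin: 0;">Caf&eacute; de la Place</h1>
  <p style="font-size: 13px;">Open until 23:00 &middot; 50 m ahead</p>
</div>
```

Any web developer could author this today, and existing widgets drop
straight in.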
What do you think?
Dirk Groten | CTO Layar | +31 623 08 0177
Sent from my iPhone
On 3 feb. 2011, at 22:43, Thomas Wrobel <darkflame at gmail.com> wrote:
> I'd just like to chip in and apologize for not being able to follow
> up on the conversation here earlier.
> Fortunately it seems Jens understood my concerns well and argued
> them on my behalf.
> Just one thing I would like to address, though, is the ease of
> content creation. I do think content creation should, in some
> regards, be relatively simple, but with AR there are at least two
> distinct types of content creation, which separates it somewhat
> from the 2D web:
> 1. Basic "inline" data which is small and doesn't need external
> files of any sort. The most obvious use of this would be a floating
> billboard - a text annotation to the world. Clearly it would be a
> waste of bandwidth to model this separately as a 3D file and load
> it in. All that's essentially needed is the text, the location
> information, and possibly a few other fields.
> I think this sort of AR content should be as simple as possible -
> preferably even simpler than making a webpage is today. That is,
> simpler than HTML.
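> As a sketch, the whole "inline billboard" could be nothing more
> than a handful of fields. (The field names below are made up for
> illustration; they don't come from any existing spec.)
>
> ```json
> {
>   "type": "billboard",
>   "text": "Jardin de Ville - city park",
>   "lat": 45.19259,
>   "lon": 5.72620,
>   "ele": 217.9
> }
> ```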
> 2. More elaborate meshes, modeled in Blender, Max, etc., that are
> positioned in world-space but have separate files storing their
> actual mesh data. I think that while the positioning should be as
> simple as above, the data itself isn't ever going to be
> particularly easy to create, and will always need a different
> skillset from HTML (although I daresay "Google SketchUp"-like
> solutions might emerge to make 3D creation as simple as possible).
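> In this second case the positioning record could stay just as
> small as in the first, with the heavy data kept in an external
> file. (Again, the element and attribute names here are invented
> for illustration only.)
>
> ```xml
> <object lat="45.19259" lon="5.72620" ele="217.9" scale="1.0">
>   <!-- Mesh authored in Blender/Max and exported separately -->
>   <mesh src="http://example.com/models/fountain.dae"/>
> </object>
> ```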
> I personally think that HTML formatting doesn't offer advantages
> for either of these creation scenarios.
> Where HTML formatting would be strong is in a third scenario:
> complex 3D scenes coded by hand from primitives (rather than by 3D
> software). This would be somewhat akin to the structure of a
> webpage, in that elements could be positioned relative to, or
> within, each other.
> However, I'm not sure I see anything useful that can be done in
> this scenario. I think the majority of content will fall into the
> first two.
> I do think we shouldn't reinvent the wheel, however, which is why
> I am in favour of borrowing elements from CSS where they seem to
> fit. ("left: 50%" might not make much sense in 3D space, but font
> handling and styling certainly will still be applicable.)
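> For example, the font-related CSS properties would carry over to a
> 3D annotation unchanged, while the 2D layout properties would not.
> (This is a sketch of the idea, not an existing AR stylesheet
> dialect.)
>
> ```css
> /* Plausible reuse of CSS for an AR text annotation: typography
>    applies as-is; 2D positioning like "left: 50%" does not. */
> .annotation {
>   font-family: "Helvetica", sans-serif;
>   font-size: 14pt;
>   color: #ffcc00;
>   text-shadow: 1px 1px 2px #000;
> }
> ```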
> Reviews of anything, by anyone;
> Please try out my new site and give feedback :)
> On 3 February 2011 05:40, Admin at simulation3d.biz <admin at simulation3d.biz> wrote:
>> Hi Philipp, VRsonic in DC do that; I can connect you.
>> Have a nice day!
>> Thank you
>> Yohan Baillot
>> Augmented Reality consulting
>> yohan at simulation3d.biz
>> (425) BAILLOT
>> On Feb 2, 2011, at 1:22 PM, Philipp Slusallek <slusallek at cs.uni-saarland.de> wrote:
>>> Hi Jacques,
>>> This sounds extremely interesting indeed. A student tried to get
>>> some sound rendering into 3D a while ago, which was OK, but we are
>>> not really the experts on this. It would be great to join forces
>>> here. I have started to look into A2ML but need a bit more time to
>>> really understand the approach. Associating audio with 3D objects
>>> and having an audio renderer compute the reverberations in a
>>> virtual environment would be just great. We do a lot of realtime
>>> ray tracing (which is why our current demonstrators use it for
>>> rendering as well), applying it also to global lighting
>>> simulations, and it would be interesting to apply it to sound
>>> simulations as well.
>>> Am 02.02.2011 18:08, schrieb jacques lemordant:
>>>> Hi Philipp,
>>>> I was supporting XML3D (against X3D) at the last AR Standards meeting in Seoul.
>>>>> The goal of XML3D is to embed declarative 3D scene descriptions directly
>>>>> into any HTML document via a couple of new HTML elements (just like SVG
>>>>> for 2D in HTML5 today). Essentially, we want to enable any Web developer
>>>>> to easily start using 3D graphics in any project without leaving his/her
>>>>> common programming environment.
>>>> We have the same goal with 3D interactive audio and have defined an XML format called A2ML for it.
>>>> It's very similar to SVG, with embedded SMIL for synchronization.
>>>> It would be nice to put it in HTML5, as with SVG.
>>>> We have an A2ML sound engine running on iOS and an iOS AR browser with content in HTML5 and A2ML.
>>>> It's based on WebKit and is able to read an XML POI format.
>>>> (So we could probably test XML3D on the iPhone rather easily in the context of an AR application.)
>>>> <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
>>>> <poap:Poi rdf:ID="GVAjardinDeVille">
>>>> <poap:name>Jardin de Ville</poap:name>
>>>> <poap:coordinates lat="45.1925890" lon="5.726197" ele="217.861"/>
>>>> <foaf:maker rdf:resource="http://wam.inrialpes.fr/rdf/foaf/Audrey-Colbrant.rdf"/>
>>>> <poap:update timestamp="2010-11-01T18:17:44Z"/>
>>>> <poap:panoramic rdf:resource="./resource/panorama/jardinDeVille.jpg" type="hemispherical"/>
>>>> <poap:content doctype="html5" rdf:resource="./resource/content/jardinDeVille.html"/>
>>>> <poap:audio rdf:resource="./resource/atmosphere/jardinVilleAtmosphere.mp3"/>
>>>> <poap:graphics rdf:resource="./resource/atmosphere/jardinVilleAtmosphere.xml3d"/>
>>>> <poap:triggering radius="5.5" />
>>>> <poap:visibility radius="4.5" />
>>>> <poap:channel category="parc"/>
>>>> </poap:Poi>
>>>> </rdf:RDF>
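>>>> If XML3D can be embedded in the HTML5 POI content page, the
>>>> graphics resource above might reduce to something like the
>>>> following. (This is only a sketch based on published XML3D
>>>> examples; the exact element names and namespace are an
>>>> assumption on my part.)
>>>>
>>>> ```html
>>>> <!-- Inside jardinDeVille.html -->
>>>> <xml3d xmlns="http://www.xml3d.org/2009/xml3d"
>>>>        style="width: 480px; height: 320px;">
>>>>   <view position="0 0 10"/>
>>>>   <group>
>>>>     <mesh src="./resource/atmosphere/jardinVilleAtmosphere.xml3d"/>
>>>>   </group>
>>>> </xml3d>
>>>> ```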
>>>>> XML3D has been presented to the W3C at the last TPAC meeting in Lyon and
>>>>> we have been asked to start a W3C Incubator group preparing possible
>>>>> standardization in this area.
>>>> A2ML was presented at the last TPAC meeting with a live demo of an ARA application (guidance of visually impaired people in Grenoble, with a remote console in Lyon).
>>>> We hope to have something like A2ML standardized one day by the W3C Audio Working Group.
>>> Discussion mailing list
>>> Discussion at arstandards.org