MUFF WIGGLER Forum Index -> Video Synthesis

rendering abstract geometries in a module
johnnywoods
to break off from the other thread....

creatorlars wrote:
I don't mean to divert the discussion, but I am curious: what sorts of features/workflow would you like to see in a digital module designed for rendering abstract geometries? We still plan to do a reprogrammable frame buffer module (our first frame buffer module, the decoder/synchronizer, has been in development for a long time and is getting closer to completion -- so we're getting near that point.) But the feature set for a more general-purpose one hasn't really been discussed yet. What kind of stuff do you want to do?


I'm very very interested in this. I used to do a lot of Processing stuff along similar lines, and have used Maya scripting to create generative geometry. The idea of having this in a module is beyond droolworthy.

I think the best answer is to create a library of objects (assuming the language for the RPFB is object-oriented) which can be assembled with a minimum of coding to create personalized programs -- sort of like what Processing has done with Java, or openFrameworks with C++. Granted, this is no small task, but perhaps it could be a community effort?

I'm thinking simple things, like an object which draws a circle, a line between two points (3D points would be even better), a simple text generator, etc. Of course, over time, if things like physics libraries, flocking behaviours, etc. could be implemented, that would be badasssss!
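
For example, here's a minimal sketch of what the building blocks of such a library might look like in Processing (class names and scope are purely illustrative, not from any real module firmware):

Code:
// A minimal sketch of an OOP drawable library, Processing-style.
// All class names here are illustrative.
abstract class Drawable {
  abstract void draw();
}

class Circle extends Drawable {
  float x, y, r;
  Circle(float x, float y, float r) { this.x = x; this.y = y; this.r = r; }
  void draw() { ellipse(x, y, r * 2, r * 2); }
}

class Line3D extends Drawable {
  PVector a, b;  // two 3D endpoints
  Line3D(PVector a, PVector b) { this.a = a; this.b = b; }
  void draw() { line(a.x, a.y, a.z, b.x, b.y, b.z); }
}

ArrayList<Drawable> scene = new ArrayList<Drawable>();

void setup() {
  size(640, 480, P3D);
  scene.add(new Circle(320, 240, 50));
  scene.add(new Line3D(new PVector(0, 0, -100), new PVector(640, 480, 100)));
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  for (Drawable d : scene) d.draw();
}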
barto
This sounds pretty interesting. I don't know any scripting, but I do 3D modeling and would love to somehow get some geometry to interact with video. I'm working on a sound-reactive piece with someone in Processing, but I'm rendering out my 3D geometry as 2D sprites; it would be way cooler to have live 3D elements. I'd be glad to lend any 3D modeling assistance.
snufkin
I would love that, guys.

Another idea would be a module (or a program within a module) with this feature (I feel it would not emulate but echo oscillographics): it would create a single point whose X and Y (and possibly even Z) position could be voltage controlled, with a point "dwell" control to allow the user to either draw lines or just create movement with comet trails.


Something that could achieve functionality like this would be amazing.
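
Something like this could be mocked up in Processing, with oscillators standing in for the X/Y CV inputs and a translucent wash acting as the dwell control (all values are arbitrary):

Code:
// Mock-up of a voltage-controlled point with a "dwell"/persistence control.
// The sine/cosine pair stands in for X and Y CV inputs; 'dwell' sets trail length.
float dwell = 20;  // low = long comet trails, 255 = no persistence

void setup() {
  size(640, 480);
  background(0);
}

void draw() {
  // Instead of clearing, wash the frame with translucent black so old
  // positions fade out gradually, leaving a comet trail.
  noStroke();
  fill(0, dwell);
  rect(0, 0, width, height);

  // Fake CV inputs: two detuned oscillators driving X and Y.
  float t = millis() * 0.001;
  float x = width  * 0.5 + 200 * sin(t * 2.1);
  float y = height * 0.5 + 150 * cos(t * 3.3);

  fill(255);
  ellipse(x, y, 4, 4);
}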
lizlarsen
It's interesting to consider the different possibilities here.
While it could all be implemented in an integrated fashion, as programs inside a reprogrammable frame buffer module, there are several modular components that are interesting to think about in separate terms and "what ifs"...

1) Vector to Raster Converter. At its core, it would take X, Y & Z inputs, sample them at high speed, and convert them to a raster image. This would basically be an alternative to using a vector monitor or oscilloscope and then rescanning it with a camera. It could be further enhanced to do color by substituting RGB inputs for Z, and then adding things like controls over vector persistence, the width and focus of the line, and so on. (There's a rough software mock-up of this below point 2.)

2) Things to generate XYZ information! In the modular context, this could be VCOs, pattern generators, or even video sources -- in the same way that they are used to generate raster images directly. Beyond the simpler technique of mixing and combining VCOs, etc. to create these signals, this is also where the 3D rendering aspect comes in, I think (doing things like the clip above that snufkin posted, or displaying an object in 3D space.) A 3D renderer would output XYZ and effectively be analogous to what a "clip player" module is to raster video, but storing 3D objects instead of video clips.
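
Here's a rough software mock-up of both points, just to make the idea concrete: the sine functions stand in for XYZ-generating modules, and the slowly fading raster stands in for the vector-to-raster conversion (sample counts and decay constants are arbitrary):

Code:
// High-speed "oscillators" generate X/Y/Z; each sample is plotted into a
// raster that decays slowly, imitating vector persistence.
int samplesPerFrame = 4000;  // stands in for the converter's sample rate
float persistence = 10;      // higher = faster "phosphor" decay

void setup() {
  size(640, 480);
  background(0);
}

void draw() {
  // Decay the existing raster instead of clearing it.
  noStroke();
  fill(0, persistence);
  rect(0, 0, width, height);

  float t0 = millis() * 0.001;
  for (int i = 0; i < samplesPerFrame; i++) {
    float t = t0 + i * 0.00005;                    // sub-frame time step
    float x = width  * 0.5 + 220 * sin(t * 61.0);  // X "VCO"
    float y = height * 0.5 + 180 * sin(t * 83.0);  // Y "VCO"
    float z = 128 + 127 * sin(t * 7.0);            // Z -> beam brightness
    stroke(z);
    point(x, y);
  }
}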

Anyway, like I said, while the above two concepts could be addressed in an integrated way, I like the idea of thinking about them separately, in a modular context. Think about what you could do by applying analogue processing modules to the XYZ signals coming out of a "3D Object Renderer" module before sending them on to a "Vector to Raster Converter" module, and then afterwards taking the raster image and colorizing it, etc.!

Talking theoretically, what would a "3D Object Renderer" look like as a module? An SD card slot to transfer over 3D objects in various formats? High-speed voltage control over geometric manipulation and placement of the object? In an extreme example, I guess you could have CV inputs for the XYZ position of each vector point in the model independently! Or, more realistically, maybe it's interesting to think about being able to select groups of vector points to modulate independently, or having the animation characteristics of a rendered object defined beforehand and controlled by CV access to different animation variables. (You 3D guys out there, excuse my lack of knowledge of all the proper terminology.)
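
As a sketch of that "groups of vector points" idea (not a module spec -- the LFO standing in for the CV, the group selection, and the weights are all made up):

Code:
// Sketch of the "3D Object Renderer" idea: one group of a cube's vertices is
// bound to a CV input, each vertex with its own scale weight.
float[][] verts = {
  {-1,-1,-1},{1,-1,-1},{1,1,-1},{-1,1,-1},
  {-1,-1, 1},{1,-1, 1},{1,1, 1},{-1,1, 1}
};
int[][] edges = {
  {0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},
  {0,4},{1,5},{2,6},{3,7}
};
int[]   group   = {4, 5, 6, 7};          // vertex group bound to the CV
float[] weights = {0.8, 0.2, 0.5, 1.0};  // per-point CV scaling

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);
  float cv1 = sin(millis() * 0.002);  // stand-in for a CV input

  // Copy the rest positions, then displace the bound group along X.
  float[][] v = new float[verts.length][3];
  for (int i = 0; i < verts.length; i++) {
    v[i][0] = verts[i][0]; v[i][1] = verts[i][1]; v[i][2] = verts[i][2];
  }
  for (int g = 0; g < group.length; g++) {
    v[group[g]][0] += cv1 * weights[g];
  }

  translate(width/2, height/2);
  rotateY(millis() * 0.0007);
  stroke(255);
  for (int[] e : edges) {
    line(v[e[0]][0] * 120, v[e[0]][1] * 120, v[e[0]][2] * 120,
         v[e[1]][0] * 120, v[e[1]][1] * 120, v[e[1]][2] * 120);
  }
}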

Anyway, this is very interesting stuff to think about.
Veqtor
I quite like this structure:

* One module that puts out basic geometric shapes and objects as a matrix of coordinates (mapped to a single video frame) that can be processed.
* One module that decodes the coordinates, textures, etc.

Being able to process coordinates is great for abstract geometry, since it really doesn't limit what you can do with them.

Combine a module like this with a multi-dimensional Perlin noise generator, some video logic, etc., and you could end up with something like this:

I believe he's freezing geometry frames and fading between them. It's also quite easy to achieve Win95-screensaver-like graphics with this configuration in wireframe mode.
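
In Processing terms, that kind of coordinate-matrix processing might look roughly like this, displacing each vertex of a wireframe grid with the built-in multi-dimensional noise():

Code:
// Treating a shape as a matrix of coordinates and processing it: displace
// each vertex of a wireframe grid with 3D Perlin noise (Processing's noise()).
int cols = 24, rows = 24;

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  float t = millis() * 0.0004;
  for (int j = 0; j < rows; j++) {
    beginShape();
    for (int i = 0; i < cols; i++) {
      float x = map(i, 0, cols - 1, 80, width - 80);
      float y = map(j, 0, rows - 1, 80, height - 80);
      // Multi-dimensional noise is the "processing" applied to the matrix.
      float z = (noise(i * 0.15, j * 0.15, t) - 0.5) * 300;
      vertex(x, y, z);
    }
    endShape();
  }
}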

Which brings me to another thing: it would be great if you could toggle between three modes for the back and front rendering: lines, points, and polygons.

It would be really great if the 3D render module also put out a depth map. You could use it with comparators or as a crossfade CV. Maybe an alpha channel could be put out as well?
Also, blend modes are important; I really like semi-translucency, etc.
lizlarsen
Interesting. Yeah, alpha and depth maps should be really easy, as well as different types of blending. The Reprogrammable Frame Buffer concept includes several video-rate outputs and inputs simultaneously. So one thing to try (in the case that the module was putting out raster graphics) would be to have each output jack put out a different segment of the model, and then mix/key the elements using the analogue modules.

I want to be able to do this, with a trigger input for each mouth! lol
Veqtor
So, as I was saying: in Jitter, simple shapes can be represented as 3-plane matrices, so almost anything can be used to distort these shapes, which is cool if you want to make abstract and/or glitchy geometry. But for other objects, we need something else:
If we were to devise a special 3D format, where points could be pre-configured to be displaced by CV when making the object -- say, the mouths would have certain points bound to CV inputs 1, 2, and 3 (for example) -- then you could CV the mouths!

It would be wise to make a generator template for the objects in Processing, where the variables that correspond to the CV inputs are pre-defined; the user could work with the standard CV variables and just toggle a boolean to export an object instead of previewing it.
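
A skeleton of what that template might look like (the cv variables, the mouth geometry, and the exportObject flag are all placeholders, not a real format):

Code:
// Skeleton of a generator template: cv1..cv3 stand in for the module's CV
// inputs while previewing; flipping exportObject would write the object out
// in the (hypothetical) CV-aware 3D format instead. All names are placeholders.
boolean exportObject = false;
float cv1 = 0.5, cv2 = 0.5, cv3 = 0.5;

void setup() {
  size(640, 480, P3D);
}

void draw() {
  // Preview mode: fake the CVs with slow LFOs (or bind them however you like).
  cv1 = 0.5 + 0.5 * sin(millis() * 0.001);
  cv2 = 0.5 + 0.5 * sin(millis() * 0.0013);
  cv3 = 0.5 + 0.5 * sin(millis() * 0.0017);

  background(0);
  stroke(255);
  noFill();
  translate(width/2, height/2);
  // A "mouth": an ellipse whose opening is bound to cv1, width to cv2.
  float open = 20 + cv1 * 120;
  float wide = 100 + cv2 * 150;
  ellipse(0, 0, wide, open);

  if (exportObject) {
    // Placeholder: here the template would serialize the geometry plus its
    // CV bindings to the module's object format instead of previewing.
    println("export not implemented in this sketch");
    exportObject = false;
  }
}

void keyPressed() {
  exportObject = true;  // stand-in for the template's export toggle
}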
lizlarsen
Quote:
points could be pre-configured to be displaced by CV when making the object -- say, the mouths would have certain points bound to CV inputs 1, 2, and 3 (for example)


Yeah, this sounds like the way to do it. I could see there being a whole scaling scheme too: like point #1's X is affected by CV1*0.8, point #2's by CV1*0.2, etc.
I mean, basically like anything you'd do in Processing, but with access to several video-rate CV inputs (and since they are video-rate, they could be actual video images too), used as variables inside the program.
Veqtor
Well, the program would only serve as a prototyping environment; what's nice is that every user could bind the variables to LFOs or such and save that as a template.

Scaling is a good idea; I'd thought of something similar.

Also it could be:

point #1:
X = pos + (cv1 * 0.3) - (cv2 * -0.4)
Y = pos + (cv3 * -3)
Z = pos

Perhaps you could ditch the computer by providing a USB keyboard input and an on-screen programming environment... (with the geometry in the background while programming)

Also, crazy idea: what if we could draw vertices using high-speed oscillators, like a vectorscope but in 3D -- the scope would have three inputs, X, Y, and Z, and also R, G, B, and alpha inputs.

With some post-processing, like glow effects, it could look really awesome.
lizlarsen
Quote:
What if we could draw vertices using high-speed oscillators, like a vectorscope but in 3D -- the scope would have three inputs, X, Y, and Z, and also R, G, B, and alpha inputs.


Yep! That's the power of having inputs sampled at video rates. You can use video signals or high-speed oscillators as modulation sources (which is basically what happens with the Rutt-Etra-type rescanning effect, when the Z signal -- the video -- gets added back into X or Y.)
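
For anyone who wants to play with that rescanning idea in software, here's a minimal mock-up; a procedural gradient stands in for the video input so the sketch is self-contained:

Code:
// Rutt-Etra-style rescan: brightness (the Z signal) fed back into Y.
// A procedural gradient stands in for a video input frame.
PImage frame;

void setup() {
  size(640, 480);
  frame = createImage(160, 120, RGB);
  frame.loadPixels();
  for (int y = 0; y < frame.height; y++)
    for (int x = 0; x < frame.width; x++)
      frame.pixels[y * frame.width + x] =
        color(128 + 127 * sin(x * 0.1) * cos(y * 0.13));
  frame.updatePixels();
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  frame.loadPixels();
  float sx = float(width) / frame.width;
  float sy = float(height) / frame.height;
  for (int y = 0; y < frame.height; y += 4) {
    beginShape();
    for (int x = 0; x < frame.width; x++) {
      float z = brightness(frame.pixels[y * frame.width + x]);
      // Z (brightness) subtracted from Y: bright areas lift the scanline.
      vertex(x * sx, y * sy - z * 0.4);
    }
    endShape();
  }
}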

Quote:
Perhaps you could ditch the computer by providing a USB keyboard input and an on-screen programming environment... (with the geometry in the background while programming)


That's possible! I'm not sure what the interface for this module will look like yet. Probably something like the TipTop Z-DSP.

One idea I've had for a while, too, is to just do a Parallax Propeller interface module that runs the Propeller chip off LZX sync signals, with a custom video driver for it. There's a ton of cool stuff you can do with it:
http://www.parallax.com/propeller/
It wouldn't do video-rate input sampling or processing, but as far as modulation and vector graphics generation go, there are lots of cool possibilities.
Veqtor
I don't know a lot about GPU rendering pipelines and such things, but given that the Milkymist (with its quite limited graphics) is based on an FPGA, I think the Propeller might be a little bit too weak to do 3D rendering...

This is what I think is the heaviest gfx it can do:
http://www.youtube.com/watch?v=Pc_Ujd6y71c

But FPGAs can do this:
http://www.youtube.com/watch?v=ZX_7YT05IGA

But some kind of Tegra-based solution would be made of win, of course:

http://www.toradex.com/Products/Colibri/Modules/Colibri-T20-512
http://www.avionic-design.de/en/products/nvidia-tegra-tamonten-system-en/nvidia-tegra-tamonten-pb-en.html

The Colibri is €149 in quantities of a hundred, which is a bit much, but:
http://www.youtube.com/watch?v=vr29BoeHt74

A Tegra-based system could run a stripped-down Android system.
lizlarsen
Yes, the Propeller solution would be fun and would make for a very inexpensive module -- basically a couple of CV inputs, a USB connector for reprogramming, some knobs and buttons, and RGB outputs. Plus it's easy to code for, with lots of existing material. But it definitely wouldn't be a full-resolution 3D graphics engine; it'd be more equivalent, capability-wise, to the lower-priced AVR- and PIC-based visualizers out there, like my Bitvision, the HSS3i, the Critter & Guitari stuff, etc. But it would be reprogrammable, which is cool. It's not a high priority; we're more interested in the FPGA stuff right now.

The Nvidia stuff looks awesome! I'll share the link with my partner. This kind of development is a foreign world for me too, but Ed has been developing this kind of stuff for a long time. There's also the BeagleBoard, which I think can take video in and out as well. At least output.
Veqtor
creatorlars wrote:
There's also the BeagleBoard, which I think can take video in and out as well. At least output.


The BeagleBoard could be good! This is very exciting; given the option to extend a video modular with this kind of cutting-edge hardware, video modulars could once again become the top of the line in real-time video processing systems!
w00t
lizlarsen
Speaking of all this, the full Milkymist device finally went on sale last week!
http://www.milkymist.org/mmsoc.html

The Milkymist SoC would be another option for a module. We'd just want something that can do NTSC/PAL timing and output RGB, as well as having genlock capabilities, for full integration into an LZX system.

Here are some notes from Ed on other FPGA-based devices:

Quote:
In my research, there have been a few attempts at rasterizers and rendering pipelines in FPGAs:

Manticore - 2D rasterizer.
MilkyMist (Texture Mapper Unit) - TMU - maps quads to quads - good for warping.
OGP - Open Graphics Project - I don't know how far that got. They released a card, but I don't know about the code.
Mesa3D - open-source OpenGL

Apart from the Nvidia Tegra, there is also the Freescale i.MX.
fletch
I had fun doing a basic audio visualizer using an Altera DE2 FPGA development board for my final project at school.

I think the next step would be to program a smaller version and mount it behind a euro panel!
Something like this:

de0-nano.terasic.com
Veqtor
So, I was thinking: with the possibility of 3D modules coming up, wouldn't it be great if they were to output depth maps? That is, depth information where white is as near the "camera" as possible and black is the furthest away.

Applications:

1) Combining several renders - By comparing two depth maps, you can determine which render would obscure the other. That is, the result of the "whichever is whiter" comparison is sent to a switch, and then you can combine two different renders. (See the sketch below this list.)

2) Depth-of-field blur - Use the depth map to simulate camera aperture focus by controlling a (possibly voltage-controlled) blur effect with the depth map.

3) Crazy shit - Who knows what we could use this for; experimentation is what will reveal some really cool stuff!!!
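
Here's a quick software illustration of point 1 (the four test images are generated procedurally; in hardware this per-pixel comparison would be the comparator and switch):

Code:
// Depth-map compositing: for each pixel, compare two depth maps and keep the
// color from whichever render is nearer (whiter depth = nearer the camera).
PImage colA, colB, depA, depB;

void setup() {
  size(320, 240);
  colA = createImage(width, height, RGB);
  colB = createImage(width, height, RGB);
  depA = createImage(width, height, RGB);
  depB = createImage(width, height, RGB);
  colA.loadPixels(); colB.loadPixels(); depA.loadPixels(); depB.loadPixels();
  // Render A: red, depth rising left to right. Render B: blue, top to bottom.
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int i = y * width + x;
      colA.pixels[i] = color(255, 40, 40);
      colB.pixels[i] = color(40, 40, 255);
      depA.pixels[i] = color(255.0 * x / width);
      depB.pixels[i] = color(255.0 * y / height);
    }
  }
  colA.updatePixels(); colB.updatePixels();
  depA.updatePixels(); depB.updatePixels();
}

void draw() {
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    // The comparator: whiter (nearer) depth wins the switch.
    pixels[i] = brightness(depA.pixels[i]) > brightness(depB.pixels[i])
              ? colA.pixels[i] : colB.pixels[i];
  }
  updatePixels();
}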
barto
^ Z-depth
daverj
I've used depth maps with my laser engraver to engrave 3D into pieces of wood from images created in LightWave.
Veqtor
Yes, my idea was: aside from the render outputs (with shadows, etc.), perhaps we could have a separate depth out (Z out). Also, this made me think it could be interesting to have a lot of different outputs on a module like this.

Ideas?
lizlarsen
Yeah, Z-depth is a great idea! Layering things at different depths in the image, using a single Z-depth signal, a priority scheme, and comparators, could be patched up with multiple Triple Video Fader & Key Generator modules. I've been meaning to document a patch like this (i.e., four "images" that have priority over each other based on a fifth "Z-depth" image.) So the idea of using Z-depth to map different layers of an image is something we can apply already, in the analogue patching, but generating Z-depth from a digital 3D object render, as a brightness map, makes this technique even more useful.

Lots of different outs is a good idea.

My partner Ed has the Milkymist now. Adapting it with an LZX output driver is probably possible; we're going to investigate that.
barto
Yes! As far as render elements go, there are so many, mostly to allow lots of control in post-production, so you can fine-tune different elements. Along with stuff like Z-depth, perhaps you could have just the specular highlights, reflections, etc., so that a certain patch or effect would only occur on those render elements.
lizlarsen
I think the best way to accomplish this in a modular format is to have a selection of arbitrary outputs, and then a large variety of different display "modes" which could route different aspects of the render to different analogue video outputs. If this is a function of the reprogrammable frame buffer concept, then that leaves it wide open for new sorts of output parameters and modes to be defined.
Matos
The Milkymist seems like a great standalone digital building block. Having it work with the LZX would be amazing.
barto
I was thinking about this the other day when I was tweaking some animations... In the 3D world, you have animation curves that define how an object changes (Y axis) over time (X axis). Here's kind of what it looks like:


This one has a bit of a sine wave going on.

Not sure how this would be possible, but what if you could assign a waveform to the animation curve? You could take a primitive piece of geometry, like a cube, apply some deformation modifier (bend, twist, taper, ripple), and then somehow apply a waveform as its animation curve.

It would be cool to experiment with someone who has an oscilloscope: create a basic waveform, then try to manually recreate it as an animation curve in 3ds Max, to see how the video corresponds to the animating geometry.
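
As a rough software illustration of the waveform-as-animation-curve idea (the sine curve and the twist deformation are arbitrary choices):

Code:
// Mock-up of "assign a waveform to the animation curve": a sine waveform
// drives the twist angle of a stack of box slices over time.
void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);
  stroke(255);
  noFill();
  translate(width/2, height/2);
  // The animation curve: value (y axis) over time (x axis), here a sine.
  float curve = sin(millis() * 0.002);
  int slices = 20;
  for (int i = 0; i < slices; i++) {
    pushMatrix();
    translate(0, map(i, 0, slices - 1, -150, 150));
    // Twist deformation: rotation grows along the object's height,
    // scaled by the waveform's current value.
    rotateY(curve * i * 0.2);
    box(120, 300.0 / slices, 120);
    popMatrix();
  }
}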
ndkent
Time to resurrect the analog computer.