Theo Verelst Diary Page

Tue Feb 26 2001, 1:56 AM

I've decided, following a good example, to write some diary pages with thoughts and events.

Oh, in case anybody fails to understand, I'd like to remind them that these pages are copyrighted, and that everything found here may not be redistributed in any other way than over this direct link without my prior consent. That includes family, christianity, and other cheats. The simple reason is that it may well be that some people have been ill informed because they've spread illegal 'copies' of my materials, even with modifications. Apart from my moral judgement, that is illegal, and will be treated as such by me. Make as many references to these pages as you like, make hardcopies, but only of the whole page, including the html references, and without changing an iota or tittle...

And if not? I won't hesitate to use legal means to correct wrong that may be done otherwise. And I am serious. I usually am. I'm not sure I could get 'attempt at grave emotional assault' out of it, but infringement of copyright rules is serious enough. And Jesus called upon us to respect the authorities of state, so christians would of course never do such a thing. Lying, imagine that.



I've made a prototype proxy server, in Tcl, which is worth sharing. I think I'll make a list of activities, and I probably will do a curriculum update. There were some more things; I'll think about them later.

I just thought about something I shouldn't forget to write about again: the Connection Machine. A 65,000-processor-node machine with a total of 32 megabytes of memory, about 1 gigabyte per second overall effective inter-processor bandwidth, a summed processing power of about 50,000 MIPS, which is about 50 contemporary Pentiums, and a power use of 12 horsepower. Huh? Made and operational around 1984 (!) already, maybe even earlier. Well well, serious computer science. Or let's say electrical engineering work in that area, with serious science in it. Yes ma'am. 65,000 communicating nodes, a programming language to do parallel operations with associative Lisp structures, and then generalisations on the behaviour of the whole thing. Nice. Not for nothing a specially selected PhD thesis from MIT at the time. Hah, that at least is serious stuff and no insult to one's senses and intelligence.

Graphical pms musical string simulator program

For quite some months already, I have had a working prototype of the string simulator program with a 3-dimensional interface running under Windows (95/98/2000, and I guess XP), based on the extensive OpenGL graphics library. Basically, it can serve as the basis for a program which can be easily operated by users who don't have the time or skills to go into all the command line options needed to make a string simulation. The window shows a few 3D objects, each of which 'triggers' a different string when clicked on with the mouse. Nothing really to put in a user package, just some basic objects as a test scene, but the principle works, which means a 3D program interacts with the program parts driving the sound card, and the (complicated) string simulator runs through it all, depending on machine power. Maybe one medium-length string on a 100 MHz Pentium, and one more for every hundred MHz.
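The actual simulator is more elaborate than this, and I'm not claiming it is the same algorithm, but the family of plucked-string physical models it belongs to can be illustrated with the classic Karplus-Strong loop: a delay line of noise, low-pass filtered and fed back, decays into a string-like tone. A minimal Python sketch (function name and parameters are my own, for illustration only):

```python
import random

def pluck(freq_hz, sample_rate=44100, seconds=1.0, damping=0.996):
    """Minimal Karplus-Strong plucked string: the delay-line length sets
    the pitch, the averaged feedback acts as a low-pass and makes the
    tone decay like a real string."""
    n = int(sample_rate / freq_hz)                        # delay-line length
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]  # the 'pluck' (noise burst)
    out = []
    for _ in range(int(sample_rate * seconds)):
        s = line.pop(0)
        # two-tap average with damping: low-pass feedback -> decaying string tone
        line.append(damping * 0.5 * (s + line[0]))
        out.append(s)
    return out

samples = pluck(440.0, seconds=0.5)
print(len(samples))  # 22050 samples: half a second at 44.1 kHz
```

The per-sample loop gives an idea why a mid-1990s Pentium could only afford one or a few such strings in real time.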

I've written about it some time ago already; it's the same software, I just realized I didn't put the example on the web yet, and some may want to try it out and see it is for real. I've made a screendump, but where I am now I can't access it, so maybe in some days; I'll also put up the program's executable instead of the sources, together with the cygwin lib in a zip file, for those who aren't afraid of command line parameters, or maybe even want to recompile and change the algorithms, and would like to see what sound it produces and whether the graphics actually run. It's not a tour de force, but getting a decent OpenGL (Mesa) graphics application to run with a portable sound library, in a decent development environment, with a demanding DSP algorithm is not nothing, and at least is a good proof of what I can get to work. Any synth builder investors out there? I need like half a ton to make serious stuff happen, and live.

Anyhow, I also have the glut and 2D library (which isn't perfect) and can compile them with success and reliable results under cygwin, currently I think even from a Linux partition, either on Windows 95 or 2000 Server, and also from a single disc share on multiple machines. I've practiced more than a little with the sound algorithms and basic routines, and even samples, so with all that together it should be a good starting point to produce quite some original audio programs, and with work probably of decent quality. Check out the audio files on some of the older diary pages (right click on the link, 'save as' to disc as type '.wma'). The microprocessor one comes from the computer system I built from not-so-recent parts, not even from a 16-bit sound card, which should run the same algorithms after I port the programs, except for the delay in driving the programs and the absence of custom digital-to-analog electronics in the signal path.

The audio file downloads wrongly in Netscape, it seems; I think Explorer does take the .wma (Windows sound) format correctly as a binary format, or maybe use ftp if that works. I'll make an updated page on the whole synth stuff when I have the chance, and look into the formats, or maybe make a zip file or convert to mpeg, I'll see. It would be good to have a page with all the more recent results on it, including some on the programs I ran on the Windows 3.11 and DOS machines to drive the microprocessor synth in real time. The tape I made some months ago is in the z80synth file, without editing, enhancement or additional processing, just straight as it is, with the machine being played by me in real time on its little baby keyboard, and loaded mostly in real time from a 486 32 MHz machine running a DOS application for additive synthesis with 16 components mixed graphically, and a filter simulation with control sliders and waveform display, which loaded the machine near real time with the resulting samples of 8 bits times 4k. The first part of the recording uses samples produced by physical string modeling, which are played over two oscillators with variable (over the keyboard range) detune, and looped after the initial part, also with stacked transposed samples. The latter part is made running the Z80 sequencer, which can be pitch controlled in real time (and really, too: it responds razor sharp, almost sample accurate, to the keyboard), and which is reloaded with new samples while running.
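The core of the additive synthesis that DOS program did, mixing 16 harmonic components into a wavetable, is just a weighted sum of sine partials. A hedged Python sketch of the idea (the amplitudes are illustrative; only the 16 components and the 4k table size come from the description above):

```python
import math

def additive_table(amplitudes, table_len=4096):
    """One cycle of a waveform as a sum of harmonic sine partials:
    component k has frequency (k+1) times the fundamental and its own
    amplitude, like the 16 graphically mixed sliders."""
    table = []
    for i in range(table_len):
        phase = 2.0 * math.pi * i / table_len
        s = sum(a * math.sin((k + 1) * phase) for k, a in enumerate(amplitudes))
        table.append(s)
    return table

# 16 components with 1/k amplitudes give a sawtooth-like wave
wave = additive_table([1.0 / (k + 1) for k in range(16)])
print(len(wave))  # 4096, a 4k wavetable as in the text
```

Quantizing such a table to 8 bits gives exactly the kind of 8-bit-times-4k sample block the text mentions loading into the synth.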

The Physical Modeling Simulator in its Windows 95+ version can be controlled in real time, minus the audio delay setting, though I didn't give it many controls yet; see some of the older pages for some sound examples.

Inflating software intelligence value, or a proxy server on an A4

For years, let's say following the 'invention' of Java and its starting spread in web browsers around '95, which I remember clearly from browsing the web on unix workstations such as HP702s and Sun Spark VIs and the like, there has been a tendency which in some ways looked parallel to what happened to programs on those platforms: growing sizes of programs and development environments, growth of sets of functions, modules, startup and running times, for simple enough programs and problems.

That effect happened with X programs, for instance, quite some years before, because programs running under X Windows required libraries of graphics routines which were relatively big, so even a simple program using graphics would be relatively big, and in unix there are many programs.

With Java, a virtual machine was 'invented', or let's say a specific one, because in electrical engineering, which is computer science's only source of machines, architectures, operating system basics and implementations, and probably of most sensible programming basics including languages and the way they are implemented, it is not that special in itself to make a computer simulation of a certain 'machine' in software; that is simply a good way to predict and test the behaviour of a computer which is still on the drawing board. The Java one had to run an object oriented language, Java, which requires quite a multimedia library of routines, and a standard object oriented structure, which through interpretation on a fixed virtual machine can run on any computer platform without changing the program.

On top of that, Java is a compiled language, which means a program has to pass through another set of programs, called the compiler, to arrive at the final executable programs, which are fed to the virtual machine. For any serious programmer, with some knowledge of the way things have logically enough evolved and grown, and some understanding of normal logic to put something together that makes sense, there are more than a few idiosyncrasies in the whole idea. And not for nothing, the whole thing is clearly and unambiguously, for the eyes of the world, equated with the main thing programmers think about, engage in, or even mention in exact words when having started a compilation of a large program on a slow or not fast enough computer system: getting and slowly sipping a cup of coffee. Really. Standard language.

Compilation normally makes a program which runs directly on a computer, without an interpretation step; if that is not the target, then normally there is no reason to bother with compilation either. Compilation usually makes one program file after linking, while under Java there are intermediate files after compilation for each source file, which aren't (dynamically or otherwise) linked in the end, but loaded into the interpreter of the virtual machine. Not illogically, but having a compilation and an interpretation and a virtual machine task together makes nothing work the fastest way possible: neither the process of making programs, nor the execution of the programs, nor, at some point, the development cycle, because the interpreted part is never accessed by programmers directly, not even during debugging. And then the whole thing runs an object oriented language on a virtual machine made for it; but then again, I remember looking at the early virtual Java machine, and it is not that special or resourceful or even specially well made for that purpose. It is even quite limited for modern machines, keeping it runnable on small ones, and of course it does let one maintain the one-language, one-program-form idea, where every Java program of a certain version runs on every computer.

At the same time I was already into Tcl/Tk, which was rapidly developing as a scripting and user interface language, especially in university and telecommunication circles; why that is so I don't know, it's just the way it happened. It makes more sense in the traditional way: an interpreted language, currently even with inline compilation, with a powerful user and operating system interface, standard on many platforms, often without having to change a single line of program code, a capable interpreter, and even quite advanced variable and list structure housekeeping and operations. The package is easily obtained, installs on most anything I've had my hands on without sweat, and is quite powerful; as an example, there are drawing programs, word processors, and even excellent web browsers written in Tcl. For building interactive prototypes it is a very suitable environment; ask Ericsson or Cisco and many others.

I've done a block-based program builder with it, a communication tool, web tools, audio and graphics program controls, and a database, which may serve better for medium sized databases than many standard solutions, especially on today's machines, which are hardly used effectively for their technical merits, for instance leaving enough core space for serious and reliable full in-memory database storage, and associative operations and indirections on the data with more flexibility and speed than a disc would permit; and of course it's simpler. Especially on operating systems which have decent process and stream implementations, using multiple child processes and streaming their input and output can be just as powerful as a well used distributed unix system, except with a lot more programming facilities than a unix shell would have, for instance all standard user interface components like menus, buttons, lists and even images. Windows does not qualify, unfortunately: streams are available, but processes with their own (standard) input/output streams don't work right, though maybe on 2000 it is possible to have them, I don't know; only Unix and Linux have that.
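The child-process-with-streams pattern described here, which Tcl gets with `open |cmd r+` on Unix, looks like this as a Python sketch: spawn a child with its own stdin/stdout pipes, stream data in, and read the transformed result back (the upper-casing child is of course just a stand-in filter):

```python
import subprocess
import sys

# Start a child process with its own standard input/output streams,
# feed it text over its stdin pipe, and read the result from its stdout:
# the shell-pipeline pattern the paragraph describes.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = child.communicate("hello from the parent\n")
print(out)  # HELLO FROM THE PARENT
```

Chaining several such children, each reading the previous one's output, gives the distributed-pipeline effect the text compares to a well used unix system.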

Anyhow, while Java programs have hardly any real, general, acceptable or comparative advantage except the standard portable virtual machine and large interface library, and probably some advantage from the object idea, and much of the disadvantages, btw, Tcl programs tend to be compact, readable enough, relatively reliable, very well testable and quite powerful, so I wasn't dissatisfied with the, let's say, Basic-replacement role, and the sort of Lisp/Smalltalk-like qualities quite achievable with the modestly sized package and libraries.

As an example of looking at the problem and a technically at least bearable solution (whoever does set-analysis and creation/deletion bookkeeping on their object oriented programming work plan?), I've made, hopefully to get some stuff off the ground which may also help financially, a rudimentary, limited, but functioning proxy server in a short bit of Tcl.

Here it is:

################################ proxy.tcl ####################################
package require http

#proc httpcallback {n h} {puts $n,$h ; puts sock176 [http::data $h]}
#set sd [http::geturl -command "httpcallback 11 " ]

proc proxysocket {{port 3000}} {
   global serversock
   set serversock [socket -server proxyservsetevent $port]
}

proc proxyservsetevent {s i p} {
   fconfigure $s -encoding binary
   fconfigure $s -translation binary
   fconfigure $s -blocking 0
   fileevent $s readable "proxyservfirstevent $s"
}

proc proxyservfirstevent {s} {
   global in
   gets $s in
   set l1 [split [lindex [split $in \n] 0] " "]
   set command [lindex $l1 0]
   set url     [lindex $l1 1]
   set proto   [lindex $l1 2]
#puts $url
   puts $in
   puts "url=$url"
   if {$url == "http://test"} {puts $s "Test !"; close $s; return}
   switch $command {
      GET {
         set hh [http::geturl $url -command "proxyfeedpage $s"]
         fileevent $s readable "proxyservnextevent $s"
      }
      POST {
         set hh [http::geturl $url -command "proxyfeedpage $s" -querychannel $s]
#        fileevent $s readable "proxyservnextevent $s"
      }
   }
}

proc proxyservnextevent {s} {
   gets $s in
   # further request lines ignored for now
}

proc proxygeturl {s h} {
   # unused stub
}

proc proxyfeedpage {s h} {
   puts $s [http::data $h]
   flush $s
   proxyclosepage $s $h
}

proc proxyclosepage {s h} {
   http::cleanup $h
   close $s
}

proc proxyinit {} {
   package require http
   proxysocket 3000
}

proxyinit
console show

Isn't that something? And it really works, too, not even badly, except it doesn't seem to pass all types of cookies, URL redirections, and some form data methods, so I can't use it to log in to Yahoo mail, and some links don't work or require manual redirection to a new web location. And I need to clean up inactive sockets by closing the other side of a proxy connection when one side ends at inappropriate times, which I've looked into, interaction-procedure-wise.

But overall, I can load my wavelaboratory page in a Netscape browser on Linux, which gets fed through a (also Linux) Squid server, which in turn connects to another machine, a 2000 server, which has this code running in a Tcl program and uses a Windows 2000 ISDN driver to connect to the internet provider and the web. And that works neatly enough for many sites; the hosting office's official web site comes through fine too, no probs, and swiftly enough (considering the ISDN connection, which could be improved on), with pleasant enough response patterns, probably because the proxy doesn't enforce some kind of communication order but follows data as it comes in and flows out.

I'm aware of a few possible improvements, but the main idea is that I use the http library to obtain pages on the basis of their URL, which is obtained through the requests the web browser makes when asking the server for connections. Once the request is made, the http library routines (which are structured and have OO-like qualities, to make clear that area isn't out of the picture, and quite well implemented, I think, which can be checked: Tcl has been open source from the beginning, as I think Java might become soon) download the page or image requested, and once they have been received, they are pushed back to the browser over its socket stream, and the connection ends are both cleaned up. The serving and the http access to the web are completely event driven and non-blocking, that is, they don't get in each other's way, and pass data as soon as it is available; even when the browser, as it usually does, opens more than one (e.g. 4) connections simultaneously, they all receive attention and don't have to wait for each other.
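The Tcl version gets this don't-wait-for-each-other behaviour from fileevent callbacks; the same property can be sketched with Python's asyncio event loop. Here four simulated 'connections' (sleeps standing in for downloads; the names and delay are made up) complete in roughly the time of one, not the sum of all four:

```python
import asyncio
import time

async def fetch(name, delay):
    """Stand-in for one browser connection's download; the await hands
    control back to the event loop instead of blocking the others."""
    await asyncio.sleep(delay)
    return name

async def main():
    t0 = time.monotonic()
    # Four 'simultaneous connections', like a browser opening 4 sockets:
    # gather runs them concurrently on one event loop.
    done = await asyncio.gather(*(fetch(f"conn{i}", 0.1) for i in range(4)))
    return done, time.monotonic() - t0

names, elapsed = asyncio.run(main())
print(names, round(elapsed, 2))  # all four finish in ~0.1 s, not 0.4 s
```

This is the same single-threaded, callback-per-channel model as Tcl's event loop, just with coroutines instead of registered fileevent handlers.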

Needless to say, in Tcl it is not so hard to include all sorts of restraints on top of this flexible basis, such as site checking, replacement schemes, URL shortcuts, and checks based on IP address or user names. The http access is also based on Tcl libraries, so if they aren't to one's liking they can be adapted easily; they're readable enough. I've earlier already experimented with including sockets and a server in bwise, to give the whole thing a graphical interface with network editing.
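To make the restraint idea concrete: before fetching, the proxy can check the requested URL against a host blocklist and the client address against the allowed intranet range. A Python sketch of such checks (the blocklist and subnet are of course hypothetical placeholders):

```python
from ipaddress import ip_address, ip_network
from urllib.parse import urlparse

BLOCKED_HOSTS = {"ads.example.com"}         # hypothetical site blocklist
ALLOWED_NET = ip_network("192.168.0.0/16")  # hypothetical intranet range

def request_allowed(url, client_ip):
    """Return True if this client may fetch this URL through the proxy:
    the host must not be blocked and the client must be on the intranet."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_HOSTS:
        return False
    return ip_address(client_ip) in ALLOWED_NET

print(request_allowed("http://ads.example.com/banner", "192.168.1.5"))  # False
print(request_allowed("http://example.org/page", "192.168.1.5"))        # True
print(request_allowed("http://example.org/page", "10.0.0.1"))           # False
```

In the Tcl proxy the equivalent test would sit right before the http::geturl call, closing the socket when the check fails.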

The Linux machine as it is now sits between two intranets, a small one with the ISDN server, and a larger one with a few hundred machines, which in this way are all in principle internet enabled, sort of through a double safety net: the Squid is quite safe by itself, and this proxy passes only exactly what I want, so the whole thing is quite sound. Samba on the Linux machine serves the large intranet quite easily with, for instance, a partition on another 2000 server on the small net which contains Netscape, so all workstations can in principle browse away easily.

The button pushers

Alias the Guards of the Fusepanel. That's the impression I have of the ideal position of certain computer-controller wannabes. 'We have power'. Gmpf.

The idea that software activation and control of installations and programming facilities is just a means to play magician and maintain the attention and maybe submission of the less informed is repulsive, in my opinion. I wholeheartedly agree with the open source thought, though probably it is not always needed, and I still hold that people must be able to use interesting technology without being insulted, and have reasonable access to it when wanting to use it, and not be confronted with vacuum cleaners which require Shift-Alt-F7 and a secret undocumented code to operate.