Theo Verelst Diary Page

Latest: February 23, 2001

I've decided, after a good example, to write some diary pages with thoughts and events.

Oh, in case anybody fails to understand, I'd like to remind them that these pages are copyrighted, and that everything found here may not be redistributed in any other way than over this direct link without my prior consent. That includes family, christianity, and other cheats. The simple reason is that it may well be that some people have been ill informed because they've spread illegal 'copies' of my materials, even with modifications. Apart from my moral judgement, that is illegal, and will be treated as such by me. Make as many references to these pages as you like, make hardcopies, but only of the whole page, including the html references, and without changing an iota or tittle...

And if not? I won't hesitate to use legal means to correct wrong that may be done otherwise. And I am serious. I usually am. I'm not sure I could get 'attempt at grave emotional assault' out of it, but infringement of copyright rules is serious enough. And Jesus called upon us to respect the authorities of state, so christians would of course never do such a thing. Lying, imagine that.
 

Previous Diary Entries

February 23, 2001

I've tried screen resolutions on an old PC, and then looked at some of the palette possibilities in QBasic and GW-BASIC. Shouldn't have done that.

Computer basics

It has long been known that computer applications divide into database, spreadsheet and word processor parts, with amusement added to the picture in various forms such as drawing, music, all kinds of games, and maybe one or two more areas of interest. I never was into games much. I liked PSION chess for its 3D idea until I had it cracked enough (which was about 15 years ago), plus an occasional 15 minutes of losing another game (I don't think I ever won one); Meteors on the TRS-80 was a fun enough shoot-'em-up for such a machine, Pac-Man was for weenies, Space Invaders was fun to look at, and of course I was into flight simulators when that became possible, preferably graphically. The Atari ST could do that just fine, and I even put the planes down again, more than fun enough.

Sequencing I've compared to word processing in music, which is not an altogether good comparison, but not the worst: it's essential, everyone needs and wants it at some time, and for some it's even the essential computer use. And it is about making something with an art or expression form in it, supported by all kinds of sensible tools, from cut and paste, search and replace, to spell checks, illustrations and thesauruses. When the program works smoothly and intuitively, everyone can agree it exists for a legitimate reason, and many will use it.

Spreadsheets are different. I'm sure there are economists and presentation makers and maybe bookkeepers who use them a lot and wouldn't know how to live without them, but for many computer users I don't think they are of essential value. Make a sheet with numbers that automatically add up. Fine enough, but it isn't clear why it should be a grid, why I couldn't use a calculator in a word processor, and how it all works. I knew an Excel predecessor long ago, well enough to drag out relative and absolute 2D range copies and graph them in no time, and of course I didn't find it all too scientifically challenging, but I made a few graphs, and did find it of interest to automatically generate some tables and make some experiments work.

Databases are the target today, a bit. I am thinking about doing some database C programming just to get that stuff sorted out, and to test around a little bit. Not primarily for experiencing the incredible (it seems to some) mapping mysteries, with their log2 or so implicit categorizing and their poison to gaussian hit chance changes per implicit category; I do that with continuous numbers in coupled oscillators for thousands of time steps when I feel like it. Simply for the legitimate idea of storing and processing data in an organised and more or less associative way.

Basically, data goes into a database in some handy enough way, usually organized per item or set of items, and comes out by associative means with a numerical or alphabetical ordering, maybe multi-level. And that is enough to at least make a phonebook with, invert it when needed, and maybe automatically generate a layout for a printer. And of course when 10000 persons have moved or gotten a new number, the database should change to reflect that, in short enough time, preferably still making sense to users consulting it while the changes are being made. And it should run on the equipment available at the time, preferably efficiently enough that users don't wait longer than they can stand.
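In C terms, the minimal version of that phonebook idea is just a record type and a search over it; a sketch of the concept only, with made-up field sizes:

#include <string.h>

struct entry {                     /* one phonebook record */
   char name[64];
   char number[16];
};

/* find by name; 'inverting' the book would mean searching on number instead */
struct entry *lookup(struct entry *book, int n, char *name)
{
   int i;
   for (i = 0; i < n; i++)
      if (strcmp(book[i].name, name) == 0)
         return &book[i];
   return 0;
}

A linear search like this is fine for one page of numbers; the 10000 moved persons, the orderings and the updates while others are reading are where the real database work starts.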

Is that it? That depends.

Do I know? I could generate extensive address lists in dBase long ago, print them in neat reports, and update them with data generated by other programs, no problem. Associativeness gets one into another ballgame, which makes it more a programming matter. Can I find all objects in an interactive, text and graphics driven 3D graphics object database that cover a certain 3D subspace and are connected with some objects, then in real time automatically change certain properties, generate the corresponding changes in the input files that define them, and render the new database with the radiosity programs connected to it, in interaction time? That's the sort of thing I've been into, and that worked, for, for instance, 100000 (hundred thousand) database entries, not the simplest ones, on a workstation (HP 720, 128 MB, 100 MHz PA-RISC), 5 years ago.

Is that a trick, and is it worth something? A general database may do some things such as retrieving fields and entries matching certain textual criteria, but most databases will not respond to the idea of wanting all synthesizer sounds stored in them that roughly sound the same as a specimen. Databases aren't programmed for such things; they usually deal with texts, addresses for instance, web pages, program source files, those things, and then make logical operations work on them, some math, and sort and associate in various ways over entries and fields, and in some cases allow search criteria and queries to be automatically generated from database content itself.

That means one moves in the direction of programming, and in ways not usually understood in programming terms, judging by various utterances in database land. As often, the technical designs implicit in certain database approaches are mostly based on the existing infrastructure, which programming-wise is not necessarily very rich or efficient, and certainly not necessarily leaning on the applicable knowledge in the various programming disciplines.

As an example, the resolving of associativeness in database engines is made with certain types of data in mind, and can programming-wise be done in various ways, which is never visible to the database 'programmer'. Not that that is bad, but it is limiting, and defines the boundaries of sensible and effective use.

Let's have a look at a UNIX utility known for decades that is related to what one does in database land, but then in the OS realm, called 'grep'. I recently compiled and installed one for programming support use. 'Where is that module where I defined function so and so?' Over millions of bytes of program code, the answer can be there in seconds, using on a command line:

c:\C >  grep myfunc *.c

For unix users, or some like me who like the idea enough, it could also be something like this:

c:\C >  grep myfunc `find /Sources -name '*.c' -print`

My local version, based on this script:

@echo off
rem grtt: run grep with options %3 and pattern %1 on every file matching %2
del tt
rem the for loop does the wildcard expansion the shell won't; hits collect in tt
for %%f in (%2) do grep %3 %1 %%f >> tt

gave this result:

c:\cc\examples> grtt test *.c
File EXTEN.C:
void abstest(void)

int intrtest(char *v1)

int traptest(void)

int asmtest(int pp)

void structest(void)

File MT.C:
   printf("Testing overflow trap\n");

   sprintf(p+100,"Testing binary file write!\n(new line)");

   printf("Testing div 0:");

The script is there because the DOS shell seems not to perform wildcard expansion the way I use it (or at all), which means I needed a script to make the search run over all files in the dir. There are at least a dozen or more .c files in the dir, not big ones, and as expected, search time is maybe two seconds, short enough.

Is it possible to really use this idea? I'm sure there are unix stations humming as I write that use unix shell scripts that do this stuff, and that for instance generate the equivalent of database reports using grep and some line processor programs such as sed or awk, maybe even to generate web pages. Perl didn't just come falling from the sky.

How would a database give one this result? In principle, simply by putting all source files in the database somehow and making a query to the extent of finding all entries with the word 'test' somewhere in the file content field; the results could even be sorted, and it would work. But one would have to keep the database up to date all the time, meaning that for each save operation on some source file, the corresponding database entry would have to be updated, every time again. Normally that would mean the file system grows twice as fast, the save operation must be extended to do this (hard on a word processor used as editor), and the (cpu and disc) time it takes to update the database must be taken for granted.

So there is good reason to tackle such things differently, and remain intelligent about associativeness and programming in the database area. Now let's have a look at the example operating system utility text search program.

GREP

The program basically takes a search pattern as input, such as 'test' or 'myfunction', but also '[t,u]*.???' for all occurrences of the letter t or u followed by any number of characters, a dot and three more characters; pretty much the same syntax most Linux shells understand as file name specifications. The other part of the input specification is a list of files to search, preferably given using wildcard expansion, so a lot of files can be easily referenced.

Then there are the options, to specify how it should grep the expressions from the files. Without options specified it will echo the lines in every file containing the patterns asked for.
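Judging from the flag variables in the main function below (c for counting, n for line numbers, v for inverting the match, f for printing file names), typical option use would look like:

c:\C >  grep -n test mt.c          (matching lines, with line numbers)
c:\C >  grep -c test *.c           (only a count of matching lines per file)
c:\C >  grep -v test mt.c          (the lines that do NOT match)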

Let's have a look at the source code, first only the header and the main function. In C, comments are between /* and */, so the first part is of course only there to make clear who wrote the program and what it is for. Next I omitted defines and global variables, and listed the main function, taking the argument list from the operating system.

/*
 * The  information  in  this  document  is  subject  to  change
 * without  notice  and  should not be construed as a commitment
 * by Digital Equipment Corporation or by DECUS.
 *
 * Neither Digital Equipment Corporation, DECUS, nor the authors
 * assume any responsibility for the use or reliability of  this
 * document or the described software.
 *
 *      Copyright (C) 1980, DECUS
 *
 * General permission to copy or modify, but not for profit,  is
 * hereby  granted,  provided that the above copyright notice is
 * included and reference made to  the  fact  that  reproduction
 * privileges were granted by DECUS.
 *
 * Compile command: cc grep -fop
 */

#include <stdio.h>

   ...


/*** Main program - parse arguments & grep *************/
void main(argc, argv)
int argc;
char *argv[];
{
   register char   *p;
   register int    c, i;
   int             gotpattern;

   FILE            *f;

   if (argc <= 1)
      usage("No arguments");
   if (argc == 2 && argv[1][0] == '?' && argv[1][1] == 0) {
      help(documentation);
      help(patdoc);
      return;
      }
   nfile = argc-1;
   gotpattern = 0;
   for (i=1; i < argc; ++i) {
      p = argv[i];
      if (*p == '-') {
         ++p;
         while (c = *p++) {
            switch(tolower(c)) {

            case '?':
               help(documentation);
               break;

            case 'C':
            case 'c':
               ++cflag;
               break;

            case 'D':
            case 'd':
               ++debug;
               break;

            case 'F':
            case 'f':
               ++fflag;
               break;

            case 'n':
            case 'N':
               ++nflag;
               break;

            case 'v':
            case 'V':
               ++vflag;
               break;

            default:
               usage("Unknown flag");
            }
         }
         argv[i] = 0;
         --nfile;
      } else if (!gotpattern) {
         compile(p);
         argv[i] = 0;
         ++gotpattern;
         --nfile;
      }
   }
   if (!gotpattern)
      usage("No pattern");
   if (nfile == 0)
      grep(stdin, 0);
   else {
      fflag = fflag ^ (nfile > 0);
      for (i=1; i < argc; ++i) {
         if (p = argv[i]) {
            if ((f=fopen(p, "r")) == NULL)
               cant(p);
            else {
               grep(f, p);
               fclose(f);
            }
         }
      }
   }
}

For those willing to delve into the code, it is clear enough for C programmers with some experience what the idea here is: the arguments passed to the program at the command line or from a script shell are scanned, the options read, and the arguments found after the pattern string interpreted as file names. List processing this way looks unwieldy and probably could be written a lot clearer, but this programming is efficient and offers complete bit-level freedom, and is known well enough to still be valid. Check the Linux kernel, and find out why C is used for such things, and why, in spite of the syntactic gibberish, or let's call it compactness, this programming style at least is still around in many professional programs.

When the pattern string is found, after all the arguments are checked, the files requested, or just the standard input (which is like the keyboard input to a program when it is running, where data can be read from, or the output of a previous program when grep is used as part of a 'pipeline', with the | character), are opened and offered to the function 'grep', which does the actual grepping.
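As a small illustration of that standard input case, grep at the end of a pipeline searches whatever the previous program prints:

c:\C >  type mt.c | grep test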

The argument handling is basically list processing, and it works for anything that fits the address space of the computer, so 5 thousand files with filenames a kilobyte long could in principle be offered as argument list, though even on professional unix versions, buffering of the argument list is probably limited to quite a bit less. When this code is used for such big lists, it will work just fine, be more efficient than pretty much every other method with maybe a little optimisation, and it is still quite portable. I compiled this program on a compiler certainly not meant for it, and as it says in the header, this stuff was written in days when I didn't even know Unix existed (though maybe I'd heard of it), and I needed hardly any editing, maybe none whatsoever, to get a working grep program.

The next issue of general interest is the idea of objects, already present in this code. There is no trick here: I just looked at some sites containing programmers' tools source files, I think this was at Simtel, picked up this program because I didn't have it yet, put the 16 kilobytes of source code on a floppy, read it into a completely obscure PC, used a (to me, that is) completely unknown C compiler with good enough K&R and ansi compatibility, and this stuff works on m*f* dos even, on a processor that didn't even exist at the time these sources were probably part of some expensive minicomputer's operating system, I guess a dec unix flavour.

The other subject clearly seen in this ancient but still up to date source (I used grep on Linux myself) is the idea of objects. Let's have a look at some more functions from this source file. The idea of a class of patterns is used in a pattern compiler, which makes a pattern such as ?.c understood by the pattern matching code; the function making such a pattern class we'll look at after the main grep function:

/*** Scan the file for the pattern in pbuf[] ***********/
grep(fp, fn)
FILE       *fp;       /* File to process            */
char       *fn;       /* File name (for -f option)  */
{
   register int lno, count, m;

   lno = 0;
   count = 0;
   while (fgets(lbuf, LMAX, fp)) {
      ++lno;
      m = match();
      if ((m && !vflag) || (!m && vflag)) {
         ++count;
         if (!cflag) {
            if (fflag && fn) {
               file(fn);
               fn = 0;
            }
            if (nflag)
               printf("%d\t", lno);
            printf("%s\n", lbuf);
         }
      }
   }
   if (cflag) {
      if (fflag && fn)
         file(fn);
      printf("%d\n", count);
   }
}

The 'while' loop reads input lines using the fgets function, which returns a line from the input file; then the actual 'match()' function is called to do the searching, followed by printing the filename and the match data on the output, depending on the choices set by the arguments, represented by a few flag variables.

The following function is the subclass compile function for character patterns like [a,b], meaning the character a or b (but no other), as an example; I'll currently spare you the full compile and match functions, which are a bit lengthy and not currently my aim. The match function basically takes the patterns generated by the compile function and the function below as a guideline to extensively compare every possible character string on a line with the requirements, in principle starting from every possible character position in the line, from first to last.

The compile function makes a list of wanted characters in a coded form the pattern match function understands.

/*** Compile a class (within []) ***********************/
char *cclass(source, src)
char       *source;   /* Pattern start -- for error msg. */
char       *src;      /* Class start */
{
   register char   *s;        /* Source pointer    */
   register char   *cp;       /* Pattern start     */
   register int    c;         /* Current character */
   int             o;         /* Temp              */

   s = src;
   o = CLASS;
   if (*s == '^') {
      ++s;
      o = NCLASS;
   }
   store(o);
   cp = pp;
   store(0);                          /* Byte count      */
   while ((c = *s++) && c!=']') {
      if (c == '\\') {                /* Store quoted char    */
         if ((c = *s++) == '\0')      /* Gotta get something  */
            badpat("Class terminates badly", source, s);
         else    store(tolower(c));
      }
      else if (c == '-' &&
            (pp - cp) > 1 && *s != ']' && *s != '\0') {
         c = pp[-1];             /* Range start     */
         pp[-1] = RANGE;         /* Range signal    */
         store(c);               /* Re-store start  */
         c = *s++;               /* Get end char and*/
         store(tolower(c));      /* Store it        */
      }
      else {
         store(tolower(c));      /* Store normal char */
      }
   }
   if (c != ']')
      badpat("Unterminated class", source, s);
   if ((c = (pp - cp)) >= 256)
      badpat("Class too large", source, s);
   if (c == 0)
      badpat("Empty class", source, s);
   *cp = c;
   return(s);
}

'store()' puts its argument in an array, in subsequent positions; the function above looks for the various ways of defining data in the class of [] character pattern definitions.
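For completeness, a store() to go with this would be little more than a bounds-checked append to the compiled pattern buffer; my sketch of it, not quoted from the file:

/* append one code byte to the compiled pattern in pbuf[] */
store(op)
int op;
{
   if (pp >= &pbuf[PMAX])       /* PMAX: the buffer size define */
      badpat("Pattern too complex", pbuf, pp);   /* or whatever the real source calls */
   *pp++ = op;
}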

Is there a point to all this token manipulation? In the end it makes it possible to communicate intelligently to some computer what it should look for in a pile of text files, for instance if one wants the names of all persons with any first name, a surname starting with a or b, living in Amsterdam, stored per line in a file like this:

john Doodle city=some city
al smith, some street ;city=amsterdam
...

a grep as follows would do the job:

grep '* [a,b]* city=amsterdam' addresses.txt

Suppose we have various address files, called address1.txt, address2.txt, etc.; the command would be:

grep '* [a,b]* city=amsterdam' address*.txt

I just looked it up: the comma is in this case simply part of the characters to look for; this grep takes any character between the brackets as a character to search for. In shell scripts, and some other programs, the comma indicates separate characters, possibly also represented as numbers; not so within this program. A dash indicates a range, for instance x through z as [x-z], which judging from the RANGE handling in cclass() this program supports too. The escape character is \ here, and a leading ^ negates the class, as the code above shows.

Now suppose someone is updating the addresses while we want to look some up. The idea is that if the file update for each address is done at once, after the updates have been prepared, there is no problem: if the search is done a millisecond before the update, the old address is returned; if it is done a second later, the updates are reflected. No problem. A problem would arise if a person moved from one address file to another, and the update is such that each file by itself is updated in one stroke, or as it is called, an atomic operation, but not all files together, meaning first the one the person was in and later the one he ends up in, with time for access in between; then queries may go wrong, listing the person not at all or twice, when the grep approach is used.

Such problems are close to the essence of many database structures, and being aware of them makes one understand quite well what they are about.

The per-file atomic update can be done by preparing a new file containing the changes, and renaming that file to the correct name when the changes are complete; the renaming is usually automatically an in-one-stroke, or uninterruptible, operation in an operating system. For multiple files, this trick doesn't work, though it could be attempted at directory level, or programmed decently from the start.
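A sketch of that per-file trick with standard calls (file names made up; the rename() is the atomic step, on unix at least):

#include <stdio.h>

/* write the updated addresses to a scratch file, then swap it in place */
int update_addresses(char *name)
{
   FILE *f = fopen("addr.tmp", "w");
   if (f == NULL)
      return -1;
   fprintf(f, "al smith, some street ;city=amsterdam\n");   /* the new data */
   fclose(f);
   return rename("addr.tmp", name);   /* readers see old or new, never half */
}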

Suppose the multiple file case is attempted anyway; the database software still faces the idea clear in this example: more than one file isn't necessarily nice enough, because it takes copying a whole file for each update, which causes a lot of disc access and extra work. When a file system allows seeking, or when a different structure is adopted, such problems can be prevented. In either case, the idea is that data is somehow fragmented, and that accessing the data, just like a file system does it, is done by putting the right pieces together in the right order. Conceptually, the text file could at the moment an address is changed be atomically broken up into a piece on disc until the update and a piece after it, making all users of the file 'see' it as [1st part, old data for the changed piece, last part], while only the part containing the changed address is rewritten on disc. That puts the burden on the computer system of making this happen, which is possible when disc access is done intelligently, but not necessarily the most efficient way when a lot of updates are done, or in comparison with streamed disc access hardware provisions.

In short: the whole idea of a database is dealing with these issues in some intelligent way for certain classes of problems, and certain problem classes may not be covered by all database techniques available. A main idea of databases must not remain unmentioned in this context, which is the idea of indexing, meaning making a list of entry or field identifiers representing a certain subset of the database, usually in a certain order. A list with references, basically. And maybe the idea of associativeness, aka 'relations', or symbolic search criteria with database fields as part of a logical or search expression. Hashing means making a unique identifier for a piece of data in a way that is efficient for computers with fast logic and addition units, which means pretty much all computers when compared with disc access. A 'key' is formed for the data in an efficient way that makes searching much faster, but has as imperfection that extensive searches, and many kinds of logical searches, are not supported by the technique without considerable work, or not at all.
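To make the hashing idea concrete, the key forming can be as cheap as a few shifts and adds per character; a generic sketch, not any particular database's method:

/* reduce a key string to a table index using only shifts and adds */
unsigned hash(char *s)
{
   unsigned h = 0;
   while (*s)
      h = (h << 5) + h + *s++;   /* h = h * 33 + next character */
   return h % 1024;              /* say, 1024 buckets */
}

Equal keys always land in the same bucket, so an exact lookup costs one computation plus one disc access, but a query like 'all surnames starting with a or b' gets no help from it at all, which is exactly the imperfection meant above.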

Having a method to access a computer's basic modes of operation makes it possible to tackle every problem with the right program, whereas examples of going about things differently usually lead to inefficient or unwieldy results that don't deliver much power for the effort.

For someone who may have an interest and no linux or unix, or no desire to search Simtel or other source repositories themselves: the compiler is by David Lindauer, and the grep source I'll put here.

I thought I might look at seeking and rewriting files, using standard unix calls; then at least portable software can be the result, and seeking may be fun enough, just like testing binary IO from low level calls, which I suspect may be a lot faster. Basics are always important in these fields. And now that I can grep the includes to compensate for the lack of elaborate manuals, programming could be a doable activity. Seeking, ha, that stupid stuff was a security loophole into files of other users, I remember; Linus fixed it, I tried some time ago. Does linux still not allow root login from other terminals than the console? I found that not nice when wanting to do a little system maintenance over a network. I think su was allowed, but not rlogin -u root. Anyhow, security first, maybe.
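The seeking itself, with the standard low level unix calls, would be roughly this sort of thing; a sketch with made-up offset and data:

#include <fcntl.h>
#include <unistd.h>

/* overwrite 4 bytes in the middle of a file in place, no copying of the rest */
int patch(char *name)
{
   int fd = open(name, O_RDWR);
   if (fd < 0)
      return -1;
   lseek(fd, 100L, SEEK_SET);    /* position at byte offset 100 */
   write(fd, "test", 4);         /* rewrite in place            */
   return close(fd);
}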

Harddisc Recording

In one of the latest diary pages I mentioned the idea of putting audio signals on harddisc, such as is done in harddisc recording packages and even special machines. Such harddisc multitrack machines, and the windows, mac and linux programs hopefully doing a good job as well, are basically recording incoming audio streams, analog to digital converted, to a harddisc with sufficient access speed and capacity.

Knowing that a harddisc fragments files, so as not to be 'full' at some point when deleted files can be replaced by new ones of different sizes, it must be fast enough to record every piece of sound as it comes in at a place still free, moving the head around fast enough, and waiting for the right sector on the circumference of the spinning magnetic disc. The idea some time ago was that recording uncompressed audio would only be without trouble when the data can be written as one contiguous stream to adjacent tracks on a harddisc, which makes it hard enough to make such software deal with, let's say, splicing a segment into a recording, or editing in general, and requires the harddisc to be maintained in a (partially) specially formatted structure, leaving certain cylinders for audio purposes.
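To put rough numbers on it: CD quality stereo is 44100 samples per second times 2 bytes times 2 channels, about 176 kilobytes per second per stereo track, so even 8 such tracks need only some 1.4 megabytes per second sustained; the catch is in the word sustained, since every seek to a fragment eats milliseconds out of that budget.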

I'm not too much into the idea now, although it would definitely be possible to at least consider doing some sampling, and of course putting that on disc, but not well enough to make productions with. Maybe recording and converting from core to mpeg, and doing mpeg mixing, would be an option, but then again, that's not what I'm into now.

The idea of producing a song or even an album based on discs instead of tape recorders is not bad, when applied right, though not necessarily of the quality I would want. Digital signal processing to do the recording and mixing I've done in various ways, see my site, that is not a problem, and modern discs, maybe after running a defragment, should be fast enough.

Simulating marimba

I used this version of an older physical modeling program, with recent modifications, to make samples of sounds that resemble a marimba, after some processing with the fourier generator and the analog simulation tool. The idea is to create harmonics from samples, not just sine waves, make a spectral change by interpolating between two such spectrum definitions, each made of a few components, and set ADSR and filter to get good sounds.

By looping the samples not over their last loopable section but overall, the higher harmonic sliders create an echo effect, which when damped with the filter and the envelope gives interesting effects, though not in my opinion desirable for general use; for that it should be used more deliberately, as is quite possible. Anyhow, just playing around with the software, again running a sequence to try the resulting sound with, the marimba sounded nice, warm, ticky, and with enough meat on it, though not fat, as many of these physically simulated sounds seem to be: fatter in the sense of liveliness than most other sounds, never thin, but only like analog fatness when non-linearities force wave shapes on top of the string's natural vibration.

I'm not bored with the possibilities yet, sonically, so this one-string, 100-or-so-section simulation is not bad, considering I'm comparing against good enough analog simulations and some library samples, and can usually get into sounds enough when playing them for some time. It is a good sign that without getting into connected strings, sample feeds, major nonlinearities or other, active, components in the signal path, the sounds are pleasing and non-boring. That means that even the basics, shown a few pages ago, which are strictly mathematical in buildup, are strong enough to base even a synthesizer on, and I'm quite convinced of the idea that the liveliness of the sound comes from the degrees of freedom in the simulation algorithms, and from the fact that physical instruments, which also are not normally as boring (or strong) as certain types of synthesis, have at least such degrees of freedom in their sound forming parts as well.

I also did organ sounds again, which work nicely when combining some pm samples, I don't know exactly why; they are not yet to the point of the soaring tones that would be desirable, but let's say the DX7 has some edgy sounds of similar kinds that are good enough to use.

Directly convincing is the parametrical generation of a banjo simulation: by just adjusting the pm parameters, a percussive, plunky banjo sound results, which should be even better with a polyphonic player, but playing moving third intervals in 16ths sounds almost like the real thing, in a mildly abstracted way I like. Without any further processing, even. Maybe I should record the sources and the parameters for all experiments, like in a journal, but I'm optimistic enough to think I'll recreate what is needed, and that a few dozen sound files are nice to play around with until I make the final pm general machine and user interface with save buttons.

Using additional processing, I also made good imitation string ensemble simulations, even a few really pleasing ones, which I don't think I could have gotten from the synths I played, with a real fine element in them that I think comes from the pm idea, though they were quite constructed sounds, with the pm samples being stacked and also looped, and with additional filtering. In general it should be noted that I use two playback oscillators with a nice detune for most pm results, which of course makes them different: mostly thicker and warmer, at times probably less punchy and direct. The fourier adder doesn't have a save button yet, so I didn't store the results of using it, which makes it more exciting whether I'll redo some of the possibilities easily enough; some sounds definitely require all 45 sliders or so to be set to at least 5 percent accuracy, which is not easy to reproduce when not having a clue how to do it...

February 24, 2001

What's the added value of a compiler and trying to compile someone else's code with it?

ANSI C

I'm fed up with 'missing prototype' warnings. I can live with variations in inline assembly formats, and with learning some assembly limitations (why can't I load every register in every addressing mode?), which at least lead to enough expertise to do some serious PC programming. I've not been thrilled with the cpu of choice in PC land; I suddenly remembered why it was again that I wanted a 68000 based machine, on top of the ST being cheap enough and quality enough at the time. At least those latest pentiums, and the bit earlier ones, get a reasonable mips rating to work with, no matter whether they could or could not easily be improved, so I guess the installed base and the existence of de facto standards would have to get to me, too.

I've done a little assembly combined with C to read a (binary or ascii) file with Z80 program or sound sample data in it and send it to the microcomputer, which works fine, and at least makes clear that even decent enough machine language (procedure) calling conventions work well enough to use. The standard stack frame isn't really my idea, but this stuff with an extra memory access for the pointer, and data indexed by run time resolved indirections, works well enough. Can't help thinking that 3 extra additions in address resolving can't be good for risc type response speeds, but then again, on chip processing is probably still quite a bit faster than even fast bus and memory access.

The ansi idea itself is fine enough, it makes it possible to have modules that work together more reliably, but the number of possibilities to generate errors, from permuting the possible mismatches of prototypes with functions and implicit datatypes being corrected by post-defined functions, is too high. Making all void definitions actually void is bad enough, but making every return explicit, to correct for the loader's default int after having done the void things right, is annoying. Important? Not really, except that for somewhat lengthier programs the number of warning prints is probably slowing compilation down.
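The cure for the warnings is of course a header with the real prototypes, included by every module; for the test functions grepped above that would be something like this (hypothetical header name, my sketch):

/* exten.h -- prototypes for the functions in EXTEN.C */
void abstest(void);
int  intrtest(char *v1);
int  traptest(void);
int  asmtest(int pp);
void structest(void);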

So now I want objects? I'll make them when I want to, and maybe I'll feel compelled to get the gnu c++ compiler again (which I used before), but I'm not sure I can install it now. A little C++ doesn't hurt, and it seems that I even saw some Objective C in there, though I could be mistaken. The idea of using that just to prove I can isn't appealing, but then again, a little easier printing and maybe some streaming facilities are not so bad, and I'll think of some good enough use for objects...

Seriously, I thought about it: for what purpose would I currently go C++ objects, as in for instance a synth simulator program? Not having a pentium and a preferred development environment (such as cygnus or linux), whatever I make will be suboptimal anyway, but why not make the best of what I can do; at least now I should be able to make some things even good enough for seriously selling. Well, the answer is that maybe I'd like some graphics object library, fake windows, some interaction stuff, and that of course I could squeeze all I do into some OO form, like I did for many years in Objective C, only to end up with the idea that maybe the language and messaging syntax were the major reasons, more than the added value of having an OO language prescribing object structure and implementation mechanisms.

Seriously, sharing a function over a certain set of data, and making some kind of inheritance work over all this, requires only a few functions, and doesn't require wizardry. Anyhow, the idea of doing objects without the language on top has been shown to be neither stupid nor new, and of course valid; it is maybe more a matter of fashionability, ease of use and available libraries. Do C++ programmers know their C in general? I don't know, but a lot of signals point me at the idea that it may not be a good idea to go OO without considering what programming is about and made of, on most machines that exist.
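A sketch of that point: an 'object' can be a struct holding its data plus a function pointer, and 'subclassing' is filling in a different function for the same slot; no language machinery required:

#include <stdio.h>

struct shape {
   char   *name;
   double (*area)(struct shape *self);   /* the 'method' slot */
   double  w, h;
};

double rect_area(struct shape *s) { return s->w * s->h; }
double tri_area(struct shape *s)  { return s->w * s->h / 2; }

int main(void)
{
   struct shape r = { "rect", rect_area, 3, 4 };
   struct shape t = { "tri",  tri_area,  3, 4 };
   printf("%s: %g\n", r.name, r.area(&r));   /* dispatch through the pointer */
   printf("%s: %g\n", t.name, t.area(&t));
   return 0;
}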

I guess the subject ticks me off a bit, because it seems unreasonable; not having decent programming around is not exactly the idea, but the overemphasis of a market on things that content-wise are simply not that right to make so much fuss about isn't motivating. Who says cars should have 4 wheels? No one, it just works fine enough; 2 exist too, but are called different, 3 works, but why not throw in a 4th to make it all more stable, and 5...

Some things have more than enough associated logic to justify their existence in a certain form, that's what I'm making clear, and the signs of things going wrong can easily be that those obvious, let's call them little or big truths, within a context, are not being taken seriously, are messed with, or are deliberately countered.

Usually not a good idea. Those pentiums don't work by the semi-fundamentally religious application of a paradigm adhered to by some semi-friends wanting to rule the world. Well, at least not content-wise too much. It takes hard and very complicated work to make such devices, and they work because the laws of physics, electronics and logic are not defied, but put to good use; otherwise chances of success are immeasurably slim.

Considering the amount of emphasis on software and its spread, I don't think it hurts to stay aware of its basics, the ideas that made it the way it is, the reasons for wanting it, and in which form.

There has not been that much actual progress in software land for quite some years, except in some clear enough ways. Really, there is not much in windows and most programs and methods I've recently used, seen or read about that has not in some form or another been done decades ago already. Seriously. Is that bad? No, of course not; if one wants to invent one's own wheel in one's own way, and spend a lifetime doing it, that is fine, as long as it is gratifying enough and not in some sense destructive or evil, and one can feed computers anything one likes, even destroy them, who cares; use the machines for such purposes rather than human slaves. And of course there is nothing against deciding that a certain economical activity is collectively considered worthwhile enough, or against making people occupy themselves with at least harmless enough subjects, on the contrary. But it is not a good idea to glorify, or in more mundane language take for granted, that ideas in software take on a role as if they have something to tell us per se, as an idea, and that the existence of certain ways of programming proves their validity.

We all know the joke about the meaning of microsoft applied to certain suggested properties of Bill Gates. Many don't seem to realize all too well what it means to take things for objects; that's of course fine with programs, and maybe even desirable, and not the worst, but in the human realm there are connotations that should at least make one feel uneasy. And not take the idea up to the highest intellectual holy grail.

Processes

I found out that windows 3.11 has some form of multitasking, which is fun to try of course. And indeed, I can run the compiler in one DOS window while continuing editing in another, and the whole thing works well enough: screen updates for the partially covered shell output work, and interleaving editing and compilation works, too.

15 years ago, when I wanted and did such things, not just on professional workstations but also on the at the time advanced enough 1 meg, 8 MHz 68000, 640x400 high quality screen ST, that was good; that was something that improved work pleasure, and when desired, productivity. Not that the comparison is complete, but the idea is that such things are worthwhile in some absolute enough sense, and take work to implement. It is not so easy to make all the screen updates coming from various processes work, make them efficient enough, and even pleasing enough to look at; that takes a library that is not too trivial to make oneself.

Did I say processes? A process is a program running in its own memory space, sharing processor attention with other processes. The idea is that processes can be run on one machine in semi-parallel, and the idea has long had a place in at least minicomputers to handle more than one user, by giving them millisecond time slices of attention, each in turn.

The idea of processes is interesting enough, and pretty much the only existing sensible way to multiprocess. It takes separating the address spaces of the various processes, a way to get one process out of action and let another become active by some operating system activity, and it requires care to be taken with the time distribution. Finally, processes communicate with each other or with system facilities, which requires contention and deadlock control, and preferably programming means and methods to make all that work smoothly enough.
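On a unix style system, creating such a semi-parallel process is one call; a minimal sketch (Windows 3.11 obviously arranges it differently):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
   pid_t pid = fork();     /* from here on, two processes, separate address spaces */
   if (pid == 0) {
      printf("child: gets its own copy of memory\n");
      _exit(0);
   }
   wait(NULL);             /* parent waits; the scheduler slices time meanwhile */
   printf("parent: continues\n");
   return 0;
}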

Those communication means and methods are very much an issue at stake for OSes decades younger than the ones that in fact did such jobs not so badly, whereas modern ones fall short in those ways quite a bit.

More later, I've got things to do.