Theo Verelst Diary Page

Latest: 30 january 2001

I've decided, after a good example, to write some diary pages with thoughts and events.

Oh, in case anybody fails to understand, I'd like to remind them that these pages are copyrighted, and that everything found here may not be redistributed in any other way than over this direct link without my prior consent. That includes family, christianity, and other cheats. The simple reason is that it may well be that some people have been ill informed because they've spread illegal 'copies' of my materials, even with modifications. Apart from my moral judgement, that is illegal, and will be treated as such by me. Make as many references to these pages as you like, make hardcopies, but only of the whole page, including the html-references, and without changing an iota or tittle...

And if not? I won't hesitate to use legal means to correct wrong that may be done otherwise. And I am serious. I usually am. I'm not sure I could get 'attempt at grave emotional assault' out of it, but infringement of copyright rules is serious enough. And Jesus called upon us to respect the authorities of state, so christians would of course never do such a thing. Lying, imagine that.
 

Previous Diary Entries
 

January 30, 2001

Is object-orientedness a concept like the computer as a slave, but then for software's sexiness? We'll have a look at various, let's call them fundamental, ideas that at least deserve the term in a positive sense in western societies' evident roots.

The super and root class idea

Plato has been described as wanting to inform us of the existence of the perfect triangle concept, and that we are living in caves where we can only see shadows of maybe the real and living things, or maybe the equivalents of these perfect ideas about the essence of our world and existence.

Thinking along the lines of acknowledging or assuming the possibility of 'the perfect' idea or concept or even spiritual essence is not needed to lead a nice and honorable enough life, but when it is assumed, without final evidence that it is justified, that 'the perfect', or at least the desirable, isn't there, doesn't exist, then depression, decay and maybe the end of some isms are probably near. In pictorial terms: if we assume that the idea of a triangle, in some kind of imaginable form, such as infinitely thin legs or perfectly matching sides, or just as a geometrical idea, doesn't exist, our image of mathematics is probably not going to be optimistic, rich, or fruitful.

The case of the triangle is doable; most people probably have no problem with that thought, though some may argue it needs to exist in a certain space, that line segments are implicitly assumed to exist, and that the idea of these mathematical concepts' existence cannot necessarily be extended to imply that similar concepts exist in other areas of life and thinking. When we are all a bunch of accidentally emerging, mainly water bags with some carbon and salts, that just happen to walk upright because of mere chance over a long timespan, we don't need to make any assumptions about higher concepts to define our lives in the whole of things; we can postulate whatever we desire, or think what is convenient or advantageous.

Ideas are not at all coincidental; it is my firm belief that most ideas have foundations that can even be found and traced, though I certainly wouldn't argue that is possible for everyone at any time. Because I'm not making that the main subject now, the spiritual is not included in the picture here to start with, even though it may well be one of the major sources, or the major source, of many ideas and their place in life, both person-wise and society-wise.

In object oriented programming languages, such as Objective C (I've officially and professionally, at university level, programmed in this language for well over 5 years of my life, so I can claim to be knowledgeable enough to say something about it), C++, Java, or incrTcl, the concepts in the title exist at the highest conceptual level of the languages, explicitly or implicitly. Objects are things which have properties and methods to act on those properties. Every object has a place in its own object class. Let's say we have the object 'table' as part of the class of furniture. That means that when the class furniture for instance has the method 'paint', then the object table can be given the message to invoke its inherited (as it's called) method paint, which as a result would for instance transfer the message 'paint', with some parameter, the color, to the subparts of the table, let's say the legs and the tabletop, which after receiving that message would invoke a function (method) that sets a certain datastructure, color, to the desired value.
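To make the table example concrete, here is a minimal sketch in Python (the languages named above would look different in syntax, but the idea is the same); the class names and the four-legs-plus-tabletop structure are my own illustration:

```python
# Hypothetical sketch of the table/furniture example.
class Furniture:
    def __init__(self):
        self.color = None
        self.parts = []          # sub-objects, such as legs and a tabletop

    def paint(self, color):
        # Inherited method: set our own color, then forward the
        # 'paint' message to every subpart.
        self.color = color
        for part in self.parts:
            part.paint(color)

class Leg(Furniture):
    pass

class Tabletop(Furniture):
    pass

class Table(Furniture):
    def __init__(self):
        super().__init__()
        self.parts = [Leg(), Leg(), Leg(), Leg(), Tabletop()]

table = Table()
table.paint("red")               # the message 'paint' with a parameter, the color
print(table.parts[0].color)      # -> red: the legs received the message too
```

The table object never defines paint itself; it inherits the method from the furniture class, exactly the mechanism described above.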

Again, I've built advanced, working, and efficient object oriented programs, with a lot of fashionable ideas (such as streaming, and multilevel access) in them, already 10 years ago, so I do have background knowledge about the subject. The reason I'm sort of pissy about it is that there are ideas in this world that are not right, and some of them in the computer area sometimes maybe don't annoy me, but sort of make me want to correct them, when that means getting some very unjustified attitudes of probably some (not all) probably very evil people down. They'll not have their attitudes in the informatics area as backup or justification or even utter proof for their miserable thinking modes, spiritual state, or general nazi-level. No way. Do take notice that object oriented programming skills normally assume non-object-oriented programming skills as part of the whole of programming just the same, and that anyone knowledgeable enough about non-OO programming will have little effort to understand, and not feel wronged by, what I'm saying. And, as I said, I program in OO languages myself, so why would I be against them? There are other programming, let's say, paradigms: imperative (assembly, C), list-based (Lisp, Smalltalk, Tcl), graph-based (flow charts, for instance as in microcontroller programming environments or older programming support methods), process oriented (stream based, such as under unix), stack based (Forth), for instance. Most high level and even low level languages know and use the concept of a function or a procedure, though formally, functional programming can be discerned, too.

Most programs can be written in any language, minus some timing related programs, maybe. In the theoretical computer science sense that thesis can be proven by rewriting a language's basis in components available in the other language, and making all constructions available in a similar way. That works for pretty much every combination of the languages I mentioned; I could do it myself (except I don't remember much Forth). That's in the Turing machine idea, extending it with explicit subsets of Turing behaviour, I guess, in theoretical language; I'd need to look it all up again to make this formally closed enough. Start with von Neumann in theoretical computer literature lists, check some literature references, and the ideas can be read about well enough.

Then why all these languages? Even though the same overall behaviour can theoretically be arrived at, it is not necessarily efficient to solve a problem in another programming language this way. Not just in the sense of the resulting program's execution speed, but also in the sense of the size of the program. And how hard it is to make, understand or maintain. And there is a matter of taste at stake. And maybe most of all, the machine that is used to run the program is probably the major factor in having preferences for certain programming languages. Luckily, most modern machines are very, very alike in the theoretical or structural sense: von Neumann architectures with some extensions of similar structure for almost every case, and with very similar basic instruction sets in the core of the machine; really, their theoretical differences in terms of capabilities are quite small. Efficiency-wise there are differences over various types of operations, maybe an order of magnitude, structure-wise, for the various modern architectures, minus when simply the bare speed of certain parts is at stake. Making good and efficient use of the architecture and the parameters in it is of course important, but that has not much to do with a programming paradigm; that has to do with hardware properties, and the algorithms used to tackle a certain problem, not with the formal language construction used.

Electronic circuits are the basis of computers, albeit utilized in binary (two valued) form. They have their own rules, which are incredibly complicated, really, in normal human language, and they have nothing whatsoever to do with formal programming languages. Yet they determine the behaviour of the whole machine just like the programming language does. That must mean that even though their behaviour is complicated, and their structure also complicated and very different, for instance comparing a Mac and a PC, their behaviour can in the end be made to look quite alike. The question is whether that is desirable. When writing portable software, that runs on various machine types, it is. When one wants to make a special machine for some purpose, let's say a Voodoo graphics accelerator card, which is similar electronics with a very different structure, that is also programmed, it is not at all useful to make the behaviour the same. It has no screen, no windows, only very limited types of computations, and therefore is probably not used right or easily by using the same programming ideas or languages.

For computer makers and designers that is not hard to understand; supercomputers are different, too, clusters of networked machines are as well, and at machine code level, the bare processor, alternatives of many kinds to the von Neumann idea even exist in abundance, and at times, such as with the Voodoo card, they are useful. No one programs the internal behaviour of a graphics card in an object oriented language in the normal sense that concept is used. Insane. The same holds for 'programming' the Pentium's internal behaviour, such as Intel at some point must do, to make it tick right.

Back to the table and legs. Suppose we want to paint the table, for instance as part of a graphics design program. Then making conceptual objects of the table and its parts is fine; that makes sense. Who initiates the painting? Some other object, let's say the painter. Is that an object, too? Probably. Let's say the painter is part of the class of humans. And tables of the class of furniture. Those classes would come together maybe by making the furniture part of the physical realm class, and humans a part of the class of animate things, which would then both be part of the class of possible worlds. Far fetched maybe, but completely possible, and completely in line with how these software suites are made. And in the programming logic sense, a lot worse and more far fetched constructions can be found.

What's at stake here? Basically I see a computer as a machine which has data going in and out, storage capacity, and computation facilities, which is historically and theoretically quite maintainable and practical. An object in OO programming, look it up in decent books on the subject, is a collection of data and methods, or functions, that is made into an actually existing entity by instancing it from the class it belongs to; in other words, a class of objects is sort of the whole where the type of an object is defined. When an object is 'created' from a certain class, it will have a standard structure going with that class; that means it has certain variables, and certain functions, that are defined in that class. The programmer can send the object a message, which is basically a combination of invoking a method (function) with arguments.

Classes can be made as well, and they have hierarchical relationships dictating their inheritance behaviour. An object inherits data structures, or variables, and functions from its class, which may inherit them from a class higher in the inheritance hierarchy.

Now we get to the title: a superclass is a class that supersedes another (sub)class in the inheritance mechanism, or maybe more than one subclass. The world could then be a superclass of both animate objects and furniture. Where do all these classes end (without becoming all too obviously punful)? At the root class, basically: that is the class all others can inherit from, that itself does not have a superclass to inherit from. Bull? Read it yourself, see if you can figure it out; really, this is roughly the way the whole OO language idea is put together, minus variation and other names. Superclass and root class are actual words used.
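The superclass-to-root-class chain can be shown directly; a small Python sketch (class names World, Furniture, Table are my own illustration; in Python the root class is literally called `object`):

```python
class World:                 # a superclass of everything below
    pass

class Furniture(World):      # subclass of World
    pass

class Table(Furniture):      # subclass of Furniture
    pass

# Walking the inheritance chain ends at the root class, 'object':
for cls in Table.__mro__:
    print(cls.__name__)      # Table, Furniture, World, object
```

However the classes are named, every chain bottoms out at the one class with no superclass of its own.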

Now there is nothing against this idea if it is appropriate, which it may well be, but relationships between classes of objects are only possible in a hierarchical sense, which is not always natural, and objects may want to inherit from unrelated super or even subclasses, which distorts the whole idea.

Program-technically and theoretically speaking, there are relationships and sets that need to be defined, which can be done in many ways. A list can be made in a program, pointing to other lists. An item in a list may be given many pointers to other lists or items in them, and the possibilities are endless to apply these principles without restraining such relations to lists in hierarchical relationships, or by making datastructures or functions in a library implicit parts of a class idea.
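A tiny sketch of that freedom, in Python (the names are just illustration): items and lists can point at each other in any direction, even in cycles, with no hierarchy imposed by the language:

```python
# Lists and items cross-referencing each other, no class hierarchy needed.
tables = []
painter = {"name": "painter", "paints": tables}     # item -> list
table = {"name": "table", "painted_by": painter}    # item -> item
tables.append(table)                                # list -> item

# Follow the references around the cycle and end up back at the painter:
print(painter["paints"][0]["painted_by"]["name"])   # -> painter
```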

The object idea also isn't as valid in general as it is applied in OO programming. Why would certain methods have to be automatically linked with certain datastructures? When many different types of datastructures are used, with many methods being applicable to, in principle, all of them, the concept collapses or ceases to be of use. Is the wood of the table leg animate? Is trees a superclass of tables? Why would the method paint be available for blank tables and not for humans painting nails? Or painting a table... The human invokes the paint method from the table (or the leg)? Or does the human have a method paint that applies to data actually stored in the objects leg and tabletop? Or the human has a method... This is a question of organizing things, which is quite decently possible in normal language, but in an OO language these limitations are not just linguistic: they make for certain programming structures, and have data and function use as a result that may make for quite inefficient programs, which is common knowledge. That is bearable or even desirable when it pays off in the ease of use or programming clarity department.
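The alternative being argued for can be sketched as well: one plain function, not welded to any class, acting on anything that carries the right data (the dict layout here is my own illustration):

```python
# A free function: it paints anything with a 'color' slot,
# be it a table, a leg, or a fingernail, no class hierarchy required.
def paint(thing, color):
    thing["color"] = color

table = {"kind": "table", "color": None}
nail = {"kind": "fingernail", "color": None}

paint(table, "white")
paint(nail, "red")
print(table["color"], nail["color"])    # -> white red
```

The question of who 'owns' paint simply does not arise here; it is a function that applies to data, as in traditional programming.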

Traditionally

Is there a way out of this paradoxical realm of OO programming paradigms? Of course. There are alternatives; there are computers that do very well defined jobs accurately, quickly and reliably; it must be possible.

Traditionally, a program discerns data and functions, which are separated by the clear distinction that program code can be read by the processor, which interprets it as machine instructions, while data can be anything, and can normally not be fruitfully used for the same (the machine would go berserk).

A function is mathematically a prescription to go from, let's say, input data over some prescription to an output result. In programming this is similar: arguments can be passed to a function, which then does something the programming language and processor allow, and returns values, roughly.

A library is a set of functions combined in a file for use with various programs.

A simple variable is a place to store information in a certain form, such as a place to store one byte (character), an integer (a number such as -1, 0, 1, 2 or 3), or a floating point number (such as 3.1415926535) with limited and known accuracy, such as 12 digits.

An array is a list of variables of a certain kind in a row, indexable by referencing its elements by an integer number between 0 and the number of elements in the array minus one.

A datastructure is a combination of variables that can as a whole be referenced by a certain type.

A pointer is a low level concept which is used to point to the place in physical computer memory where something is stored or taken from. A computer memory is a long list of bytes in a row, and basically a pointer is a number pointing to one specific byte in the list, which may on a modern machine be quite a big number, up to a hundred million, for instance. (Technically, pointers may be relative not to the base of the machine's physical memory, but for instance to a process, a program unit.)

A linked list or linked datastructure is a set of datastructures with pointers in them that refer to other datastructures, linking them together, possibly as a regular chain structure, or just as it appears handy.
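The definitions above can be put together in one small Python sketch; note that Python references stand in for pointers here, which is a simplification (there is no raw memory address in view):

```python
# Variables, an array, a datastructure, and a linked list, in Python terms.
byte_var = 65                        # a simple variable holding one small number
pi = 3.14159265358979                # floating point, limited known accuracy
arr = [10, 20, 30]                   # an array, indexed 0..2

# A datastructure as a dict, and a linked list chaining two of them;
# the 'next' field plays the role of the pointer:
node_b = {"value": 2, "next": None}
node_a = {"value": 1, "next": node_b}

# Walk the chain until the pointer at the end points nowhere:
values = []
n = node_a
while n is not None:
    values.append(n["value"])
    n = n["next"]
print(arr[0], values)                # -> 10 [1, 2]
```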

A process is part of the class of the operating system's process forms; functions are methods of any process, and are available as they are programmed and called, and every process has a collection of data of any form the programmer wants, organized mainly as variables referenced by name association.

In normal language: most modern operating systems can support various programs running virtually at the same time, by dividing the processor time among them, and giving each program its own piece of memory. Such a program in action is called a process.

Processes may communicate with files (the disk), the keyboard, and the screen of the computer system, and usually with other processes. On a machine with a network, they can also communicate with the network hardware, so for instance a process which is an active netscape program on one computer may access the internet and communicate with a web server process on the other end of the network. A process is a program in action, running, and as we know, a program can be made into more than one active process, for instance two wordpads running semi-parallel.
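A minimal sketch of one process starting another and reading its output, using only Python's standard library (the child here is simply a second Python interpreter, a stand-in for any program):

```python
# Parent process launches a child process and reads what it printed.
import subprocess
import sys

child = subprocess.run(
    [sys.executable, "-c", "print('hello from the child process')"],
    capture_output=True, text=True,
)
print(child.stdout.strip())          # -> hello from the child process
```

The two are genuinely separate processes with their own memory; they only meet through the stream of output, which is the stream-based communication mentioned below.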

The idea of a process is richer than a class, or object; simpler in terms of structuring possibly, though not necessarily; is directly related to the operating system's method of juggling more than one program at the same time; almost always has communication methods implemented by streams; and is then inherently capable of dealing with parallel behaviour, and communication behaviour between parallel processes, not just repeated or nested function invocation.

What's a computer, anyway

I'll do a good computer model I've thought about before when I find the opportunity, one that can clarify how a processor actually works, accurately and generally enough to make clear how and why computers tick. I'll start with a short version; I'll see how far I get.

Beforehand, I'd like it to be noted (again, maybe) that I've been knowledgeable about both computer building blocks and (micro)computer systems since at least the last part of the seventies, with hands-on experience in both the building and the use of computer (sub)systems. And I'm an electrical engineer with PhD level research experience in the high speed computer design area; all this just to make clear I'm using certain non-puffed imagery on purpose, because I think it serves the purpose of explaining better, not because I can't do the impressive talk stuff. Ask me questions over email, might be fun.

A computer is a machine based on the sort of logic that I've described on previous diary pages, maybe check the list: AND gates, inverters, latches, adders, selectors, tri-state buffers (not covered yet, I think), etc. I'll not go into all these elements first, though that would be educationally preferable, but I do use the ideas in the other pages as well.

Let's say we have a desk, a file cabinet, and an operator. That's our computer for now. The file cabinet is the memory, with pages containing data in a certain drawer and folder, which are all numbered consecutively. The desk has a number of fixed places to put papers on, and the operator is a trained dummy with a pocket calculator who can get data from the file cabinet and back, compute simple mathematical problems, and access things on the desk.

The idea now is that the desk plus the operator is what the Central Processing Unit (CPU), like a Pentium, does. The operator is stupid: he takes simple instructions, which make him operate in fixed patterns he knows about, from the file cabinet. There is one place on the desk with a sheet of paper (they are all rewriteable, sort of plastic with a felt pen or so) called the program counter, which when the whole act starts up is set to zero.

Then the power is turned on, the cabinet will contain something to begin with, and the operator is presented with a program counter saying '0'. He goes to the file cabinet, opens drawer 0, gets folder 0, and checks page 0 for his first instruction. Lets say the instruction is 41. In the operator language, that means he must place the number on the next sheet in a position labeled 'a' on the desk. When he has done so, he knows he must increment the program counter, now pointing at 2 (0+1+1), and do another 'fetch' cycle.

The next sheet says '42'. He knows that means something similar to 41, except that the number on the next sheet must be put in location 'b' on the desk. After having done so, he again adds two to the program counter, and is ready for another instruction fetch.

The next instruction, from sheet 0.0.4, reads '110'. This being in the hundreds range, he immediately knows he'll need his calculator, which he gets ready. The instruction 110, he reads in the instruction manual, means 'add the numbers in locations a and b on the desk, and put the result in a'. Which seems a bit elaborate, but this is normal language for the operator, he's used to it, so he computes the sum, and puts the result where the first number used to be, in the a register on the desk.

The next instruction, read from sheet 0.0.5 (the add instruction took only one sheet), is a bit disrupting: 200, which he knows is a 'jump' instruction. He takes the data from the next sheet to know where to jump to, which is in this case in the second drawer, 1.0.0. He dutifully writes that number in the program counter's place on the desk, and pretends nothing happened.

Another fetch.

This may look a bit artificial, or non-technical, but it is quite in line with what almost any real-life computer system's processor does. The idea of the numbers meaning instructions that then generate a little sequence of activities in the CPU is the basis of how a computer's heart works. The little sequence is called a 'microprogram'; the list of instructions in the drawers of the cabinet is called a machine program. The microprogram is usually fixed; for instance one may get a book on Pentium assembly programming, or look at the Intel web site and look up datasheets for the Pentium's 'instruction set', where about 500 or so basic instructions can be found, including the codes that the Pentium understands when it reads the instructions from memory.
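The operator's fetch-decode-execute routine can even be played out in a few lines of Python; a sketch where the opcodes 41, 42, 110 and 200 follow the walkthrough above, while opcode 0 (a halt) and the example numbers 3 and 4 are my own additions so the run can finish:

```python
# A toy version of the desk-and-file-cabinet machine.
def run(memory):
    pc, a, b = 0, 0, 0            # program counter and the two desk registers
    while True:
        op = memory[pc]           # fetch the instruction the counter points at
        if op == 41:              # put the number on the next sheet in 'a'
            a = memory[pc + 1]; pc += 2
        elif op == 42:            # same, but into 'b'
            b = memory[pc + 1]; pc += 2
        elif op == 110:           # add a and b, result in a; one sheet only
            a = a + b; pc += 1
        elif op == 200:           # jump: the next sheet is the new counter
            pc = memory[pc + 1]
        elif op == 0:             # halt (my addition, not in the walkthrough)
            return a
        else:
            raise ValueError("unknown instruction %d" % op)

# The program from the walkthrough: load a, load b, add, then stop.
program = [41, 3, 42, 4, 110, 0]
print(run(program))               # -> 7
```

The memory list is the file cabinet, the local variables are the sheets on the desk, and the if-chain is the operator's instruction manual; nothing more is needed for the principle.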

The idea of the structure has its parallel in an actual digital computer, where the file cabinet is the memory chips, and the desk is the CPU with its internal registers, of which the program counter is an important one.

A 'bus' is a set of electrical connections, for instance 8 or 16 or 64 in a row, running from various parts of a computer to another. A bus party is a digital circuit that can either read what signal is on the bus (many wires in parallel can form many different numbers), or can put a pattern of 0's and 1's on the bus itself, making that pattern or digital number available for other parties on the bus. The operator is in the Pentium, for instance, as well, to put signals on and get them from busses in an orderly fashion. That is where the analogy isn't completely accurate: there is for instance a bus running from the memory (the file cabinet) to the registers in the Pentium (the desk), and all those registers or storage places can either read new contents from the bus, put their information on the bus, or be idle. The operator makes sure that they are signalled such that the right information is put on and read from the bus wires at all times.

The microprogram dictates which parties are made into reading from the bus, writing to it, or being idle, for each bus in the system, and it also has a sort of piano roll to determine, for each machine instruction, what the exact time order for all bus control pins for each register will be as the little roll is played.

This is not by far all, because we could also make the registers point to certain drawers. And to make subroutines possible, there is another special register, called the stack pointer, which is used to store addresses to return to after the program counter has been changed when what is called a subroutine is called. And the operator knows a whole range of logical and arithmetical instructions.

With this added, and some more knowledge about the finer structures and busses and signals, this is really the way a computer system works. The model (not the physical desk and such, just the ideas) has been in my head for at least 20+ years or so, and still fulfills its purpose just fine to understand a Z80, a Pentium, or a modern DSP system, no problem. So it is worth it.

The major fun, at the time incredible for me, is that all this can be made completely with readily available logic, as I described in other pages: gates, registers, counters, adders, and tri-state buffers. In fact I literally made the kind of structure and idea I described above with (at the time) TTL logic parts into a working device doing what I described above (and more). The same logic is in the microcomputer described on my synth pages, completely similar thinking.

The advantage of being knowledgeable about these things? To understand the basis of all modern computer circuits (though 'understanding' completely is quite ambitious, it's more about the main ideas and their logic, which are completely valid though), to understand what computers do and probably cannot do, and what the fuss is about in programming lands, and for fun.

It might be possible to do a little von Neumann machine interactive example, let me think.

Java Menus test

This applet is under test:
