Aucbvax.5834
fa.works
utcsrgv!utzoo!decvax!ucbvax!works
Mon Jan 18 13:47:17 1982
WorkS Digest V2 #7
From JSOL@USC-ECLB Sat Jan 16 19:51:09 1982
WorkS Digest	        Saturday, 16 Jan 1982       Volume 2 : Issue 7
Today's Topics:	
               Virtual Memory Concepts, Memory Schemes
----------------------------------------------------------------------
Date:  14 January 1982 01:16 est
From:  SSteinberg.SoftArts at MIT-Multics
Subject:  WORKS V2 #4: Memory Schemes
1. Why virtual memory?
When the large computers came out everyone said that memory was
getting cheaper and that virtual memory roached performance and was
obviously unnecessary.  Everyone learned to live in cramped memory
space and thousands of programmer hours were wasted on hairbag overlay
schemes and disk based memory managers.
When minicomputers came out everyone said that memory was getting
cheaper and that virtual memory roached performance and was obviously
unnecessary.  Everyone went bananas trying to cram anything into itty
bitty 64K (B or W) address spaces, which was especially ludicrous since
the machine often had .5M(B/W) on it.
When midicomputers came out I personally flamed at a couple of the
manufacturers suggesting that they consider putting in VM.  They at
least conceded that it could be done efficiently but they couldn't
imagine why I'd need it, given a full 20-bit address space.  When I
explained that I would have to limit the size of the symbol table at
x, they explained that I should write a software paging system.
When microcomputers came out everyone was blown away by the fact that
you could stack 300 of them in the carcass of a bombardier beetle and
have room left over for an I/O multiplexer chip so no one considered
VM.  So what do we have today?
     "Why can't VisiCalc hold more stuff in it?"
     "Damned, the editor buffer is full again and now I can't
     just add memory."
     "You can't call a Macsyma program from a PL/I program
     because they don't both fit in memory even if there is
     enough room."
In other words, everyone is bumping into the same garbage they bumped
into ten years ago except that more people are bumping into it and
this time we might all win.
As far as segmentation goes I approve completely.  Segmentation lets
PL/I call Macsyma since it enforces a set of rules which describe how
all programs have to communicate and it allows them to share an
address space (segmentation is not as important on object oriented
machines since each object can be viewed as a segment).
A very successful segmented machine is the HP41C calculator. It
provides segmentation and dynamic linking in a tangible form.  It
provides one system segment, one writable segment and four user
addable segments each of which may contain any number of named entry
points.  A key may be bound to any entry point.
When I buy a device, such as the magnetic card reader, I plug it into
one of the segment slots and suddenly I can invoke a whole series of
programs which can read and write data to this device.  If I buy the
business package I can suddenly run the TBOND function to evaluate
Treasury bills and so on.  I am often surprised that such a simple and
powerful scheme is used so rarely.
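A minimal sketch of the idea in C (the structures and names here are
invented for illustration, not the 41C's actual internals): each
plug-in segment exports a table of named entry points, and invoking a
name searches whatever segments happen to be installed -- dynamic
linking in about thirty lines.

    #include <stdio.h>
    #include <string.h>

    /* One named entry point exported by a plug-in segment. */
    struct entry {
        const char *name;
        void (*run)(void);
    };

    /* A plug-in segment: a module's table of named entry points. */
    struct segment {
        const char *label;
        const struct entry *entries;
        int count;
    };

    static void tbond(void) { printf("evaluating a Treasury bill\n"); }

    static const struct entry business_entries[] = {
        { "TBOND", tbond },
    };

    /* Four user-addable slots, as on the 41C. */
    static const struct segment *slots[4];

    /* Dynamic linking: look up a name in whatever is plugged in. */
    static void invoke(const char *name) {
        int i, j;
        for (i = 0; i < 4; i++) {
            if (slots[i] == NULL)
                continue;
            for (j = 0; j < slots[i]->count; j++) {
                if (strcmp(slots[i]->entries[j].name, name) == 0) {
                    slots[i]->entries[j].run();
                    return;
                }
            }
        }
        printf("NONEXISTENT\n");    /* nothing installed exports it */
    }

    int main(void) {
        static const struct segment business =
            { "BUSINESS", business_entries, 1 };
        slots[0] = &business;       /* plug the module into a slot */
        invoke("TBOND");            /* found via the segment's table */
        invoke("TBILL");            /* not installed: NONEXISTENT */
        return 0;
    }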
------------------------------
Date: 14 Jan 1982 08:19:14-PST
From: mo at LBL-UNIX (Mike O'Dell [system])
Subject: Another view of virtual memory
While the comments about a person's desires exceeding his machine are a
good argument for virtual memory, I would like to present a somewhat
different view.  (50 pounds of code in a 5 pound machine is the
classical party line taken by many vendors who sell machines with
virtual memory (except Burroughs).)
I view paging as primarily a physical memory management and protection
strategy.  While real memory will never be large enough for everything
you can think of, it is large enough so a bit here and there can be
sacrificed to fragmentation.  Relocating and protecting access to
things via the hardware in nice, fixed-size chunks makes things much
easier.  Anyone who has written a memory allocator knows it is MUCH
easier to write a good one for fixed-sized blocks than one for
variable requests.  Not having to scrimp and save each byte, and the
availability of the paging hardware makes managing the contents of
physical memory much easier.  It also lets you play the 50 pounds of
code game if you want, but that is not as visibly important (I realize
it is to MANY people).
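To make the fixed-versus-variable point concrete, here is a sketch of
a page-frame allocator in C (a generic free-list design, not any
particular system's): with fixed-size blocks, allocation and release
are a push and a pop, with no searching for a fit and no coalescing.

    #include <stdio.h>

    #define NFRAMES 1024            /* physical page frames managed */

    /* Fixed-size allocation reduces to a stack of free frame numbers:
       alloc is a pop, free is a push, and there is no fit-searching
       or coalescing because every block is the same size. */
    static int free_frames[NFRAMES];
    static int free_top;

    static void frames_init(void) {
        int i;
        for (i = 0; i < NFRAMES; i++)
            free_frames[i] = i;
        free_top = NFRAMES;
    }

    static int frame_alloc(void) {      /* -1 when memory is gone */
        return free_top > 0 ? free_frames[--free_top] : -1;
    }

    static void frame_free(int frame) {
        free_frames[free_top++] = frame;
    }

    int main(void) {
        frames_init();
        int f = frame_alloc();
        printf("got frame %d\n", f);
        frame_free(f);
        return 0;
    }

A variable-size allocator has to search for a fit on every request,
coalesce neighbors on every release, and still loses bytes to
external fragmentation.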
I have a more Multics-oriented view of segments - they are structuring
tools (code or data).  However, I do subscribe to the "1-1/2
dimensional" virtual memory schemes.  By 1-1/2 dimensions I mean the
bits carry from page into segment, rather than the Multics 2
dimensional scheme whereby they don't.  There are several reasons.
Unless you use gigantic addresses (like the HP scheme mentioned here
before), you can never have the right number of bits where you need
them.  Some programs have huge arrays so need very large segments,
others have many arrays, needing many segments, and still others have
both!!  While the S1 scheme of a floating boundary is clever, it is
hard to do if you aren't designing your own hardware.  Therefore, for
machines with a flat virtual address space, like the VAX (which are
vexed by a small page size), a good scheme is 16 bits of page and 16
bits of segment with carries allowed.  This allows a file in the
filesystem, or a large array, to be mapped into memory as several
contiguous segments, providing consecutive, indexable addresses.  To
prevent tragedy, simply force the following segment to be a hole which
will access trap.  If you put these firewalls between each active
segment, you do reduce the maximum number of distinct, active segments
to 2**15, but that is still quite a few (I know, Multics-ers wouldn't
be satisfied).  And if the segments are code, they can normally be
allocated adjacently with reasonable safety.
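A sketch of the addressing arithmetic in C (field widths as proposed
above; the data structures are invented for illustration): since the
segment number is simply the high-order bits of a flat address,
indexing past the end of a segment carries into the next one, and a
firewall is nothing more than a segment marked invalid.

    #include <stdint.h>
    #include <stdio.h>

    #define OFF_BITS 16    /* low 16 bits address within a segment */

    /* 1-1/2D: the segment number is just the high 16 bits of a
       flat 32-bit address, so ordinary address arithmetic carries
       out of the within-segment field into the segment number. */
    static uint32_t seg_of(uint32_t va) { return va >> OFF_BITS; }

    /* One validity flag per segment.  A firewall is an invalid
       segment left after each active one, so running off the end
       of a segment traps instead of entering a neighbor. */
    static unsigned char seg_valid[1u << 16];

    static int translate(uint32_t va) {
        if (!seg_valid[seg_of(va)]) {
            printf("access trap: segment %u is a hole\n",
                   (unsigned)seg_of(va));
            return -1;
        }
        return 0;
    }

    int main(void) {
        uint32_t last;
        seg_valid[3] = 1;             /* one active data segment */
        /* segment 4 stays invalid: the firewall behind it */
        last = ((uint32_t)3 << OFF_BITS) | 0xFFFFu;
        translate(last);              /* fine: inside segment 3 */
        translate(last + 1);          /* carries into 4 -- trap  */
        return 0;
    }

A file mapped as segments 3 through 6, with 7 left as the hole, gets
consecutive, indexable addresses and a trap waiting at the far end.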
The 1-1/2D scheme is straightforward to implement given a machine with
only demand paging, enough logical address bits (32!), and a decent
page size.  The VAX barely qualifies on this last point.  To really
use the VAX 2 gigabyte logical address space requires 8 megabytes of
physical page tables, if you don't share segment tables.  This is the
current maximum physical memory!!!!  People building pagers should go
read the studies IBM did a long time ago.  They picked 1K and 4K byte
page sizes for a good reason!!
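(The arithmetic, assuming one table entry per page: 2 gigabytes is
2^31 bytes, and at the VAX's 512-byte (2^9) pages that is 2^22
page-table entries per address space -- at a few bytes per entry,
megabytes of table, which is where the figure above comes from.  At a
4K (2^12) page size the same space needs only 2^19 entries, an eighth
as much.)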
	-Mike
------------------------------
Date: 14 January 1982 12:43-EST
From: Stavros M. Macrakis <MACRAK at MIT-MC>
Subject:  WorkS Digest V2 #5: VM
Large address space gives you uniformity of reference, as has been
pointed out, across physical configurations and for that matter
software configurations (what happens to your 'real memory' when you
need to enlarge some process's workspace?).  The penalty is presumed
to be performance.
The discussion on virtual memory so far has assumed that swap
management is independent of the application.  It would appear an
unwarranted assumption.  When performance requires it, it is normal to
tweak lower-level virtual machines in one's hierarchy.  For instance,
if it is discovered that some particular kind of array calculation is
taking a large fraction of the runtime of an important program, one
may well wish to modify the compiler or the microcode of the machine
one is running on, or for that matter buy a processor which runs that
calculation better.  Similarly, it should be possible to define
interfaces to swap management (note, for instance, the ITS paging
parameters PagAhd and PagRange which warn the system of linear sweeps)
which either define a particular regime of swapping or even allow
swapping to be fully controlled by the user process (e.g. in addition
to requesting swap-in of the faulted page, request several 'related'
pages): a page fault becomes essentially a kind of subroutine
invocation.
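As a sketch of what such an interface might look like in C (the names
below are invented for illustration -- they are not the ITS
primitives, and the stub only logs what a real pager would be asked
to do):

    #include <stdio.h>

    /* Hypothetical swap-management interface: swap_request() is
       invented for this sketch, NOT an ITS or Unix call. */
    typedef unsigned long pageno_t;

    static void swap_request(pageno_t page) {   /* queue a swap-in */
        printf("swap in page %lu\n", page);
    }

    /* A user-supplied fault handler: bring in the faulted page plus
       the 'related' pages this program knows it will touch next, so
       the page fault acts like a subroutine call on the program's
       own access pattern. */
    static void column_fault(pageno_t faulted) {
        pageno_t stride = 64;       /* pages between column elements */
        pageno_t i;
        swap_request(faulted);
        for (i = 1; i <= 3; i++)
            swap_request(faulted + i * stride);
    }

    int main(void) {
        column_fault(1000);         /* simulate one fault */
        return 0;
    }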
Demand paging with its various refinements is a sound general-purpose
method, but certainly other methods are possible when the application
warrants.
	Stavros Macrakis
------------------------------
Date: 14 Jan 1982 1102-PST
Subject: Re: big memory chips
From: BILLW at SRI-KL
I believe the largest RAM chip to date is an IBM 288K dynamic (in a
*9 organization -- presumably 32K x 9 bits, 256K of data plus a
parity bit).  There were rumors in Electronic Engineering Times of an
entirely new nonvolatile RAM technology that is alleged to be able to
put 4M bits on a chip the size of today's RAM chips using ordinary
processes.  Sort of two 5 volt chips with electron beams going in
between them, if I recall (this was a couple of months ago).  Most of
the secrets are being kept under wraps until "they can be adequately
protected".
Bill W
------------------------------
From: William "Chops" Westfield <BILLW @ SRI-KL>
Subject: more on big memories
   zurich -(dj)--nippon electric co. (nec) of japan
expects to post higher profits for the year ending march 31,
senior executive vice president m. hirota told zurich
bankers thursday. 
	:
	:
   hirota said nec has solved all the technical
complications for mass production of a 256 kilobit random
access memory (kram) circuit, which would quadruple the
memory capacity of computers and telecommunications
equipment which currently use 64 kram circuits. demand for
the 64 kram circuits is still growing and should peak in
1984 or 1985, he predicted, adding that demand for 256 kram
circuits should become significant by 1986.
	:
	:
------------------------------
End of WorkS Digest
*******************
-------
-----------------------------------------------------------------
 gopher://quux.org/ conversion by John Goerzen <jgoerzen@complete.org>
 of http://communication.ucsd.edu/A-News/
This Usenet Oldnews Archive
article may be copied and distributed freely, provided:
1. There is no money collected for the text(s) of the articles.
2. The following notice remains appended to each copy:
The Usenet Oldnews Archive: Compilation Copyright (C) 1981, 1996 
 Bruce Jones, Henry Spencer, David Wiseman.

