The Genesis of the TENET 210 – an early time-sharing system

By Chuck Runge – co-founder, VP SW Development TENET Inc.

 In 1966 I was working for the Atomic Energy Commission at Iowa State University doing interesting compiler and operating systems work, but these projects were looked upon by many as “one of a kind”. This was an era when most systems architects were hardware guys who knew how to do logic design but had little understanding of software or the users of the systems they designed. I wanted to address these two issues by learning more about the hardware design process and building high volume commercial products, but this could not be done in a university environment.

Rex Rice, a senior manager from Fairchild R&D and inventor of the dual in-line package (DIP) for semiconductor parts, was on campus recruiting PhD Electrical Engineers to work on his project to “explore the boundary between hardware and software”. One of the EE doctoral advisors I worked with knew Rex well and told him I would be a better recruit since my software experience more than offset my lack of a PhD in EE. Rex agreed and I went to Palo Alto in March 1966 for an interview. The R&D lab was in the Palo Alto foothills, had a very interesting culture and was managed by Gordon Moore, future Intel co-founder and author of Moore’s Law. Moore was an outstanding scientist, but seemed uncomfortable interviewing a software guy. In June I moved to Palo Alto to become the first systems programmer to work on the design of Symbol IIR, a unique computer system to support time-sharing.

My project focus was on the native language of the machine (PL/1), its translation, its internal “object code” representation, and the memory allocation and management to support variable-length fields and the complex data structures they formed. As I learned more about the project I became convinced it would never lead to a commercial product - mostly due to the misguided decree by Rex that no firmware or software could be used in Symbol IIR’s implementation, a decision unknown to me until I joined the project. One of my project colleagues, Dave Masters, was equally frustrated by the direction of the project and we discussed it frequently. As our discussions proceeded, it became apparent we had similar goals and would make a great team, so we decided to pursue those goals together, even though we would have to leave the project to do so.

Besides Moore, two other members of the Shockley Eight team (see www.businessweek.com/pdfs/fairkid.pdf) comprised the Fairchild brain trust. Bob Noyce (Intel co-founder) ran all of Fairchild Semiconductor and Vic Grinich ran Fairchild Instrumentation (later renamed Fairchild Systems Technology), a division that built semiconductor test equipment. With Dave’s connection to Noyce we were able to transfer from R&D to Fairchild Instrumentation to start a skunk works group to build a 24-bit computer. This began a 16-year relationship during which we would combine our hardware and software talents to:

o    Create the computer and software (including FACTOR, a high-level test language) for Fairchild’s Sentry - one of the first “true software controlled [semiconductor test] systems” that combined parametric and functional testing.

o    Co-found TENET to create the TENET 210 general-purpose time-sharing system.

o    Co-found National Semiconductor’s Datachecker Division to create its market leading supermarket scanning Point Of Sale system.

o    Create some of the market’s first microprocessor-based UNIX systems for Onyx + IMI.

The Fairchild FST-1 was a general-purpose 24-bit computer with a full peripheral complement, an operating system and the software tools for use as an embedded controller in the company’s semiconductor test systems and later for their factory automation projects. This computer system became the basis of the very successful Sentry family of semiconductor testers, one of the industry’s first testers to combine parametric and functional testing via “true software controlled systems”. This family introduced FACTOR, a high-level test language I defined as an alternative to ATLAS, a large, complex industry standard; FACTOR became a de facto semiconductor standard other vendors had to support in various ways. Some literature on this pioneering effort is on the Chip History web site, which as of October 2007 could be found at https://www.chiphistory.org/product_content/lm_fairchild_sentry-400_1970_intro.htm . The FST-1 had all the features of a typical mini-computer, but Fairchild only saw it as an embedded system, and while we were moving closer to our goal we were still short of it.

Time-sharing was still evolving as a viable commercial capability. There were dedicated low-end systems, which were generally interpretive, single language, and built on small platforms. The high-end systems were built on larger platforms and accessed through service bureaus. Most of these computer systems were not designed with time-sharing in mind and had deficiencies for that application. Using service bureaus to access time-share services made budget control difficult since storage and connect-time charges were almost impossible to forecast, and trusting corporate data to a service bureau was risky with the software security of that era.

We thought there was an opportunity to design a computer system to meet the requirements of general-purpose time-sharing, provide mainframe power with a price point of a super-mini and, for better cost-control, target it toward in-house use.  We envisioned creating larger systems by tying multiple systems together to create a scalable networked multi-processor system. These general ideas were the basis for starting TENET, Inc. UNIX was also being developed at Bell Labs in 1969, but we knew nothing about it since little, if anything, of it was known to the general public at that time.

With these thoughts Dave and I left Fairchild in 1968 to start our own company. I couldn’t afford to be without an income for very long and had to borrow the money as my contribution to the TENET funding. We did not know how long it would take, if ever, to get the funding, so this could have been a hardship. I was the only one at Fairchild who had compiler experience and my departure was a problem for the Sentry project. Bob Schreiner (a senior Fairchild exec who was the first customer for the tester) solved that problem and my financial one by offering me a consulting project to design the FACTOR compiler and an IBM 2314-compatible disk and file interface. To avoid profiting by resigning a critical position and coming back as a consultant, I was paid a fee comparable to my salary with the understanding that if we were unable to get financing I could have my job back. I worked on the Fairchild project while working full-time on TENET and six weeks later, for the princely sum of around $2,700, delivered a specification and design so detailed that two programmers unfamiliar with compilers could blindly translate it to working code.

The business plan for TENET was developed in the summer and fall of 1968 and the company launched in early 1969. One issue that is always a headache for a start-up is finding names for the company and its products. As we worked on the business plan there were many discussions about the name of the company. Some thought Mosaic Computer, with a single computer being a building block of a larger entity, captured the essence of this idea, but it initially lost out to Central Data Systems (CDS), which contained two of the popular buzzwords of the day. On further thought we concluded CDS was too common and not unique enough. TENET was picked because of the meaning of the word and the suggestion by some that it looked great in print.

Tom Bay, who had been the General Manager of Fairchild Semiconductor, made a profitable early investment in Data General, where he met Fred Adler, a NY trial attorney who was also an early DG investor. Tom introduced us to Adler and we underwent an “in your face” inquisition about our understanding of what we were undertaking and our confidence in delivering on our claims. Once we met with his approval we were able to get the $2+ million we needed to build two prototypes. We thought this level of funding would allow us to retain more of the company and that additional funding for the production phase could be easily obtained when we had working machines, but this was a decision we would later regret. With this funding we set off to find a building and join four other start-ups in addressing the time-sharing market. Many other Fairchild alumni, including Intel’s Noyce and Moore, started companies in this period. Some of them, Cloyd Marvin of Four-Phase Systems and Jerry Sanders of AMD to name two, used our facilities, typewriters and copier to work on their business plans.

A bit of sports trivia - we were working on the business plan at Tom Bay’s beach house in Aptos during the 1968 Summer Olympics. I needed a break and went to the kitchen for a beer and came back to the news that Bob Beamon had just long jumped 29’ 2½” to break the world record by more than 21” – an unbelievable amount. Since this was an event I was familiar with I thought they were kidding, but such was not the case, and there never would be a world record in the 28-foot range since it went directly from 27’ to 29’, almost 4 feet better than my best.

While we were working on the business plan Dave and I started some of the system’s design work. Dave defined the underlying physical architecture/organization of the computer, which was based on:

o    A system architecture centered around a "data exchange" interconnecting core memory modules, central processing units (CPUs), and input/output processors (IOPs) allowing four simultaneous memory channels and DMA to all peripheral controllers

o    Automatic interleaving of memory modules on dual 32-bit memory buses providing 20 MB/sec memory bandwidth

o    System expandability by connecting additional memory modules, IOPs, or CPUs to the data exchange

o    CPU equipped with eight general registers and eight control registers

o    I/O system providing 20 bidirectional DMA channels with access priority, and 20 levels of nested interrupts, each level expandable to 16 sublevels

o    To achieve significant cost and time-to-market advantages, the computer would be implemented primarily with standard off-the-shelf TTL logic devices.

I started the specification and design of the software, primarily the Operating System (OS). We were determined to design the TENET 210 time-sharing computer from the top down with the user and software driving its requirements, a novel thought in that era. First, we spent a lot of time thinking about the user experience we would provide. As this solidified we shifted our attention to the operating system, tools, and languages that were needed to support that experience. When these were anchored we considered the logical architecture of the computer, i.e. the architecture of the computer as seen by the systems programmers. This included the CPU register layout and addressing, the instruction set, the memory management system, the trap system, the interrupt system, the I/O subsystems, etc. to support the software.

This progressed significantly while we were raising the money, but it accelerated rapidly when we hired our first staff. Jack Morris, who worked on the Fairchild Sentry tester, was hired to do the CPU and Memory Management Unit (MMU) logic. Jack had a good understanding of software and its importance in the success of the system and worked on the underlying physical architecture of the computer. Larry Krummel, fresh from his Stanford master’s, was hired for the TENET BASIC subsystem. This was a great team to work with – they were knowledgeable, creative, not subject to the Not Invented Here (NIH) syndrome, and worked well together. As the design of the OS and the hardware logical/physical architecture proceeded, we produced a 180-page specification and a preliminary design of the OS and the services it provided. Some of the items covered in the specification included:

o    Disk management – format of disks, space allocation, defective sector management, file management, boot process, etc.

o    System build, configuration, and tuning parameters

o    All OS modules, their interfaces and interconnections

o    All major tables and data structures required of the modules

o    The file system format

o    Initial algorithms for resource allocation

o    Initial algorithms for swapping and scheduling

o    The Applications Programming Interface for all OS services

 

This served us well as we continued staffing since we did not have to deal with major philosophical differences about the architecture with new recruits – we had only to fine-tune the specification and design in the document. This specification and the core team provided a level of efficiency and productivity that allowed us to produce the OS, EXEC, meta-assembler, FORTRAN IV, TENET BASIC, Editor and tools with a staff of about 8 full-time equivalents. Our hardware team was comparably sized and equally productive. Few believed us when we said we would produce the TENET 210 within 24 months with such a small budget and staffing level. In January of 1971 TENET dispelled any doubts when it held a “musical happening” attended by an eclectic group including the press, musicians, academics, business people (including one of our directors, Bob Noyce), and students. The San Mateo East West Music Ensemble and a San Francisco Symphony solo pianist used the TENET 210 time-sharing system and a primitive music synthesizer we developed for the occasion to play various Eastern and Western songs and demonstrate non-scientific applications of the computer.

One of the more significant OS design challenges was reducing program swapping, which was a major factor in determining how many users could be supported by the system. It was common for time-sharing systems of that era to use a RAD, a head-per-track disk, to handle the swapping workload. I did some pencil and paper modeling and concluded we could do program swapping on the same moving-head disks used for file and program storage and save the cost of a RAD. But to do that we would have to address a multitude of hardware and software issues such as:

o    A rich instruction set for compact systems and application code.

o    Identification of modified read/write virtual pages by the MMU.

o    Assigning real pages to read/write virtual data pages only when referenced - also reduced demands on real memory.

o    Load leveling by swapping to the least busy device

o    Maximizing disk throughput by moving programs and I/O buffers occupying multiple discontinuous blocks of real memory to disk in a single write operation and reversing the process with a single disk read operation.

o    The disk interface provided the radial position of the disk heads so the disk driver could minimize rotational delays.

o    The disk driver considered all of its disk requests, independent of the requesting task, and sorted the read/write lists to minimize wasted disk rotation time (a simple ordering of this kind is sketched after this list).

o    The file system handled large files but it also handled the multitude of small files without incurring excessive disk access penalties.

o    A class of executable programs, such as the compilers, to which applications could dynamically link and which could be shared among applications - they also reduced demands on real memory.

o    Dynamically linkable executables, upon initial loading, could be marked as ‘modified’, causing them to be swapped out once, which moved them from fragmented, discontinuous file system space to sequential swap space.

o    To balance CPU usage against response time, the swapper/scheduler algorithms were biased toward providing uniform response times rather than the fastest possible response time, which would also reduce the swapping imposed by compute intensive tasks.

o    Intelligent serial I/O drivers provided editing commands to copy text from the failed command line to a new command line to save retyping the entire command line. The requesting task was only awakened when the new line was ready for processing, thereby saving many context switches and program swaps common on other systems, which handled the editing in the application.
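To make the rotational-delay scheduling concrete, here is a minimal C sketch of ordering pending requests on a track by their rotational distance from the head position reported by the disk interface. It is only an illustration of the idea: the names, the fixed sectors-per-track geometry and the use of a library sort are assumptions, not the actual TENET driver code.

/* Sketch: order pending disk requests by rotational distance from the
 * current head position so the least rotation time is wasted. */
#include <stdio.h>
#include <stdlib.h>

#define SECTORS_PER_TRACK 20        /* assumed geometry, illustration only */

struct request { int sector; };     /* target sector on the current track */

static int head_sector;             /* rotational position reported by the disk interface */

/* Rotational distance, in sectors, from the head to a request's sector. */
static int rot_distance(int sector)
{
    return (sector - head_sector + SECTORS_PER_TRACK) % SECTORS_PER_TRACK;
}

static int by_rot_distance(const void *a, const void *b)
{
    return rot_distance(((const struct request *)a)->sector)
         - rot_distance(((const struct request *)b)->sector);
}

int main(void)
{
    struct request queue[] = { {3}, {17}, {9}, {12} };
    int n = sizeof queue / sizeof queue[0];

    head_sector = 10;               /* heads are just about to pass sector 10 */

    qsort(queue, n, sizeof queue[0], by_rot_distance);   /* service order */

    for (int i = 0; i < n; i++)
        printf("service sector %d (%d sectors of rotation away)\n",
               queue[i].sector, rot_distance(queue[i].sector));
    return 0;
}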

To build the time-sharing software we first created a Free Standing Disk Operating System (FSDOS), a very simple monitor that provided a file system and access to the meta-assembler, linker, debugger, etc. As soon as the time-sharing system could support the development staff, we used FSDOS only for “free standing” tools like the system’s diagnostics.

All of the production language processors had the compiler, editor, linker, and debugger tightly integrated into a single subsystem. Language statements could either be saved as part of a larger program or executed separately and immediately. Immediate execution was great for debugging, and the syntax of the debugging commands was consistent with the language being debugged, thereby saving the user from learning another “language”. The compilers generated space- and time-optimized native code by using local code optimization techniques such as common sub-expression elimination, multiple branch elimination, redundant load/store elimination, etc. (common sub-expression elimination is sketched after the list of extensions below). Considering TENET BASIC was highly extended from the original BASIC, the compilers produced good code for that era. The extensions included:

o    Explicit data declarations

o    Rich data types - Integer, Real, Double, Character, Complex, Double Complex

o    Built-in character manipulation functions

o    Mixed mode arithmetic

o    N-dimensional arrays

o    Identifiers up to 16 characters in length

o    Multi-line subroutines

o    Multi-line functions

o    Random and sequential file access
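As an illustration of the local optimizations mentioned above, here is a small sketch of what common sub-expression elimination effectively does. It is written in C only for compactness rather than in TENET BASIC, and the function names are hypothetical.

/* Both functions compute the same result.  The second is what a common
 * sub-expression elimination pass effectively produces: the repeated
 * sub-expression (a + b) is evaluated once and reused. */
#include <stdio.h>

double before_cse(double a, double b, double c, double d)
{
    double x = (a + b) * c;
    double y = (a + b) / d;     /* (a + b) evaluated a second time */
    return x + y;
}

double after_cse(double a, double b, double c, double d)
{
    double t = a + b;           /* common sub-expression computed once */
    double x = t * c;
    double y = t / d;
    return x + y;
}

int main(void)
{
    printf("%g %g\n", before_cse(1, 2, 3, 4), after_cse(1, 2, 3, 4));
    return 0;
}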

 

We debated what tools to use to build the software, primarily whether to use the meta-assembler or a high-level implementation language, and decided against a high-level language based on previous experiences. A few years earlier, while working for the AEC at Iowa State, I was involved in building a real-time, multi-processor control system for experiments at the school’s research reactor. We used an SDS 910 interfaced to an IBM 1401 that was used only as an I/O controller for disk, card reader, card punch and printer. We developed our own real-time, multi-tasking OS to control this primitive multi-processor system. Having multiple scientists writing real-time control programs in assembly language on an unprotected system was out of the question, but there were no decent high-level languages available so we did what was common for a university – we invented our own.

Dave McFarland (later of Ryan-McFarland Corp) and I designed and implemented a high-level language, Task 65, based on Algol 60 for this purpose. It was also a reasonable implementation language for that era and we wrote much of the OS and the entire compiler in it. Features of Task 65 were:

o    Asynchronous procedure calls so you could create concurrent, parallel processes

o    New data types for character/string processing

o    Built-in functions for manipulating hardware interfaces for real-time control

o    Built-in functions for manipulating the 910’s hardware registers, character manipulation, and bit manipulation

o    Special blocks for interrupt handlers and timed events tied to interval timers

o    A file system allowing random and sequential access

o    Software memory protection by doing run-time checks on array indexes

o    A highly segmented, slow compiler so you could run real-time tasks in the foreground and do edits and compiles in the background with only 12K 24-bit words. The segmentation was based on the transition matrix technique for the compiler’s implementation.

o    Etc.

 

This was an ambitious project, as the compiler technologies of that era weren’t well developed, especially for recursive, dynamic languages like Algol, so we suffered from code inefficiencies. We wanted to do the compiler in one pass, but ran into some challenges we were able to resolve only by passing them on to the linker. It was the practice of the AEC to have annual meetings so the various labs could come together and share project activities. While we were working on the compiler, the program for an upcoming meeting listed a paper by Dr. Herbert Kanner titled “A One Pass Algol Translator”, which I thought might be a solution to our problem, so we went to the meeting. Dr. Kanner’s introductory comments squashed that idea rather quickly when he said, “As the result of a discovery made last night by one of my graduate students, the title of this paper has been changed to ‘A One and One-Half Pass Algol Translator.’” Our language was far more complex than standard Algol so we left the conference with a good feeling about what we had done.

When Honeywell heard a paper on Task 65, they approached us about adapting it for one of their real-time computer lines in their CCC group. It was going to take Dave and me a while to complete the compiler since we would write portions of the compiler in Task 65 and then translate the Task 65 code to assembly language by hand – the tools we would have liked to have for this were not available. To speed up this process Honeywell hired two consultants from Phillip Hankins in Boston to hand-code and test our Task 65 code. When Honeywell adapted it to their products they changed the language base from Algol to FORTRAN and a lot got lost in the translation.

Back to TENET and the point at hand - we didn’t have the charter, time or budget to develop a high-level implementation language and its compiler technology, so we decided to defer one to a future generation. If we had decided on a high-level implementation language, a lot of the instruction set complexity might not have been present, especially if the language were machine independent as opposed to optimized for the TENET 210, but that is another lengthy argument. Since magnetic core memory, the memory technology of the day, was expensive, and program swapping costly, we were greatly concerned about minimizing both, and we focused on the design of an instruction set to support systems programmers writing in assembly language.

The 210’s instruction set was debated at length from several points of view. In the end we decided to optimize it for the systems programmers. We were concerned about both space and time efficiencies, and the result was an instruction set that produced compact code and good execution times, and contributed to minimizing demands on real memory and reducing the swapping load. Features of the TENET 210’s instruction set included:

o    Programmed INstructions (PIN) allowed one-argument function/subroutine calls in four bytes. They were used by the systems tools - linkers, debuggers, editors, and compilers - to define the instruction set of an idealized computer optimized for the implementation of the tool.

o    Packing and unpacking variable length fields in words

o    Testing/locating/setting/resetting bits for managing allocation maps of systems resources

o    Immediate instructions whereby the address field was data, not the data’s address

o    Gathering and scattering blocks of memory

o    Memory searches

o    Conversion of virtual addresses to real addresses

o    Rich condition codes and conditional branching

o    Pre- and post-indexing for managing data structures

o    General registers for arithmetic, logical and bit operations as well as indexing

o    Floating point object code could run unchanged on machines with or without the floating-point hardware option

This was a complex instruction set, which added more hardware cost; but it also allowed us to produce extremely compact code, very good performance and a product that competed well. We were also certain that the increased hardware cost was a minor short-term problem as semiconductor prices were certainly going to decrease.

We responded to an RFP from the State of California for a system to support the 800+ Civil Engineers around the state designing roads and bridges. We were competing with IBM, GE, DEC, HP, and others with the TENET 210. The RFP required us, at an unspecified time, to demonstrate the state’s benchmark tests on the TENET 210 - 32 simultaneous users running production Civil Engineering programs and providing the response times they required. They gave us very little warning and only called the day before they wanted to run the benchmarks. Up to that time we had only run about 8 simultaneous users and certainly didn’t have enough terminals for the 32 they required. We borrowed the remaining terminals from friends, but when we logged on the 25th user the system would crash. The developers were unable to determine why, and we left late in the night thinking we would simply tell them we were unable to get enough terminals for the test and hope to buy some time until we could fix the problem.

While I had not written any of the code I was extremely knowledgeable about the instruction set, had done a very detailed design of the Operating System and knew its tables and logic intimately. Around 3 AM, while I was not sleeping well, it occurred to me there was a boundary problem with the allocation of real memory to virtual memory that was probably causing the problem. We used a Test and Reset Bit instruction for this purpose, which provided the bit address in a double word (64 bits) of the first “1” bit it found, reduced the count by one and set the bit to “0”. The first 8 bits of the double word were a count of pages available for allocation, the next 24 bits of the first word were the first 24 real pages and the second word the next 32 real pages – hence a problem going from page 24 to 25.
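To make that layout concrete, here is a minimal C sketch of the allocation double word as described above. The names, the 1-based page numbering and the behavior when no pages are free are assumptions, and the sketch does not reproduce the actual defect; it only shows the word boundary, between the 24th and 25th page bits, where it was hiding.

/* Software illustration of the page-allocation double word (64 bits):
 *   word0: top 8 bits  = count of pages available for allocation,
 *          low 24 bits = real pages 1..24 (page 1 at the most significant of those)
 *   word1: 32 bits     = real pages 25..56
 * This mimics the effect of the Test and Reset Bit usage described above. */
#include <stdint.h>
#include <stdio.h>

struct page_map {
    uint32_t word0;
    uint32_t word1;
};

/* Find the first free ("1") page bit, clear it, decrement the count and
 * return the page number; return 0 if no pages are free (an assumption). */
int alloc_page(struct page_map *m)
{
    unsigned count = (m->word0 >> 24) & 0xFFu;
    if (count == 0)
        return 0;

    for (int page = 1; page <= 56; page++) {
        uint32_t *word = (page <= 24) ? &m->word0 : &m->word1;
        int bit = (page <= 24) ? (24 - page)          /* pages 1..24 in word0  */
                               : (31 - (page - 25));  /* pages 25..56 in word1 */
        if (*word & (1u << bit)) {
            *word &= ~(1u << bit);                    /* mark the page in use  */
            m->word0 = ((count - 1) << 24)            /* decrement the count   */
                     | (m->word0 & 0x00FFFFFFu);
            return page;
        }
    }
    return 0;
}

int main(void)
{
    /* Two pages free: page 24 (last bit of word0) and page 25 (first bit of
     * word1), i.e. exactly the hand-off across the word boundary. */
    struct page_map m = { (2u << 24) | 0x1u, 0x80000000u };
    printf("%d\n", alloc_page(&m));   /* 24 */
    printf("%d\n", alloc_page(&m));   /* 25 */
    printf("%d\n", alloc_page(&m));   /* 0, map exhausted */
    return 0;
}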

I went to the office immediately, found the defective code, patched it and could successfully log on more than 25 users; but then I had a problem when logging off the 25th user. This allowed us to run the tests, and we received encouraging comments from the evaluation team that led us to believe we had actually done quite well. We got the sale based on features and cost performance, which were so compelling the State bought the machine even though we were in Chapter 11 bankruptcy proceedings. The State hoped the sale would help us secure the financing to sustain the company, but that did not happen and TENET closed its doors with a couple of the staff remaining, sustained by a hardware/software maintenance contract with the State.

In its final days the machine was evaluated thoroughly by Tymshare (see http://www.computerhistory.org/corphist/view.php?s=select&cid=1) and Hewlett Packard. The former considered using the TENET 210 as a replacement for its aging SDS 940s and the latter considered it as a potential solution to their failing HP3000, but with the declining state of the company it was too late for either of these to happen; it is highly unlikely the NIH factor at HP would have allowed a positive outcome even if more time were available. After I left TENET the head of software at HP approached me to help fix the 3000’s software. Its architectural “elegance” was not enough to offset its very poor performance, which fell far short of market expectations; I later gained first-hand knowledge of this subject when we benchmarked it with the National POS cross-development tools and found it unusable as a cross-development system. Having just come off a similar intense effort, the thought of another kamikaze activity wasn’t very appealing.

The TENET 210 remained in operation for 10+ years. Even though the State had bought the second prototype for spare parts, they became concerned about the trouble they would have if the 210 died completely. I met Woody Hobbs in the final days of TENET and we worked on a few things together over the period. We submitted a proposal to the State that would have allowed them to run all of their TENET BASIC programs on an IBM mainframe. I started the design of a TENET BASIC-compatible system running under Woody’s Mentext product; it was interesting generating code to work under the memory management system of Mentext, which did its own memory management instead of using that of IBM’s VS Operating System. The State decided our proposed solution still made them vulnerable to a single vendor and migrated all of the TENET BASIC to an industry standard language supported by multiple vendors. This most likely saved me a lot of work since 100% compatibility is always hard to achieve. While the syntax of the language was well defined, much of the semantics was buried in TENET compiler code and some nuances probably would have been discovered piecemeal, the hard way, over some period of time.

Steve Wozniak, co-founder of Apple, worked for me at TENET as a diagnostic programmer. In his book “iWoz” he talks about how he invented the PC, started Apple and had fun doing it. On pages 84-86, he wrote about TENET and the TENET 210, paraphrased here. “My first non-student job was for TENET Inc. in Sunnyvale. During my second year of college I went looking for a place that might have a Data General Nova minicomputer that I could look at. This is the computer I'd told my dad I was going to buy someday. My friend, Allen Baum, and I went to a place in Sunnyvale to look at one, but walked in the wrong door and saw a larger computer in a big display room. This was a shock to me to see a computer actually being designed and built. We were impressed and asked for applications and we were both hired as programmers. I stayed on working for most of a year. I got to see some incredible computer hardware and software although I didn’t think much of the architecture inside. Economic times were bad and the company folded. I'd taken a year off of college and now had enough money to go to Berkeley the following year.”

Shortly after iWoz came out I got a call from Bill Bridge, who worked for me at TENET, asking if I had read it. I had not, so I bought a copy, read it and sent Steve a note saying even though I was a principal architect there were things I didn’t like too, and asking him to have lunch to discuss. It became clear in our lunch discussions that from his limited vantage point as a diagnostic programmer there was a lot about the computer and its history he didn’t fully understand. During lunch he said he frequently mentioned TENET in his talks. So I decided to write a note for him about TENET to give him a better understanding of the machine and its history in order to complete the big picture for him.

This got rather lengthy, and to make sure I wasn’t rewriting history I sent a copy to Jack Morris, who did the logic design for the CPU and memory management unit. Jack had no problems with the piece, said it was a really fun walk down memory lane and asked if he could send it to some colleagues at Stanford who had an interest in Silicon Valley history. Shortly after they read it I got a call from them telling me how much they enjoyed reading the piece and asking if they could send it to some colleagues at the IEEE Annals. Shortly after the IEEE Annals staff got the article they called me and said they would like to publish it in an upcoming edition of their magazine, but it would have to be shortened to meet their guidelines. It was published in 2008.

Steve grew up less than 1/2 mile from us. In 2014 his mother died and I sent him a note of condolences. Here is his response to my condolences email as it relates to our lunch about TENET that resulted in writing the IEEE Annals piece.

 

Thanks. My mom wasn't some social mom around Sunnyvale. She worked hard and typed all the newsletters and did so much organizing for the Community Players and other such things. I think I got some hard work ethics from her as well as my dad.

 

I am getting quite busy right now but will remember to find time for a lunch with you. I have continually regretted some comments I made about the TENET machine. You convinced me of the good thinking that related software to the machine architecture in ways that I didn't see when I worked there.

 

best,

Woz

 

Working with new hardware and software is always a challenge and many interesting bugs are encountered. Besides the “page 24 to page 25 transition” described earlier, here are some that contributed more than a few gray and lost hairs:

o    Wozniak would “exercise” the machine by running programs that calculated Pi and other irrational numbers to however many digits could be calculated overnight. Occasional errors in some of the digits led to identifying a “cross-talk” problem with the back panel wiring of the CPU.

o    We thought we were having memory problems, but the memory diagnostic wasn’t showing any memory errors. Ultimately we discovered that a memory bit was being reset by the failure of an instruction recovery/continuation, after a page trap, that lost its memory mapping state history – it would reset a bit in the real address space instead of the virtual address space leading us to believe we had a memory problem when in fact the memory was fine.

o    The Memory Management Unit (MMU) was an address translation unit implemented with Intel’s brand new bipolar memory devices. To aid problem resolution of new software running on new hardware, the OS software took a very defensive position and used many of the hardware’s testability features. For example, after loading the MMU it would read the contents back to verify it loaded properly (this write-then-verify step is sketched after this list); this proved to be a good decision and showed a pattern sensitivity problem in the Intel devices. Bob Noyce came to witness the evidence and when he saw it muttered something like “By God, you’re right!”

o    When the system could barely support time-sharing, I took a terminal home so I could check on software progress on weekends and evenings without being away from the family. Occasionally I would enter an EXEC command, but the command executed was not the one I entered. A command to list a directory could show up as a delete directory – not a fun experience. This never happened when I was the only user on the system, and the frequency of occurrence increased with the number of users. An impromptu design review showed the software guy working on this part of the system was confused about the Context Page, which was a virtual page unique to each task/user. He had implemented it as a single real page, so when my task got control it executed the last command entered by anyone on the system. Fortunately most of the time this simply resulted in a syntax error message, but not always.
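The defensive MMU load mentioned two items above can be sketched in a few lines of C. The register access is faked with an ordinary array and the table size is an assumption; the point is only the write-then-read-back-and-compare pattern.

/* Sketch of a defensive MMU load: write the translation entries, then read
 * every one back and compare.  On the real machine the map would be a set of
 * hardware registers, not this stand-in array. */
#include <stdint.h>
#include <stdio.h>

#define MMU_ENTRIES 64                           /* assumed table size */

static volatile uint32_t mmu_map[MMU_ENTRIES];   /* stand-in for the MMU registers */

/* Returns the index of the first mismatch, or -1 if the load verified cleanly. */
int mmu_load_and_verify(const uint32_t *table)
{
    for (int i = 0; i < MMU_ENTRIES; i++)
        mmu_map[i] = table[i];                   /* load the mapping entries */

    for (int i = 0; i < MMU_ENTRIES; i++)
        if (mmu_map[i] != table[i])              /* readback catches a bad cell */
            return i;
    return -1;
}

int main(void)
{
    uint32_t table[MMU_ENTRIES] = { 0 };         /* any test pattern would do */
    printf("first mismatch: %d\n", mmu_load_and_verify(table));
    return 0;
}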

 

Many of the manuals for the TENET 210 time-sharing system can be found at http://bitsavers.org/pdf/tenet/ . While many may consider them primitive by today’s standards, they were quite advanced in that era and most software people we interviewed from major computer manufacturers loved the hardware and the operating system. That also included the brain trust of Xerox Data Systems’ Universal Time-sharing System (UTS), whom Adler brought in early in the schedule for due diligence on our operating system design. I was apprehensive about this meeting, not because I thought they would find major flaws, but because there might have been philosophical differences that would be counterproductive and confuse Adler. This was an unfounded concern and we had a great meeting where we got a few good ideas for minor tuning of our design. They actually got the better of it and left with even more good ideas from us for their system. The meeting was a success if for no other reason than shifting Adler’s attention from technology to sales.

TENET was a technical success, but failed as a business for a variety of reasons:

o    We ran out of money as a recession set in – the result of only taking enough money to build two prototypes.

o    It was early for the in-house time-sharing market although this should not have been fatal – this market didn’t really take off until years later.

o    Some of the management team’s business experience was not directly relevant to ours.

o    Some of them did not come to understand our technology and market; e.g. our first “salesman” came from IBM and was an order taker, not a salesman.

o    Some of the strong personalities on the team proved disruptive, which cost us time and money.

Our first building was an old Lockheed building in an industrial area. It worked well until we hired our first salesman, who immediately saw the building as an excuse. We then moved to a new building, but that expense would contribute to our downfall. An article in Business Week about Fred Adler and his Data General experience contained references indicating the move was a major factor in losing his support.

There were four significant start-ups of that era that addressed the time-sharing market. The brain trust of Cal Berkeley’s Project Genie (a time-sharing system built on the SDS 940, a 930 with memory mapping hardware added, and commercialized by Tymshare Inc.) was one of them, and another was started by Tymshare alumni, but neither of them was able to leverage their previous success. TENET was the only one of the four that succeeded in building and selling a product, which in TENET’s case was a medium to large-scale, general-purpose time-sharing system.

With the demise of TENET it was time to move on to another challenge and this came from National Semiconductor. Charlie Sporck, National CEO, was one of our TENET investors and wanted to integrate his jelly bean semiconductor devices into systems to improve margins. He didn’t have a specific goal in mind, but with the assurance we could get into the systems business and have a significant say in what that business might be, Dave and I joined National. After reviewing their microprocessor technology and doing some work defining instruction sets and building basic development tools we were split off into a separate division to build a supermarket scanning Point of Sale system (Datachecker) to compete with the likes of NCR, IBM and another 8 vendors. This was a gutsy move for a rapidly growing semiconductor company doing around $66M in sales, but after 10 years Datachecker was a survivor, profitable, and shared the supermarket scanning market equally with NCR and IBM. How it managed to do this would be an interesting case study.

The POS market lacked the excitement of UNIX, the PC market, the Internet and other emerging technologies, and it was again time for me to move on. Several years later I found myself consulting to Fujitsu Retail Systems in San Diego – they wanted to determine if there was an opportunity for them to expand their retail POS systems into supermarkets. Fujitsu had invested in ICL (an English computer and POS company) as a way of getting their toe into the EU, and ICL ended up buying Datachecker. National had come full circle and returned to being a pure semiconductor company – they then bought Fairchild, where it all started, before being bought by Texas Instruments.

 

Fini!