Artificial Intelligence Bio for Joseph Zentner

Written on 4/2/2025

University of Utah

My journey in the domain of AI began around 1984.  In 1982, I'd finished a degree in History (minor in German).  With our first child on the way, I'd come to realize that my only options for using a history degree were to become a middle school or high school teacher, or to continue on to a PhD so I could teach history at a college or university.  With that stark realization, I looked at other degrees and settled on Computer Science.  The problem was that there were 1,600 applicants to the program and only 60 slots.  Long story short, I got in.  Most of the 1,600 were tire kickers.

By 1984, I'd been in the Computer Science program for two years and had grown tired of it.  The discrete structures class, the database fundamentals class, and the software engineering class were really, really boring.  In the software engineering class, the professor gave us a group assignment.  When we got the graded assignment back, it had earned a C.  However, there were no additional markings on it – just a C.  We took it to the professor and asked him what we had done wrong.  He thumbed through our work and simply said, "This is C work."  No feedback beyond that.  Note: there is a reason for this rambling, which will become apparent later.

In the spring of 1984, one of my professors was a fellow named Bob Kessler.  He was an excellent instructor.  I was in his AI class.  One of the things he had us do was build upon work done earlier in the 80s by a researcher in New Zealand – I forget his name.  The program we had to write was fairly simple.  It would read in a text file of any length and create a specialized dictionary of all the words from the text file.  As it encountered each word, the program would perform a lookup to see if the word was already contained in the specialized dictionary; if not, the word would be added.  If the word WAS already in the specialized dictionary, the magic would start to happen.  The magic: the program would peek at the next word in the text file and add it to the current word's dictionary entry as a next word.  Ultimately, the specialized dictionary would contain every word from the file, along with the frequency of every word that followed it.  It went several words deep (i.e., next word, next next word, next next next word).  Armed with this specialized dictionary, the program would then prompt the user to type in two or three words to use as a starting point.  Then, based on probabilities from the specialized dictionary, a new story would be generated.
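
For the curious, here is a minimal sketch of that idea in Python, written today from memory rather than taken from the original assignment; the two-word depth, the file name, and the function names are mine, not the original program's.

    import random
    from collections import defaultdict

    def build_dictionary(text, depth=2):
        """Map each `depth`-word prefix to the words that follow it,
        counting how often each follower appears."""
        words = text.split()
        table = defaultdict(lambda: defaultdict(int))
        for i in range(len(words) - depth):
            prefix = tuple(words[i:i + depth])
            table[prefix][words[i + depth]] += 1
        return table

    def generate(table, seed, length=50):
        """Start from a seed and pick each next word in proportion to how
        often it followed the preceding words in the source text."""
        out = list(seed)
        depth = len(seed)
        for _ in range(length):
            followers = table.get(tuple(out[-depth:]))
            if not followers:
                break                                 # prefix never seen; stop
            choices, weights = zip(*followers.items())
            out.append(random.choices(choices, weights=weights)[0])
        return " ".join(out)

    table = build_dictionary(open("source.txt").read(), depth=2)
    print(generate(table, seed=("Once", "upon")))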

This, incidentally, is similar to how Large Language Models work today.  The biggest differences are 1) billions of documents are scanned rather than just one, and 2) rather than a simple dictionary, neural networks are used to store the information about words and phrases.  Also, many Large Language Models apply filters when reading in new text; among other things, the filters make sure ethical rules are followed before allowing the input to continue.  Exactly what those ethical rules are can be a mystery, especially given that ethical principles can be shown to evolve (or devolve) over time.  What is truth?  Hard to say these days.

Another very interesting project was one I did in the summer of 1984 with a good friend named Eric.  It was an independent study class directed by a professor named Tom Henderson.  His intent was to have us design and build a computer program to automate the board game Third Reich (no, we were not Nazis).  This board game is extremely complicated – it takes several hours just to set up the board, then another 40 hours or so to play.  It revolves around battles in Europe and North Africa between the countries and units of the Allied Powers and Germany.  At the start of the war, Germany is very strong, while the Allies are weak but have a strong capacity to produce.  We did not get very far with Third Reich.  However, we did build a much simpler computer game involving Navy units; it featured surface ships, submarines, depth charges, and torpedoes.

A third college project involving AI was in a Computer Vision course.  This project had one mission: given a digital camera oriented somehow and placed somewhere in a room with no windows and one door, find and identify the door.  This ended up being about 1,000 times more difficult than it sounds.  Some examples: if the camera was facing down but placed directly above the door, the door would appear as a line; if the camera was directly across from the door but oriented at an angle, the door would appear as a rhombus; and so on.  Ultimately, we employed two techniques: blob coloring and segmentation.

Blob coloring scans the pixels in a digital image; whenever a change in color or brightness is detected, the scan proceeds to walk in all 8 directions (up, down, left, right, upper left, upper right, lower left, lower right) from that pixel to determine whether there are any neighboring pixels with matching color/brightness.  The software would continue walking in each direction until non-matching pixels were encountered, thus determining the boundaries of the blob.

Segmentation seeks to determine how many objects (blobs) there are, and where each is.  Both of these techniques are still in use within the computer vision world.  However, object identification has also taken on a huge role; i.e. we are beyond mere blobs.
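
Below is a rough Python sketch of blob coloring as I've described it, written today purely for illustration – the original was not in Python, and the brightness threshold is an arbitrary choice.  Segmentation then amounts to reporting how many labels were assigned and where the pixels carrying each label sit.

    from collections import deque

    def blob_color(image, threshold=10):
        """Label every pixel with a blob number.  Neighboring pixels whose
        brightness differs by no more than `threshold` share a label.
        `image` is a 2-D list of brightness values."""
        rows, cols = len(image), len(image[0])
        labels = [[0] * cols for _ in range(rows)]
        # the 8 walking directions described above
        neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                     (0, 1), (1, -1), (1, 0), (1, 1)]
        blob_count = 0
        for r in range(rows):
            for c in range(cols):
                if labels[r][c]:
                    continue                      # already part of some blob
                blob_count += 1                   # start a new blob here
                labels[r][c] = blob_count
                queue = deque([(r, c)])
                while queue:                      # walk outward in all 8 directions
                    y, x = queue.popleft()
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not labels[ny][nx]
                                and abs(image[ny][nx] - image[y][x]) <= threshold):
                            labels[ny][nx] = blob_count
                            queue.append((ny, nx))
        return labels, blob_count                 # per-pixel labels, and how many blobs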

Incidentally, computer science was very new at the time, and in high demand.  It was rumored that our department head had been offered over a million dollars a year to join other institutions and create computer science programs there.  Similarly, the University of Utah was working on several medical projects involving robotics, including an artificial hand and an artificial heart.  Barney Clark was the first recipient of an artificial heart in the early 80s.  I was not part of any of that work, but was aware of it and rooting for their success.

Naval Ocean Systems Center

In 1985, I graduated with a second bachelor's degree and took a job with what was then called the Naval Ocean Systems Center, in San Diego, CA.  After a few months there, I joined the Artificial Intelligence group.  There were several things going on there, including some natural language processing (a precursor to Large Language Models), pattern recognition, and expert systems work.  I worked on two expert systems.  One was called Command Action Team.  It was designed to provide threat assessment assistance and recommend courses of action to mitigate the threats.  For instance, if a Soviet satellite was going to be passing over a carrier group, the expert system would issue an alert notifying the users (the Admiral's staff) that they needed to turn off certain RF emitters to avoid being detected.  Note: this was long before satellites had the capability to take images and process them.  Another project I worked on was something called Pilot's Associate, which was meant to identify and create real-time decision aids for use by a fighter pilot.  Much of this work has been incorporated into several modern aircraft types.  At the time, the pre-eminent fighter aircraft were the F-14 Tomcat and the F-16 Fighting Falcon.
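
Purely for illustration (the real system was classified and far more elaborate), an expert system rule of that era boiled down to a condition paired with a recommended action, something like the Python sketch below; the rule names, thresholds, and situation fields are invented.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        name: str
        condition: Callable[[dict], bool]   # test against the current tactical picture
        action: str                         # recommendation if the rule fires

    rules: List[Rule] = [
        Rule("satellite-overpass",
             lambda s: s.get("satellite_overpass_minutes", 999) < 30,
             "Secure designated RF emitters until the pass is complete."),
        Rule("unidentified-contact",
             lambda s: s.get("unidentified_contacts", 0) > 0,
             "Investigate unidentified contacts before they close the group."),
    ]

    def assess(situation: dict) -> List[str]:
        """Return the recommended action for every rule whose condition holds."""
        return [r.action for r in rules if r.condition(situation)]

    print(assess({"satellite_overpass_minutes": 12}))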

Martin Marietta Aero & Naval Systems

At this company, I worked on two projects and consulted on a third.  The two projects assisted with navigation and route planning for unmanned autonomous submarines.  One was an unclassified IR&D project for the Mobile Under Sea Test Bed (MUST).  MUST was literally an autonomous submarine that had been designed and built by Martin Marietta.  I'm not sure what happened with it – I never heard anything more after leaving Martin Marietta.  The other project was a classified (D)ARPA program that also involved automated navigation and route planning.  A paper about this navigation software was published at the International Joint Conference on Artificial Intelligence in 1988.  This was a really big deal.  You can still access the paper online.  Eric, mentioned earlier, was one of the co-authors.  Search for the following:

“A Maneuvering-Board Approach to Path Planning with Moving Obstacles”.

The third project, which I worked on briefly, was for unmanned aircraft – i.e., drones.  Eric later went to work for Boeing in Washington State to help design and build several drone prototypes.  In the 1990s, he was forward deployed to Albania to babysit some of his drones, which were being used to monitor the war in Kosovo.  These were predecessors of the Global Hawk and Reaper drones that have been in use for a couple of decades now.

Lastly, I wanted to mention a project Eric and I worked on while we were on overhead at Martin Marietta awaiting funding.  We were exercising our use of the programming language LISP, and a LISP-based language Eric had designed and created named FROBS.  FROBS was a merger of object-oriented programming (just starting to emerge in the mid-eighties, long before C++ was in wide use) and something called frame-based reasoning, which was a precursor to what later became database triggers (pre-read, pre-write, post-read, post-write).  Using FROBS and some elementary computer graphics, we designed and programmed a computerized version of the game RISK.  We weren't afraid of patent infringement since this system lived in a Top Secret environment four stories underground and never saw the light of day.  Our game allowed up to six players, any combination of which could be human or computer players.  We programmed the computer players with some simple rules: take and hold continents, break into other players' continents, build contiguous blocks of countries, and reinforce the boundaries of both blocks and, more importantly, continents.  We also relished the opportunity to place graduates of both the Naval War College and the Army War College in front of the computer to play against the computer player(s).  The computer player(s) won nearly every time!
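
The computer players' logic was nothing exotic.  In today's Python it would look something like the sketch below; the weights, the board queries, and the function names are invented for illustration – the original was written in FROBS/LISP.

    def score_attack(board, player, from_country, to_country):
        """Rank a candidate attack using the same simple heuristics our
        computer players followed.  The `board` queries and the weights
        are illustrative assumptions, not the original FROBS rules."""
        score = 0
        if board.completes_continent(player, to_country):
            score += 10      # take and hold continents came first
        if board.breaks_enemy_continent(to_country):
            score += 6       # break into other players' continents
        if board.extends_contiguous_block(player, to_country):
            score += 3       # keep our countries in one connected block
        # prefer attacking from strength into weakness
        score += board.armies(from_country) - board.armies(to_country)
        return score

    def choose_attack(board, player, candidates):
        """Pick the highest-scoring (from_country, to_country) pair, or None."""
        return max(candidates,
                   key=lambda c: score_attack(board, player, *c),
                   default=None)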

Rockwell International

During my 8 years at Rockwell, I worked on a few projects involving AI.  While working at Rockwell, I also earned an MS in Computer Science at SMU, with an emphasis on AI and algorithms.  Projects I worked on at SMU involved intelligent search, some neural networks, and a little bit of robotics.  While there I published several papers at technical conferences; topics included pixel-based reasoning, neural networks, and database fusion.  A proposal I'd submitted was to merge several emerging technologies to create a system that could be used to train and evaluate movement in three dimensions.  The technologies were robotic sensors, 3-D graphics, data fusion, pattern recognition, and neural networks.  One target consumer was people learning sign language.  The system would use robotic sensors to detect movements, render them on the screen, and use neural networks for pattern recognition to interpret and grade what was signed.  The other target consumer was surgeons, whether in medical school or in practice.  The system would pull in digital data from a real patient (e.g., CAT scans), then allow a surgeon to operate virtually against that data using robotic sensors.  The system would grade the surgeon's work.  The proposed system did not align with the business Rockwell was in and wasn't funded.  Similar systems have since been fielded, and telesurgery continues to catch on.  Also at Rockwell, I worked on several (D)ARPA programs, including two involving Geographic Information Systems, GPS, route planning, wireless networking, satellite communications, and distributed databases – all of which were in their infancy at the time.  The systems also pioneered integration with on-board sensors.

Raytheon TI Systems

The project I was hired to work on in 1997 was a (D)ARPA project.  (D)ARPA tends to fund projects that are hard, and in some cases impossible, with the goal being the advancement of technology.  This project had one simple goal: organize and orchestrate everything that needs to be done to move an infantry division from point A to point B.  Everything includes cutting all orders down and across the chain(s) of command, from general to private; generating all equipment packing, moving, and unpacking requisitions; putting together travel plans; and organizing the logistics of transportation, fuel, water, food, hygiene … everything to go from point A to point B with as few hiccups as possible.

The other project I worked on at Raytheon TI Systems was a vehicle monitoring system that integrated an 8-bit processor, EEPROM memory, a cellular chip, a GPS chip, battery-charging logic, and firmware to periodically log the location of the vehicle and, on another schedule, dial into the data center to report the logged locations.  Everything but the data center was hosted in one box about the size of a matchbox, and it was a precursor to the cellphone-based GPS navigation that began in the 2010 timeframe.
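
The firmware's job boiled down to two timers.  The sketch below is Python for readability; the real firmware ran on the 8-bit processor, and the intervals and device interfaces here are made up for illustration.

    import time

    LOG_INTERVAL_S = 60        # how often to record a position (illustrative)
    REPORT_INTERVAL_S = 3600   # how often to dial the data center (illustrative)

    def main_loop(gps, modem, log):
        """Log positions on one schedule and phone them home on another."""
        last_log = last_report = 0.0
        while True:
            now = time.monotonic()
            if now - last_log >= LOG_INTERVAL_S:
                log.append((now, gps.read_position()))    # (timestamp, lat/lon)
                last_log = now
            if now - last_report >= REPORT_INTERVAL_S and log:
                modem.dial_data_center()                  # hypothetical device interface
                modem.upload(log)                         # report the logged locations
                modem.hang_up()
                log.clear()
                last_report = now
            time.sleep(1)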

During the past decade, my exposure to AI has been limited.  I did work on some fairly complex pattern recognition problems that would have been considered part of AI in previous decades, but I don't believe they are now.  One of them used a large neural network with a couple of dozen hidden layers and thousands of nodes as part of the pattern recognition.  I also briefly worked on a project that integrated an array of extremely high-resolution digital cameras (black/white, color, and infrared) to monitor a field of view, then perform object detection and object tracking.  Knowledge gained on the door-identification project 30 years prior came in handy on this project.

In recent years, I've taken quite a bit of online coursework using the following applications: ChatGPT (2, 3, and 4o), Ollama, Otter.ai, Copilot, Claude, Copy.ai, DeepSeek, Grammarly, Midjourney, Grok, RunwayML, and a few other tools.  If I had to predict a winner, it would be Grok.  I have used them to help generate Word documents, PowerPoint presentations, Excel spreadsheets, and working software in C, C++, Java, and Python.  Overall, during the past two years, I have accumulated around 3 months of experience using these and other AI-based tools, writing long and detailed prompts to generate good starting points for hundreds of artifacts.

I am a big fan of using standalone LLMs.  Within the real estate banking domain, I've written (without AI) hundreds of documents and recorded dozens of videos.  I'm currently in the process of training a standalone LLM by importing all of this non-AI-generated material into it.  In short, I'm getting better every week at using AI tools to help create good starting points for highly useful material.

Two weeks ago, I spent around 8 hours writing a many-page prompt to generate half a dozen PowerPoint files (each for a different target consumer), supporting spreadsheets, and a Word document to serve as a guide to using all of the above.  The generation run took around 12 hours to complete; i.e., significant computing power and resources were needed to do the actual work on my behalf.  Once it is complete, it will take me around a week to clean everything up.  However, it would have taken me well over a month had I not spent the initial 8 hours creating the many-page prompt.

As stated earlier in this writeup, feedback is critical.  When constructing prompts for generative AI, it is very important to give as many details as possible.  Then, when interacting with the AI tool to refine the results, good, solid, detailed feedback is critical to successfully iterating toward something that approaches what you really need.  AI can't do it all.  It is merely a tool.  Sometimes you have to throw away what it gives you and start over.