Modern Developments

Development in the space of AI is at an all-time high, proceeding at a staggering pace that even experts struggle to keep up with!

Obviously, the new kids on the block that everyone's talking about are generative models, and in particular, large language models.

You should take some time to acquaint yourself with the vocabulary and some toolsets here:

Google's LLM Primer


But generative models have gone beyond just text understanding and production...

Latest in Generative Video Production


So as we labor to keep up with this pace of advancement, let's take a step back and consider just what AI is capable of and how to weigh these advances against the limits of humans.



AI Philosophy

Philosophers have long debated the topics of cognition, free will, and intelligence, debates that, of course, long precede the development of automated systems.

That said, many of the same discussions have recently resurfaced in the modern context of artificial intelligence; we'll highlight several of the largest herein.


Assessing Progress: Imitation vs. Benchmarks


How do we assess the progress of AI systems? In their ability to imitate humans or their ability to excel in some specific task?

Early research focused on the former, whereas modern work focuses on the latter (just like your last assignments -- gotta get that accuracy high!)

Though the Turing Test was one of the earliest formalizations, it evolved into the modern-day Loebner Prize, a competition to pass increasingly difficult Turing Tests, starting with language processing (which no entrant ever definitively passed), with additional tiers planned for image recognition and production.

The Loebner Prize's Most Human Human


Note: this video was made in 2017, which isn't that long ago, but feels like a technological eternity!

In only the past couple of years, advancements in generative AI have put the Loebner Prize through its paces, and are largely why it is no longer held!

Still, we can reflect on the ethos behind it...

Discussion Question: What would it take for a machine intelligence to convince you that it had reached human-levels of general intelligence?


Machine Consciousness: Strong vs. Weak AI


This question broaches whether or not machines could ever be *conscious* in the same way that we as humans experience consciousness.

This debate is sometimes found under the distinction between:

  • Strong AI: with the right programming, a machine could have a mind equivalent to that of a human, experiencing consciousness, free will, and volition as we do.

  • Weak AI: a machine system could *act* exactly as humans do, but without the experience of consciousness.

One of the most famous analogies on this point is the Chinese Room Argument, that goes something like:

Searle's Chinese Room Argument


Discussion Poll: Do you think that *it is possible* to design a strong artificial general intelligence?


AI Development Ethics


Most of the technological laity don't particularly care about the above; rather, practical consideration of the risks that either strong or weak AI poses may translate into necessary policy.

AI ethical discussions thus ask...

Just Because We Can, Should We?

Let's start with the basics: avoiding the media portrayals in The Matrix, The Terminator, etc., wherein the whole AGI thing never seems to work out well for us.

If we'd only listened to Asimov, we might not get into such problems...

Asimov's Three Laws of Robotics


More subtly, we have a host of issues that affect us as a society, which our textbook posed a decade ago and to which we still lack satisfying answers.

Discussion Poll: vote on whether or not you agree to each of the following:

  1. Developing AGI will have catastrophic consequences on the economy as people lose their jobs.

  2. Freed from their jobs, people will have too much leisure time.

  3. There would be an existential dread that sweeps the world as we lose our human-uniqueness of consciousness.

  4. AI systems will be used by the powerful toward undesirable ends.

  5. AGI may signal the end of the human race as we are replaced by our robotic overlords.


On the Future of AI


Of course, not all is as bleak as these considerations might intimate.

There is boundless potential associated with these more advanced sides of automation.

If we are able to create strong AI systems capable of human-level AGI that can imitate a particular human, then we may be able to "upload our consciousness" into a digital eternity.

Dennett's "Where am I?"


Of course, if we do upload our consciousness, we'll need ever more compute to support it, which will in turn accelerate the speed of discovery, and so on. There will thus come a point at which machine intelligence exceeds that of all humans, known as the Singularity.

Kurzweil's Technological Singularity

(aside: Kurzweil's kinda kooky and is taking like 4000mg of vitamin C so that he lives long enough to see his singularity, but hey, I guess you've gotta have hobbies).


And that, my friends, is a wrap on this semester. We'll have a little more time to discuss some of the above if you're joining for cognitive systems!

Otherwise, I hope you've enjoyed the ride, and I'll look for you all after the singularity!


