The economics of buying and selling dinosaurs as art is a unique and fascinating topic that combines paleontology, art, and economics.

To begin with, it’s important to note that dinosaurs are not a naturally occurring resource like oil or minerals. They are extinct creatures that lived millions of years ago and can only be studied through their fossilized remains. As such, the supply of dinosaurs is extremely limited and cannot be increased.

This limited supply, coupled with the significant scientific and cultural interest in dinosaurs, has led to a high demand for fossils and other dinosaur-related items. This demand has, in turn, led to a thriving market for dinosaur fossils, particularly for well-preserved and rare specimens.

In terms of pricing, the value of a dinosaur fossil is determined by a variety of factors, including its rarity, scientific significance, and aesthetic appeal. The most valuable fossils are those that are well-preserved and complete, as they provide the most information about the species and its environment.

Another factor that can influence the price of a dinosaur fossil is its provenance, or the history of ownership. Fossils with a documented history of ownership and exhibition are often considered more valuable than those without a known provenance.

In terms of buying and selling dinosaurs as art, it’s important to note that many countries have strict laws and regulations governing the trade in fossils and other cultural artifacts. These laws are designed to protect the scientific and cultural value of such items and prevent their illegal trade. As a result, buyers and sellers of dinosaur fossils should be aware of these laws and ensure that their transactions are conducted legally.

Shhh – keep this a secret – but if by chance you are still reading this boring diatribe, I have something to tell you. What you’ve just read was not written by me but by an AI service that is supposedly going to steal my job.

In case you haven’t noticed it yet, this is not an essay on dinosaur evolution but on AI evolution. Buckle up!

*****

Alright, so there’s been a lot of noise about ChatGPT, a large-scale language model trained by OpenAI, an AI research laboratory that is going to (supposedly) benefit humanity as a whole. Because, of course.

If you’ve missed the noise about ChatGPT, it means you don’t have the internet, you don’t sh*tpost on Twitter, talk to other humans, or you live under a rock. Or quite possibly, you have a real job and do real things with your life and don’t subscribe to Internet Culture As A Way Of Life like I do (yes, sad).

The gist is simple: you ask ChatGPT a question, and it answers it for you. That question can be about anything, and it will even write you poetry!

Wowww. If you think that is cool, what’s possibly even more impressive is the response when you ask ChatGPT to write code for you.

For obvious reasons, when this was first released to the public a few weeks ago, the world freaked out:

The world is ending.

ChatGPT is smarter than humans.

ChatGPT will steal all of our jobs.

ChatGPT is the new Google.

ChatGPT will write all of the students’ essays.

ChatGPT will write my column for The Currency!

ChatGPT will do all of our engineering.

Aaaaaaah. Panic!

Ok, so there’s obviously a lot to unpack here. First and foremost, anybody who regularly reads my columns will know that I am a staunch opponent of the techno-utopian future that many AI engineers, VC investors, and billionaires are trying to impose on us. Half because I think that version of our world would be a terrible one. And half because I think it’s complete nonsense and never going to happen, so let’s stop stressing people out about it.

At least in its current form, I feel pretty confident that this algorithm is not going to steal my job. Maybe, just maybe, I’m getting too big for my boots, but you can compare the beginning of this essay to the last one I wrote on Dinosauronomics and see who wrote it better. (Hopefully, the answer is me!)

I figured the sensible way to talk about something so insane – ChatGPT – is to call out what it can’t actually do (phew!), before diving into what it might be useful for (wow!).

What can’t ChatGPT do? Most things

It is a language model, which means that it ingests billions and billions and billions of texts found on the internet to learn patterns between letters, then words, then paragraphs.
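If you want to see the crudest possible version of that idea, here is a toy sketch in Python. To be clear, this is my illustration, not how ChatGPT actually works internally (that involves a huge neural network), but it shows what “learning patterns from text” means at its simplest: count which word follows which, then parrot the likeliest continuation.

```python
# A toy "language model". This is an illustration, not how ChatGPT works
# internally, but it shows pattern matching at its crudest: count which
# word tends to follow which, then parrot the likeliest continuation.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words follow it in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def generate(following, start, length=10):
    """Emit the statistically likeliest continuation, word by word."""
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

corpus = "the fossil market is booming and the fossil trade is thriving"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the fossil market is booming and ..."
```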

Learning these patterns is actually an incredibly difficult thing to do. In my more youthful days, when I was researching AI algorithms, I worked for a famous professor who ran a quantitative hedge fund. I was part of a joint university team, spanning MIT and Stanford, that created language models to automate parsing financial reports, training algorithms to “predict” which stocks would go up and which would go down based on the sentiment detected in the reports (positive or negative). We could do this with an 87 per cent hit rate, which was pretty impressive back then. In the 15 years since, (1) these types of algorithms have become vastly more sophisticated, and (2) the amount of data available to train them on has exploded, leading to much more impressive results.

As fancy as they sound, though, these algorithms can be explained quite simply. Language models are all about pattern matching. “Does X look like Y?” If so, buy. If not, sell. What ChatGPT does is very similar, but with vastly more data, which allows for slightly more context. Not too much more context.
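Here is a deliberately crude sketch of that buy/sell idea. The word lists and the decision rule are invented purely for illustration; the real systems were trained statistical models, not hand-written lexicons.

```python
# A crude sketch of sentiment-driven buy/sell signals. The word lists and
# the threshold are invented for illustration; real systems use trained
# models, not a hand-written lexicon.
POSITIVE = {"growth", "profit", "record", "strong", "exceeded", "upgraded"}
NEGATIVE = {"loss", "decline", "impairment", "weak", "missed", "downgraded"}

def sentiment_score(report):
    """Positive-word count minus negative-word count."""
    words = report.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(report):
    """Does X look like Y? If so, buy. If not, sell."""
    return "BUY" if sentiment_score(report) > 0 else "SELL"

print(signal("Record profit and strong growth exceeded expectations"))  # BUY
print(signal("Impairment charges and weak demand led to a loss"))       # SELL
```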

But, at the end of the day, and despite its impressive sophistication, that is all it is. ChatGPT is pattern matching at worst, and original replication at best. It just happens to have trillions more data points going into its algorithm than other models. 

That’s not to say that ChatGPT isn’t extraordinary and brilliant, because it is. But it is extremely limited in what it can and can’t do. It can’t write original, compelling, and imaginative essays (cough cough like mine). It can’t create a unique style. It can’t write about or suggest responses to problems that are new or that it hasn’t come across before. It can’t replace human thought because it’s not sentient. It’s a copycat. At its core, once again, it’s a matching algorithm.

Matching algorithms don’t understand context. And herein lies the crux of why my job is probably safe for now. Most writing requires a nuanced understanding of the topic being discussed, relative to the relevant context.

But, and importantly, whose context? Usually, my writing focuses on my context, my version of events, my vision of the world, and my biases. To illustrate what I mean, here’s another ChatGPT poem:

I can imagine a considerable number of Currency readers would never have written this type of poem about Sinn Féin. Do you agree or disagree with the sentiment at hand? ChatGPT will happily offer up a similar poem either way. Curiously, look at what happens when I ask it to write one about Fine Gael.

  1. These poems are very similar in style, phrasing, and sentiment
  2. These poems are extremely boring
  3. These poems offer contradictory outcomes (both parties are seemingly right)
  4. Which of these poems is right or wrong?

Let’s dig into “which of them is right or wrong?” in more detail. Let’s say I used ChatGPT to write my essays, or as a search engine. Where is this information coming from? What are the biases of the millions of inputs to the algorithms that ultimately give me these answers? Which sources are verifiable, and which are internet garbage? How can I tell?

Ultimately, I have absolutely no idea what this algorithm is telling me, where it’s getting this information, and why. And that’s a huge problem. It means that I can’t use any of it, because I have no idea how to verify it. The data that OpenAI uses to train its algorithm, to teach it how to pattern match, could come from Reddit or Twitter. In which case, well, we’re scraping the bottom of the barrel of humanity’s collective knowledge.

While it’s really fun to use right now, it really doesn’t show the superior level of sophistication that would make anybody think we’re going to be firing our journalists anytime soon. Overall, I’m going to give ChatGPT a 1.5/10 for its ability to respond to open-ended tasks and for its general capabilities.

What can ChatGPT do? A small number of things, incredibly well

While I’m not convinced that ChatGPT is going to be writing The Currency opinion pieces anytime soon, I do think that ChatGPT has an extraordinary ability to put some people out of jobs pretty quickly: entry-level software engineers.

Unlike creative, open-ended jobs that require context and novelty, a lot of software (and other types of) engineering tasks are fantastic use cases for this type of technology, for the following reasons:

  1. They are bounded problems: e.g. “solve problem X” has a very right or very wrong solution;
  2. They are repetition-based: the same lines of code are written over and over and over again to achieve the same outcomes;
  3. There is a lot of reliable training data for these tasks;
  4. They don’t require context-specific guidance;
  5. They are formulaic in nature.

To understand just how good ChatGPT is at solving coding problems, I applied the algorithm to Project Euler, a super fun set of programming challenges that get progressively harder. I used to use it to learn some nifty techniques when I was younger. Its problems come in two types.

One set of problems is simply “write a piece of code that can do X, Y, or Z”. AI is really great at these, for the reasons already outlined. The second set, however, is where ChatGPT starts to get less effective: mathematical reasoning. The problem to be solved is not simply writing a script, but figuring out the mathematics for the script in the first place. ChatGPT is excellent at solving maths or programming problems when the needed maths is explicitly stated. It is much less capable of identifying the specific mathematical form that is needed, as that is a context-specific, very open-ended question.
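To make the distinction concrete, take Project Euler’s very first problem: find the sum of all the multiples of 3 or 5 below 1,000. Here is a quick sketch of both approaches (my own code, for illustration): the “write a script” version, which is trivial to pattern-match, and the “figure out the mathematics” version, which requires spotting the arithmetic-series trick.

```python
# Project Euler, Problem 1: sum of all multiples of 3 or 5 below 1000.

# The "write a script that does X" version -- exactly the kind of bounded,
# formulaic task ChatGPT pattern-matches well:
brute_force = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)

# The "figure out the mathematics first" version: the multiples of k below N
# form an arithmetic series, and inclusion-exclusion removes the double
# counting of multiples of 15. Spotting this is the open-ended part.
def sum_of_multiples(k, below):
    m = (below - 1) // k          # how many multiples of k are below N
    return k * m * (m + 1) // 2   # k * (1 + 2 + ... + m)

closed_form = (sum_of_multiples(3, 1000)
               + sum_of_multiples(5, 1000)
               - sum_of_multiples(15, 1000))

assert brute_force == closed_form == 233168
```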

So what types of jobs could OpenAI’s algorithm get rid of? Possibly the entry-level tasks that we always thought AI was going to get rid of anyway. And I’ve heard a lot of people say that they’re already connecting to the algorithm via APIs that let them outsource the most repetitive and boring parts of their jobs, leaving them free to spend either (1) less time working (this should be the goal!) or (2) more time working on harder problems (capitalism will force us to do this instead).
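For the curious, here is roughly what that kind of integration looks like, as a minimal sketch. It assumes the pre-1.0 openai Python client that was current when ChatGPT launched, and the model name, prompt, and parameters are all illustrative choices of mine, not a recommendation.

```python
# A minimal sketch of outsourcing a repetitive task to OpenAI over its API.
# Assumes the pre-1.0 `openai` Python client; the model name, prompt, and
# parameters are illustrative.
import openai

openai.api_key = "sk-..."  # your own API key goes here

def draft_boilerplate(task_description):
    """Ask the model to draft a repetitive piece of code."""
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt=f"Write Python code that does the following:\n{task_description}\n",
        max_tokens=256,
        temperature=0,  # keep output as deterministic as possible for rote tasks
    )
    return response["choices"][0]["text"].strip()

print(draft_boilerplate("read a CSV file and print the header row"))
```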

Is ChatGPT going to kill us?

The short answer is no. And so is the long answer. It’s also an inconveniently boring answer. People talk wildly about the capabilities of such technologies when they arrive because they either don’t understand how they work (pattern matching, in this case), or they are trying to sell headlines. In fact, the two are usually correlated.

Some of the reactions from the academic and engineering communities when this new algorithm was first released genuinely terrified me. However, after playing around with it for a while (and I highly suggest you do the same), it seems pretty obvious that this is the same narrative we’ve been hearing about autonomous vehicles, supersonic travel, and other technologies that always seem to be just on the horizon but never actually manage to materialize.

For now at least, most of our jobs are safe from ChatGPT, it’s not going to become the new Google, and it’s certainly not going to kill us. Not like the Triceratops for sale that I went to visit last month.