newyear

Copyright © Miklos Szegedi, 2023.

My article last week showed how timely and important it is that average people understand the basics of interest rates. We have seen the bankruptcies and troubles of several financial institutions in the past few months, such as FTX, Silicon Valley Bank, and Credit Suisse. Everybody needs to know that debt is backed by a reasonably liquid asset, while equity is a share of ownership with a speculative future cash flow. The term crypto is vague; the terms equity and debt have strict definitions. Regular interest and dividend payments are crucial for assessing the current value of such securities.
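To make that last point concrete, here is a minimal discounted cash flow sketch of how regular payments determine a security's current value (the coupon, face value, rate, and maturity are made-up illustrative numbers, not a real instrument):

    # Hypothetical bond: $50 annual coupon, $1,000 face value, 5 years to maturity.
    coupon = 50.0
    face_value = 1000.0
    years = 5
    discount_rate = 0.07  # assumed prevailing market interest rate

    # Present value = discounted coupons plus the discounted face value.
    present_value = sum(coupon / (1 + discount_rate) ** t for t in range(1, years + 1))
    present_value += face_value / (1 + discount_rate) ** years
    print(f"current value of the bond: ${present_value:,.2f}")

When the market rate rises, the same future payments are worth less today, which is one reason rising rates strain the holders of such securities.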

Once I understood the differences between present and past, I addressed computers and security in a few articles.

Computers cannot be distinguished from artificial intelligence anymore.

Problem Intelligence can be defined in different ways.

Solution This is indeed true. The traditional IQ approach asks a set of questions and statistically discards the ones whose results do not align along a single linear scale. This is problematic, since it assumes that one person is either better or worse than another across all questions.
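As a rough illustration of that statistical filtering, here is a minimal sketch that keeps only the questions correlating with the overall score (the data is randomly generated and the 0.2 threshold is an arbitrary assumption; real psychometric item analysis is more involved):

    import numpy as np

    # responses[i, j] = 1 if person i answered question j correctly, else 0
    # (hypothetical data; a real test would use many more people and items)
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(200, 20))

    total_scores = responses.sum(axis=1)

    kept_questions = []
    for j in range(responses.shape[1]):
        # correlation between this question and the overall score
        r = np.corrcoef(responses[:, j], total_scores)[0, 1]
        if r >= 0.2:  # questions that do not "align" with the scale are discarded
            kept_questions.append(j)

    print("questions kept on the single linear scale:", kept_questions)

The discarded questions are exactly the ones that would break the single-dimension assumption the article criticizes.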

Problem How do we define intelligence in a smarter way?

Solution Compare the feedback that teaches them. Artificial and real persons learn from different dimensions of feedback.

  • We are taught the same motor skills in childhood in order to survive. We eat, we sleep, we work, we chat, we socialize, etc. This is our daily purpose. We buy and sell things, and we are quite good at it when we act selfishly - that is, the invisible hand.
  • Some of us see whole systems. Managers and politicians set a target date in the future to achieve a goal. That goal is good for all, or at least good for most.
  • Systemic thinking considers not just the immediate purpose of a decision but also the outliers, the impact on minorities and the disabled. Politicians and managers think ahead, plan a project, choose their people, and manage it to completion. They know that sometimes you do not push the limits, because it will circle back later. Most people save their surplus, managers invest it, and politicians even spend government debt knowing it pays off.
  • AI can be optimized for the last two. It cannot be optimized for the next one. AI does not get feedback the way we do. We grow, we learn, and we eventually die. The relationship with parents and teachers makes us better lifelong learners. Life is the process of learning. Our third big feedback, as a result, is survival. We think ahead for ourselves, and we identify dangerous scenarios. We look around before crossing the street, we avoid the edge of the cliff, we do not beat up the first pedestrian who approaches. Our parents and teachers gave us rules of behavior so that we all survive. Machines, on the other hand, never die; they are copied. They are more rigid as a result, but we can manufacture more of them. Humans and machines both make mistakes, but machines tend to make stupid ones at a larger scale.
  • The fourth goal is love and evolution. The first three goals are fairly compatible in the long run, but we weigh them differently in each situation. The governing feedback is different when choosing a place to eat, when giving a tip at that place, and when looking around after a Scotch. We avoid danger when it is needed, even when that conflicts with basic needs; this is what happens when you are hungry but still do not eat a poisonous, rotten fruit. We ask a price for our goods instead of giving them away, knowing that the work others do for the money doubles the effort. However, we also occasionally try to attract a partner to help evolution. Our biology is very good at that; machines will not replicate it for a long time, except perhaps in a simulated lab environment.

Problem How will we use artificial machines, then?

Solution

  • They are very good at copying our vision, hearing, and thinking. They can take over daily work that is unsuitable or boring for humans.
  • They can analyze vast amounts of data, and they can point out nuances that would be too expensive for us to learn ourselves. They learn these big datasets faster, and they will help with insights.
  • They will be good at repetitive robotics, although this is still expensive. Domestic and commercial robotics is an exponentially growing market.
  • Eventually, they will be better at some things. The best use of AI will be to train each other. They can be inexpensive tutors and teachers, and they can verify our knowledge by generating questions from plain text.

Problem They will never be reliable as a result.

Solution Indeed. They can teach us, and we can train them. We won't be able to affordably judge their decisions if those are made by unlabeled or very large networks. Such AI will have to be restricted: limited to certain locations and fenced in time.
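A location-and-time fence of that kind could be as simple as a policy check in front of the agent; the site name, hours, and weekday rule below are purely hypothetical:

    from datetime import datetime, timezone
    from typing import Optional

    # Hypothetical policy: the agent may only act from one site, on weekdays, 09:00-17:00 UTC.
    ALLOWED_SITE = "datacenter-eu-1"
    ALLOWED_HOURS = range(9, 17)

    def may_act(site: str, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return site == ALLOWED_SITE and now.weekday() < 5 and now.hour in ALLOWED_HOURS

    # Every decision of the unlabeled model is gated by this trivially auditable rule.
    print(may_act("datacenter-eu-1"))

The point is not the code itself but that the fence stays small enough to audit, unlike the model behind it.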

Problem Wait, is all AI a waste then?

Solution Not really. We will have learning systems that can behave and respond very precisely, like ChatGPT. They will be too smart; they are speculative. Basic corporate decisions and decision makers will be trained with these. The extracted logic network will be simple, explained, and labeled. It is streamlined. Decisions will be trusted and repeatable in such a case.

Problem So what is the danger of uncontrolled AI?

Solution Imagine a network that lets bankers into a system and that is as large as a terabyte. Somebody may add 500 nodes to let a robber into the system at midnight on Saturdays. Nobody has the bandwidth to notice, especially if the network is unlabeled. Speculative AI will be smart enough to make intuitive decisions. Streamlined AI will be trained by speculative AI. The streamlined version is just 150 nodes, but it is clean enough to make sure that the right decision is made at the gate. Retraining and streamlining will be similar to writing an executive summary.
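To make the streamlining step concrete, here is a minimal sketch of training a small, auditable model on the decisions of a large one, a simple form of knowledge distillation; the feature count, data, and the stand-in "speculative" model are all assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical stand-in for the large "speculative" model:
    # an opaque scoring function over 10 access-request features.
    big_weights = rng.normal(size=10)
    def speculative_model(x):
        return (x @ big_weights > 0).astype(float)  # 1 = open the gate

    # Collect the big model's decisions on example access requests.
    requests = rng.normal(size=(5000, 10))
    labels = speculative_model(requests)

    # "Streamlined" model: a single linear rule we can read and audit.
    w = np.zeros(10)
    lr = 0.1
    for _ in range(200):  # plain logistic regression by gradient descent
        p = 1.0 / (1.0 + np.exp(-(requests @ w)))
        w -= lr * requests.T @ (p - labels) / len(labels)

    print("auditable gate weights:", np.round(w, 2))

The distilled weights can be printed, reviewed, and version-controlled, which is what makes the gate decision repeatable.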

Problem What if we spend too much on machines?

Solution This has already happened, and it is called nature. We will have data centers that, like national parks, are big enough that they cannot be completely overseen and analyzed. We will need to set boundaries, especially around how these agents can act in the human world.

Problem Is this possible?

Solution It is possible, with a caveat. Such data centers and massive fleets of robots need energy. Part of the reason for the clean energy push is the same. Even if a data center runs on green energy, the solar or hydroelectric equipment behind it is wasted if the power it delivers is spent on useless equations instead of heating the homes of families.

This is called opportunity cost. A data center may run on green power, but it redirects resources and green capacity from other projects such as electric cars. Interestingly, automated, data-center-driven driving competes with electric car manufacturing for the same energy.
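A back-of-the-envelope sketch of that opportunity cost (every number below is an assumed round figure, not a measurement):

    # Assumed round numbers for illustration only.
    datacenter_power_mw = 100   # average draw of one hypothetical large data center
    home_heating_kw = 2.0       # assumed average electric heating load per home

    homes_not_heated = datacenter_power_mw * 1000 / home_heating_kw
    print(f"{homes_not_heated:,.0f} homes could be heated with the same green capacity")

Whether that trade is worthwhile depends entirely on what the data center computes.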
