Artificial Intelligence (AI) is a rapidly growing field of technology…
… one that is changing how we interact with the world.
It has the ability to revolutionize business as we know it…
… and exponentially accelerate our personal potential.
Many believe that AI will “take our jobs,” and while that may be true in certain scenarios, its outcomes are largely determined by the inputs of the user…
… making the quality of the output dependent upon human psychology.
AI is designed to learn from data and improve performance over time.
It’s assumed humans operate in much the same way… but that’s not always true.
“Every life form seems to strive to its maximum except human beings.”
- Jim Rohn
AI uses algorithms, statistical models & pattern recognition to make decisions.
A quantifiable and predictable outcome given the inputs you provide.
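To make that concrete, here is a minimal sketch (with made-up toy data) of pattern recognition at its simplest: a 1-nearest-neighbor classifier. Given the same inputs, it always produces the same, predictable output.

```python
# Minimal sketch: a 1-nearest-neighbor classifier.
# Same inputs -> same output, every time.

def predict(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

# Toy "pattern": points near (0, 0) are "A", points near (5, 5) are "B".
train = [((0, 0), "A"), ((1, 0), "A"), ((5, 5), "B"), ((5, 4), "B")]
print(predict(train, (0.5, 0.5)))  # -> A
print(predict(train, (4.5, 5.0)))  # -> B
```

Run it a thousand times with the same query and you get the same answer a thousand times. That is what "quantifiable and predictable" means here.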
Humans, in comparison, make decisions based on a multitude of logical, emotional, conditioned & historical reasons… many of which are rooted in false presuppositions.
Herein lies the problem…
AI does not learn from a neutral state.
It reflects the biases and unconscious prejudices of those who create it.
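A trivial sketch of that idea, using hypothetical labels: the simplest possible "model" just learns the distribution of its training data, so any skew its creators bake in comes straight back out.

```python
# Minimal sketch: a model trained on skewed data reproduces the skew.
# The labels are hypothetical; the point is that the model's "knowledge"
# is nothing more than the distribution of its training set.
from collections import Counter

def train_majority(labels):
    """Learn the most frequent label -- the simplest possible 'model'."""
    return Counter(labels).most_common(1)[0][0]

biased_data = ["approve"] * 9 + ["deny"] * 1  # a 90/10 skew from its creators
model = train_majority(biased_data)
print(model)  # -> approve: the model simply mirrors its inputs
```

Real systems are vastly more complex, but the mechanism is the same: nothing in the algorithm corrects for what the data never questioned.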
Our own psychology plays a significant role in the development and deployment of AI.
AI systems are created by people.
People bring biases, assumptions, and prejudices to the table, often unconsciously.
All of which have an identifiable impact on the design, implementation & output of AI.
Not only does this affect the output/outcome…
… it fails to take into account emotional reasoning, morality and ethicality.
One could argue that emotionality should not play a role in decision making…
… but that would presuppose a definitive outcome in one’s life - which would negate my philosophy on multipotentiality.
We think in questions… and our answers produce outcomes.
Because of this, AI will only ever be as good as our ability to ask intelligent questions.
Therefore, the bottleneck is not how quickly it can learn…
… but how we can build a neutral foundation.
One in which it can reason logically, and with absence of presuppositions.
There are a plethora of use cases for AI, and more are being discovered daily.
Yet the systems are currently limited by hardware and software capabilities.
Algorithms are designed to optimize specific objectives (speed, accuracy, or efficiency)...
… however, meeting these objectives often comes at the expense of fairness and transparency.
An example would be a system designed for operational efficiency…
… one that doesn’t take into account the human implications of its decisions.
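A minimal sketch, with made-up numbers, of how that plays out: a model that optimizes overall accuracy alone can look excellent in aggregate while serving a minority group not at all.

```python
# Minimal sketch: optimizing a single objective (overall accuracy)
# can hide very different outcomes for different groups.

# (group, true_outcome) pairs -- group "b" is a small minority
data = [("a", 1)] * 90 + [("b", 0)] * 10

def efficient_model(group):
    return 1  # always predict the majority outcome

overall = sum(efficient_model(g) == y for g, y in data) / len(data)
group_b = sum(efficient_model(g) == y for g, y in data if g == "b") / 10
print(overall)  # -> 0.9  (the stated objective is met)
print(group_b)  # -> 0.0  (group "b" is wrong 100% of the time)
```

The objective function never asked about group "b", so the optimizer never cared. Fairness has to be an explicit input, not an assumed byproduct.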
To summarize, these are incredibly powerful tools.
I recommend you learn how to use them sooner rather than later…
… as they will become widely adopted, and the sooner you climb the learning curve, the better.
However, their outputs are still determined by human psychology, making us irreplaceable.
These systems are not magic…
… and if they learn based on human inputs, they will forever be flawed.
Use them as a tool to make YOU more efficient and effective…
… but keep your focus driving inward.
That’s where all exponential change happens.
P.S. If you're struggling to find use cases for AI, think like a 6th grader.
Anything you're trying to learn, plug into ChatGPT with the prompt:
"Explain to me how to (x, y, z), and make it clear enough a 6th grader could understand it."
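If you use that prompt often, it's easy to wrap as a tiny helper. This is just a sketch that fills in the template above (the function name and example topic are my own); paste its output into ChatGPT.

```python
# Minimal sketch: fill in the "6th grader" prompt template from above.
# `topic` is whatever you're trying to learn, phrased to follow "how to".

def sixth_grader_prompt(topic):
    return (f"Explain to me how to {topic}, and make it clear enough "
            "a 6th grader could understand it.")

print(sixth_grader_prompt("calculate compound interest"))
```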