Jacob's A.I. Journal: Vol. II

Jacob's A.I. Journal is a collection of articles on A.I. topics from Principal ML/AI Architect Jacob Haning


A.I. in Business

Andrew Ng predicts that the next 10 years of AI will focus on "Data-Centric AI"

The focus of Ng's (pronounced *ing*) message is the shift in machine learning from focusing on the model to focusing on the data. Big tech may have millions or billions of rows of data to work with, but most industry use cases don't have that kind of history available. I usually agree with Ng, and this is no exception. He is one of the people to know in the field of A.I.: he has founded, led, or contributed to many of the companies and projects that expanded the field over the last decade. His most recent company, Landing AI, is focused on manufacturing, he says, to expand A.I. beyond big tech. (more on data-centric AI) (Andrew Ng interview)
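The model-versus-data trade-off above can be sketched in a toy example. Everything here is synthetic and the "model" is just a one-feature threshold rule, but it illustrates the point: when the dataset is small, auditing and fixing bad labels can beat any amount of model tuning.

```python
# A toy illustration of the data-centric idea: with a small dataset,
# fixing label errors often helps more than tweaking the model.
# All data is synthetic; the "model" is a fixed threshold rule.

def accuracy(data, threshold):
    """Score a one-feature threshold classifier against labels."""
    correct = sum((x >= threshold) == label for x, label in data)
    return correct / len(data)

# Ground truth: readings >= 5 are positive. Two labels were entered wrong.
noisy = [(1, False), (2, False), (3, True),   # <- label error
         (6, True), (7, False),               # <- label error
         (8, True)]

# Model-centric: search thresholds against the noisy labels.
best_noisy = max(range(0, 10), key=lambda t: accuracy(noisy, t))

# Data-centric: audit and correct the bad labels, keep the model simple.
clean = [(x, x >= 5) for x, _ in noisy]

print(accuracy(noisy, best_noisy))  # tuning can't overcome bad labels
print(accuracy(clean, 5))           # same simple model, clean data: 1.0
```

No amount of threshold search recovers full accuracy on the noisy labels, while the untuned model scores perfectly once the two labels are corrected.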

A.I. in Art and Science

Artists' response to A.I. Art. "It is pareidolia, an illusion of art"

If you've been following this conversation or reading my content, you know that OpenAI recently released DALL-E 2, an impressive new system that can generate art in any style from a text prompt. The U.S. Copyright Office recently rejected the idea that an A.I. could copyright its art, which will give working artists some relief, but this author contends that if A.I. art is elevated to the level of artwork created by people, something fundamentally beautiful will be lost. For more on this topic, check out my previous articles.

A.I. in Healthcare

DataRobot predicts malnutrition in children at Phoenix Children's

The model, created within a few hours on 10 years of clinical data, is able to predict which children may be at risk of malnutrition. With a follow-up exam from a staff nutritionist, those children are more likely to receive proper care. David Higginson, Executive Vice President and Chief Innovation Officer at Phoenix Children's, says they are finding four to five kids per week who would not have been diagnosed otherwise. This is a great example of why clinicians should embrace this technology, and of how A.I. in healthcare can positively affect long-term patient outcomes.
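To make the workflow concrete, here is a minimal sketch of how a screening model could route children to a follow-up exam. The feature names and thresholds are entirely hypothetical, not Phoenix Children's actual DataRobot model; the point is the pattern of scoring risk and flagging for a human nutritionist.

```python
# Hypothetical screening sketch -- NOT the actual Phoenix Children's
# model. Field names and thresholds are invented for illustration.

def malnutrition_risk(patient):
    """Return a 0-1 risk score from a few hypothetical clinical fields."""
    score = 0.0
    if patient["weight_percentile"] < 5:
        score += 0.5
    if patient["recent_weight_loss_pct"] > 5:
        score += 0.3
    if patient["albumin_g_dl"] < 3.5:
        score += 0.2
    return score

def flag_for_nutritionist(patients, threshold=0.5):
    """Route high-risk children to a follow-up exam, as in the article."""
    return [p["id"] for p in patients if malnutrition_risk(p) >= threshold]

patients = [
    {"id": "A", "weight_percentile": 3, "recent_weight_loss_pct": 6, "albumin_g_dl": 3.2},
    {"id": "B", "weight_percentile": 40, "recent_weight_loss_pct": 0, "albumin_g_dl": 4.1},
]
print(flag_for_nutritionist(patients))  # ['A']
```

The key design point, echoed in the article, is that the model doesn't diagnose; it surfaces candidates for a clinician to examine.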

Elon Musk claims Neuralink will be able to cure disease within 5 years

The Neuralink device is about the size of a coin, implanted flush with the skull, with threads one quarter the width of a human hair connected to specific neurons in the brain. Firing rates of these neurons are streamed to a computer and recorded for training machine learning models, which are then used to predict intended actions from the neuron firing patterns, ultimately allowing the user to act with their thoughts. The company, founded in 2016, released a proof-of-concept video last year of their test subject, a macaque named Pager, playing Pong with his mind. The article above, however, raises some good points about how long FDA approval may take and what kind of cost barrier the device might have. All in all, an incredible achievement.
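The decoding idea described above can be sketched roughly: record firing-rate vectors while the intent is known, then classify a new pattern. Neuralink's actual models aren't public, so a nearest-centroid rule stands in here, and all the numbers are made up.

```python
# A rough sketch of intent decoding from neuron firing rates.
# A nearest-centroid rule stands in for Neuralink's real (non-public)
# models; the firing-rate numbers are invented.

def centroid(samples):
    """Average a list of firing-rate vectors element-wise."""
    return [sum(col) / len(samples) for col in zip(*samples)]

def train(labeled_samples):
    """Build one centroid of firing rates per intended action."""
    by_action = {}
    for rates, action in labeled_samples:
        by_action.setdefault(action, []).append(rates)
    return {action: centroid(s) for action, s in by_action.items()}

def predict(model, rates):
    """Pick the action whose centroid is closest to the new pattern."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda action: dist(model[action], rates))

# Three recorded neurons; spikes/sec observed during two cursor intents.
training = [([20, 5, 2], "left"), ([22, 6, 1], "left"),
            ([3, 18, 25], "right"), ([4, 20, 23], "right")]
model = train(training)
print(predict(model, [21, 4, 3]))   # left
```

A new firing pattern resembling the "left" recordings decodes to a leftward intent, which is the essence of the Pong demo.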


Ubiquitous cameras aboard drones outfitted with remotely deployed tasers

While this story is riddled with political overtones and personal biases, it's really a tale of the difficulties faced by A.I. ethics boards in the current landscape. The need for real oversight of A.I. implementations is urgent, but attempts to put checks in place have fallen short in nearly every case. I shared some stories recently about Google's now-infamous attempt and the newly formed National Advisory Committee, which we hope will succeed. My contention is that these boards are created without any real authority to modify existing processes. They can recommend, discuss, and educate, but they can't effect real change.

A visual guide to 50 modern cognitive biases

My 2 favorites:

  • Fundamental Attribution Error - we judge ourselves differently than we judge others

    Sally was late to class; she's lazy. You were late to class; it was a bad morning.

  • Backfire Effect - disproving evidence sometimes strengthens our beliefs

    The evidence that disproves your conspiracy theory was probably faked by the government.

Any interaction with the world of predictive modeling inevitably includes a line or two about bias. This wasn't always the case, but as we see more and more intended and unintended consequences of using historical data to predict future events, it's critical to understand that systems are built by people, and people are influenced by cognitive biases. Ethics in machine learning has become synonymous with removing sensitive data fields from training sets. While this is an important step, we must also consider how predictions will be used to influence decision making and business processes. Understanding our own biases is important not just for how we implement technology but for how we interact with the world.
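The "remove sensitive fields" step mentioned above is mechanically simple, which is part of why it's insufficient on its own. A minimal sketch, with illustrative field names:

```python
# Dropping sensitive columns from training rows. Field names are
# illustrative. Note this alone does not remove bias: other fields
# (e.g. zip code as a proxy for race) can still encode it.

SENSITIVE = {"race", "gender", "zip_code"}

def strip_sensitive(rows, sensitive=SENSITIVE):
    """Return training rows with sensitive fields removed."""
    return [{k: v for k, v in row.items() if k not in sensitive}
            for row in rows]

rows = [{"age": 34, "income": 52000, "gender": "F", "zip_code": "85004"}]
print(strip_sensitive(rows))  # [{'age': 34, 'income': 52000}]
```

As the paragraph argues, this is a necessary step but not a sufficient one; how the predictions are used downstream matters just as much.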


The AI model trying to re-create Ruth Bader Ginsburg's mind

The most important A.I. law you've never heard of

[Reminder] How to stop AI from recognizing your face

[Try it Out] Improve your doodles with AI powered Auto Draw
