
How Is My Role Going To Change?


Summary:
AI is changing how we live, work, and find entertainment, and L&D is no exception.

How To Evolve By Embracing AI In L&D

In my previous article, we started exploring lessons learned from a conference on how learning professionals can prepare for the changes that Artificial Intelligence (AI) and automation are bringing in the near future. This article continues with the next five calls to action for embracing AI in L&D, and also attempts to answer a common question about Large Language Models (LLMs): how smart are they at reasoning?

Key Takeaways For Embracing AI In L&D

Here are some key takeaways from conversations with industry leaders at the conference:

1. Develop A Strong Understanding Of Behavioral Science

2. Build A Network

3. Focus On Building "Learning" Ecosystems, Not Just Programs

4. Strengthen Change Management Skills

5. Understand Data Security, Data Privacy, And Ethics

How Smart Are LLMs, After All?

Finally, one of the most interesting questions I got from a conference attendee was how smart current LLMs are. Are they good at reasoning or at the illusion of reasoning? How much can we rely on them for reasoning, especially if we build solutions directly connecting AI (LLMs) with the audience?

LLMs are trained on huge data sets to learn patterns, which they then use to predict what comes next. With some oversimplification: you take all the data you have collected and split it into a training set and a test set. You train your AI model on the training set. Once you think it is doing well at pattern recognition, you evaluate it on the test data it has never seen. The reality is far more complicated than that, but the point is that "smartness" and reasoning can be mistaken for pattern recognition.
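To make the train/test split idea concrete, here is a minimal sketch in Python using scikit-learn. The tiny data set, labels, and 25% split below are made up purely for illustration; real LLM training happens at a vastly different scale and with different tooling.

```python
# Minimal sketch of a train/test split (illustrative data only).
from sklearn.model_selection import train_test_split

# Pretend "examples" are text samples and "labels" are what we want to predict.
examples = ["sample text 1", "sample text 2", "sample text 3", "sample text 4"]
labels = [0, 1, 0, 1]

# Hold back 25% of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    examples, labels, test_size=0.25, random_state=42
)

# Train the model on (X_train, y_train) ...
# ... then measure how well it generalizes on (X_test, y_test).
```

The test set exists precisely because pattern recognition on familiar data is easy; the interesting question is how the model performs on data it has not memorized.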

What's an example? Let's say you trained your model to solve mathematical problems. When the model recognizes a problem, it follows the learned pattern for solving it. It does not have an opinion, a belief, or any sort of fundamental stand on the matter. That is why, when you simply tell the model it is wrong, it apologizes and reconsiders the answer. Mathematical reasoning (as of today) is not their strong suit.

A study that evaluated a wide range of models on the GSM-Symbolic benchmark found that generating versions of the same mathematical problem by replacing certain elements (such as names, roles, or numbers) can lead to inconsistent model performance, indicating that problem solving is happening through pattern recognition rather than reasoning [1].

Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark.
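As a rough illustration of the GSM-Symbolic idea (my own sketch, not the paper's actual code), the Python snippet below keeps a word-problem template fixed and swaps out the name and the numbers to generate variants. The template and value ranges are invented for illustration. A model that truly reasons should solve every variant; one that pattern-matches on familiar wording may become inconsistent.

```python
# Generate perturbed variants of one word problem by swapping surface details.
import random

TEMPLATE = (
    "{name} buys {n_packs} packs of pencils. Each pack contains {per_pack} "
    "pencils. How many pencils does {name} have in total?"
)

NAMES = ["Maria", "Ahmed", "Li", "Sofia"]

def make_variant(seed: int):
    """Return a perturbed problem and its correct answer."""
    rng = random.Random(seed)
    name = rng.choice(NAMES)
    n_packs = rng.randint(2, 9)
    per_pack = rng.randint(3, 12)
    question = TEMPLATE.format(name=name, n_packs=n_packs, per_pack=per_pack)
    return question, n_packs * per_pack

# The underlying reasoning is identical in every variant;
# only the names and numbers change.
for seed in range(3):
    question, answer = make_variant(seed)
    print(question, "->", answer)
```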

When you add seemingly relevant information to the problem that is actually irrelevant, humans, through reasoning, simply ignore it. LLMs, however, seem to try to integrate the new information even when it is not needed, as the study found:

Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer.
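Here is a toy example (my own wording, not taken from the study) of what such an irrelevant clause looks like. The extra sentence changes nothing about the arithmetic, yet a model that pattern-matches may try to fold it into the calculation.

```python
# A distractor clause that adds no information needed for the answer.
BASE_FACTS = "A farmer collects 8 eggs every day for 5 days."
DISTRACTOR = "Three of the eggs are slightly smaller than the others."
QUESTION = "How many eggs does the farmer collect in total?"

problem_with_noise = " ".join([BASE_FACTS, DISTRACTOR, QUESTION])

# The correct answer is still 8 * 5 = 40; the size of the eggs is irrelevant.
print(problem_with_noise)
print("Expected answer:", 8 * 5)
```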

In short, current LLMs are amazing at pattern recognition, which they can do at a speed and scale no human can match. They're great at pretending to be someone for soft skill practice! But they do have their limitations (as of today) in mathematical reasoning, especially in explaining why the answer is the answer. However, newer models, such as OpenAI's "Strawberry," are attempting to change this [2].

References:

[1] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

[2] Something New: On OpenAI's "Strawberry" and Reasoning

Originally published at eLearningIndustry.com
