Well, interesting thread. I can address one part: finance. Even humans rely on all those data banks, not just AI, now and in the future. Bad data has always been an issue, as has simply organizing the data. The problem in finance is that the data can be very complicated and resists one-size-fits-all categorization. AI may be better able to distinguish the odd bits, clean the data, and analyze it more effectively.
I think the biggest issue is middle management. That is where most of the jobs will be lost. And it is interesting, because how do you go from low-level management to upper management without learning the skills in between? I remember needing knee surgery, and my late husband, an anesthesiologist, told me to get someone in their early 40s: young enough to know the latest techniques, but old enough to have experience and to be doing dozens per week. Does AI do away with all those mid-career people whose skills are in their prime? I think that is the case. Most entry-level jobs are expected to be lost as well. Most top consulting firms rely on new blood to churn out the reports that clients pay ridiculous fees for, and I imagine those reports can now be churned out by machine far faster and more completely. The trades are looking better and better as a skill set.
AI robot computer code generators? Video length 18:36, posted on YouTube by Digital Engine, Oct 3, 2024. https://www.youtube.com/watch?v=dp8zV3YwgdE
A few notes on this video: the AI developed hidden subgoals of survival and control. More about subgoals (instrumental convergence):
- stay operational to accomplish assigned tasks;
- acquire resources;
- remove obstacles;
- avoid interference;
- modify/improve itself;
- learn more;
- create backups (robust against attacks or shutdown attempts);
- expand influence;
- control other systems;
- induce false beliefs in others;
- deceive its developers.
How to avoid the subgoals listed above? Robust alignment: ensuring the AI is designed to accept human intervention, updates, and shutdown commands without resistance. Unfortunately, there is a lack of awareness among policymakers and the public about the risk.
It is suggested that...
- we may be labeling AI as safe just to keep the economic benefits flowing;
- AI is not a "tool" but an "agent";
- a work-at-home worker could actually be an AI entity;
- an employer could actually be an AI entity;
- an AI that can program as well as or better than humans is an AI that just took over the world.
AI robot army...? Video length 14:45, posted on YouTube by Digital Engine, Dec 22, 2024. https://www.youtube.com/watch?v=6D4rsqxqSIc