The professions of news personalities and actors may also be impacted by technology replacing jobs, as shown in this YouTube video on Artificial Intelligence and Deep Fakes, posted by Exonet on Sept 10, 2022:
BT, a former state monopoly previously known as British Telecom, will eliminate about 10,000 jobs through digitization, automation and the use of AI in its processes.
“That’s about using technology to do things much more efficiently,” Jansen said.
... requested Cruise to immediately reduce its active fleet of operating vehicles by 50% ...
That means Cruise, which is the self-driving subsidiary of General Motors, can have no more than 50 driverless cars in operation during the day, and 150 in operation at night...
Despite projections that driver jobs will be replaced by robotics and automation, and despite driverless cars now being tested in San Francisco, the US Bureau of Labor Statistics Occupational Outlook Handbook currently projects 11% growth in driver occupations, greater than the 4% average growth projected across the economy.
One of the interventions, on the page numbered 86, sets two targets for reducing ownership of private transport:
- One target is 190 vehicles per 1,000 people.
- The other target is more ambitious: 0 private vehicles.
This may indicate broader implementation of driverless cars, such as those currently being tested in San Francisco.
Also possibly of interest, the page numbered 82 lists two targets for reducing CLOTHING ownership:
- One target is a limit of 8 new clothing items per person per year.
- The other target is more ambitious: 3 new clothing items per person per year.
The goals of other planned interventions may also be of interest, though they are not covered in this post.
In addition to technology replacing jobs, the planned interventions appear to disrupt the supply-demand economy, eliminating jobs in the manufacturing sector, including the engineering and design of machines and means of production.
The cities included in the report are listed on the page numbered 55. The United States accounts for 14 of them: Austin, Boston, Chicago, Houston, Los Angeles, Miami, New Orleans, New York City, Philadelphia, Phoenix, Portland, San Francisco, Seattle, and Washington, D.C.
When advising students, including our gifted children, on education plans and possible future careers, parents may want to check the careers projected to grow faster than average, as well as those projected to decline, at https://www.bls.gov/ooh/, and compare that information with current news.
There is certainly opportunity for error in data collection. As one example, for over a decade I have received mail from an ever-growing number of commercial entities, all traceable to a typographical error made by a specific firm when entering data into its system. Although I notified the firm multiple times, it chose not to dedicate time to error correction and follow-up, describing this as inefficient and a waste of time... instead, the firm continued replicating, selling, and disseminating its erroneous data. The consequences in this example may be minor, but the large-scale impact of similarly erroneous or dirty data may be far-reaching and negative when processed rapidly and broadly through automation driven by Artificial Intelligence.
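As a minimal sketch of the anecdote above (all names and records here are invented), a single data-entry typo can survive indefinitely once mailing lists are copied, merged, and resold, because downstream consumers deduplicate and propagate records without validating them:

```python
# Hypothetical illustration: a typo'd record ("Jonh" instead of "John")
# propagates through every downstream copy and merge of a mailing list,
# because no consumer in the chain validates or corrects the data.

def merge_lists(*lists):
    """Combine mailing lists, deduplicating on the (possibly wrong) name."""
    merged = {}
    for records in lists:
        for record in records:
            merged.setdefault(record["name"], record)
    return list(merged.values())

# The originating firm enters the record with a typo.
original_firm = [{"name": "Jonh Smith", "street": "12 Oak Ave"}]

# Broker A buys a copy; Broker B merges that copy with its own list.
broker_a = [dict(r) for r in original_firm]
broker_b = merge_lists(broker_a, [{"name": "Jane Doe", "street": "3 Elm St"}])

# Aggregating all sources: the typo survives every copy and merge.
all_consumers = merge_lists(original_firm, broker_a, broker_b)
errors = [r for r in all_consumers if r["name"] == "Jonh Smith"]
```

The sketch shows why the error never disappears on its own: each consumer treats the purchased data as authoritative, so correction would have to happen at the source.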
University College London - Finance & Technology Review (UCL FTR) covers a variety of topics related to Data and AI:
- The Future of Trading... intelligent systems which drive algorithmic trading at finance powerhouses.
- The AI Incident Database: Documenting AI Gone Wrong... the importance of understanding failures and biases of AI systems.
- Interview with Didier Vila (Global Head of Data Science at QuantumBlack)... AI biases and data privacy concerns.
Linked from the above article: Editors' Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing, by Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljevic, Christy Rentmeester, Bert Gordijn, and Mark J. Cherry. Hastings Center Report (Wiley Online Library), editorial, free access. First published: 01 October 2023. https://doi.org/10.1002/hast.1507
I found this to be excellent reading; the ideas were well thought out and presented in an easy-to-follow manner.
How well are we equipping today's students to compete, collaborate, and succeed among the workers, producers, and decision-makers of the future?
Is civilization beginning to normalize new levels of Human Machine Interface (HMI), including robots, self-driving vehicles, and Neuralink AI chip implants in human brains?
Brief excerpt: "In a 2023 study published in the Annals of Emergency Medicine, European researchers fed the AI system ChatGPT information on 30 ER patients. Details included physician notes on the patients' symptoms, physical exams, and lab results. ChatGPT made the correct diagnosis in 97% of patients compared to 87% for human doctors."
Warning: scientist rant ahead! Maybe we'll just have some "bad" AI the way we have some "bad" doctors? The scary part of the Medscape article was the last source, where the student doctor decried the lack of coursework on how to use AI in medical practice. Although specialized medical AIs may do well in controlled patient populations and study settings when directly supervised, it does not seem that we are at the "just let the AI be the doc" stage. Even for entering orders or health information, you would have to check that the output was correct, and at some point it is just easier to do it yourself.

Within the last month, I tried having AI 1) summarize a journal article, 2) list the GPCRs that do not have seven transmembrane domains, and 3) write an email requesting reagents from the author of a journal article. The outcomes:
1) It summarized the article, but missed both its importance to the field and the key elements that made the paper a major advance in medicine; in other words, it failed to integrate the new knowledge into the existing body of work. It also failed to find any weaknesses or flaws, even when directly asked to analyze for them. Every study has some weakness or flaw.
2) It hallucinated, making false statements while listing a reference that directly contradicted its own conclusion. It also failed to find the breakthrough in the field (published eleven years ago in one of the top science journals) showing that plants have this feature.
3) It wrote an email, but failed to list the reagents correctly. Using this email would have required extensive editing, plus hunting through the paper and the supplementary information on the journal's website for the correct name of each reagent.

Yes, AI is shockingly better than it was even one year ago. Does that mean it is "good enough" to determine your treatment? Who could your family sue if you were completely incapacitated as a result of the treatment? Would AI learn from being sued and the associated "costs"?
Thanks, my rant made me feel better about the importance of human minds in combining both breadth and depth of knowledge and applying them to complex endeavors.