Institutional Investor (August 29)
“Ever since ChatGPT burst onto the scene last November…so-called ‘generative AI’ has turned the markets on their heads.” Venture capitalists, “coming off the worst year in recent history,” have “redirected their dollars to AI upstarts. Meanwhile, the stock prices of the big tech names suspected to be the major beneficiaries of this often-called ‘revolutionary’ form of artificial intelligence have skyrocketed.” In 2023, “generative AI and machine learning start-ups raised about $39.4 billion.” The massive inflows are creating an ideal environment for fraudsters, and critics “are starting to wonder whether the latest technology is really transformational or merely evolutionary.”
Tags: Big tech, ChatGPT, Critics, Evolutionary, Fraudsters, Generative AI, Machine learning, Markets, Revolutionary, Skyrocketed, Start-ups, Stock prices, Technology, Transformational, VC
American Banker (August 9)
“Bad actors, unconfined by ethical boundaries, recently released two large language models designed to help fraudsters write phishing prompts and hackers write malware.” In the future, “banks and other companies may need to contend” with novel threats “as fraudsters master the use of large language models.” Companies will also need to consider many risks “when building and deploying their own large language models: theft of models; leaks of information (such as investing advice or personal transaction histories) by model outputs; and manipulation of models by poisoned data (such as open-source data that a malicious actor has intentionally manipulated to be inaccurate).”
Tags: Bad actors, Banks, Ethical boundaries, Fraudsters, Hackers, Investing, Large language models, Malware, Manipulation, Phishing, Risks, Theft, Threats, Transaction