The emergence of artificial intelligence (AI) and surrounding technologies has made the need for oversight of their ethical use more pressing than ever. AI-powered tools are coming to market in droves as tech leaders around the world race for first-mover advantage.
As AI grabs public attention, it is crucial that proper policies are enforced to tap the full potential of these emerging technologies and ensure they contribute positively to the economy and societal progress.
Despite the glaring need for regulations around AI technologies, the United States has yet to enact laws ensuring responsible use by all parties involved, including software manufacturers, business organizations, and the general public.
Lawmakers in the European Union (EU) have been working on laws around AI and related technologies like machine learning (ML) for a few months now, although TechCrunch confirms those initiatives started a couple of years ago.
At the time, the AI Act (as it is now known) was described as “an objective and measured approach to innovation and societal considerations.”
Meanwhile, executives of tech organizations and the US government have resolved to collaborate on a unified vision for the responsible use of AI.
Mitigating AI Risks – The White House
The launch of OpenAI’s ChatGPT late last year caught many by surprise and captured the attention of business leaders, tech innovators, and the public alike. Interest in generative AI capabilities exploded as people rushed to push the boundaries of the technology.
The silence around AI broke as technology organizations like Google and Microsoft revealed a slew of tools they were, or had been, working on. Google vigorously pushed forward with its ChatGPT rival, Bard, which it released to the general public last week.
Microsoft has been taking full advantage of its multi-billion-dollar investment in OpenAI to sprinkle AI across its app empire, including the Edge browser, Microsoft 365, and the SwiftKey keyboard for both iOS and Android.
Nonetheless, the government cannot afford to sit idly by as artificial intelligence goes mainstream.
Most people are expected to use AI responsibly, but leaders like Elon Musk are urging regulation of the technology to reduce potential risks. These risks stem from people’s tendency to experiment, probe systems, and share both accurate and false information. There are also concerns about political manipulation, AI’s effects on privacy, cybersecurity threats, and fraud, all of which strengthen the case for oversight.
Having said that, the Biden Administration, through official White House communication channels, released a list of actions aimed at promoting “responsible American innovation in artificial intelligence (AI)” and protecting “people’s rights and safety.” Three of the actions the White House listed were:
- Injecting funds to support ethical AI research and development in the United States.
- Conducting evaluations of current generative AI technologies.
- Implementing measures to guarantee the U.S. Government sets a standard in addressing AI-related challenges and capitalizing on its potential.
Injecting New Funds to Support AI R&D in America
The White House says it will allocate $140 million to the National Science Foundation to launch seven new National AI Research Institutes, a drop in the ocean compared with the funds raised by private business organizations.
The US government is not taking AI funding seriously, falling behind other countries like China, which made AI a national strategic priority back in 2017.
There is a pressing need to strengthen the impact of these investments by promoting academic partnerships for workforce development and research.
The government should aim to support AI centers in collaboration with leading academic and corporate institutions that are already trailblazers in AI research and development. This will propel innovation and generate new opportunities for businesses using AI capabilities to improve service delivery.
Cooperation between AI centers and preeminent academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, helps to connect the dots between theoretical knowledge and practical implementation.
It is possible to achieve this milestone by assembling experts from academia, industry, and government to work jointly on pioneering research and development projects with tangible applications.
And by forming alliances with major corporations, these centers can help businesses integrate AI seamlessly into their operations, enhancing efficiency, reducing costs, and delivering better outcomes for consumers.
These centers will also help train the next generation of AI professionals by providing students with access to the latest technology in the field. By working on real-world projects, students can learn and receive guidance from top experts in the AI industry.
By being proactive and collaborative, the United States government will be better positioned to shape a future enhanced by AI. The government needs to be involved in AI research and development to ensure the technology enhances productivity rather than replacing humans in their jobs.
That way, society at large can benefit from opportunities birthed by the emergence of this powerful technology.
Conducting Evaluations of Existing Generative AI Technologies
The importance of model assessment in ensuring existing AI models are accurate, reliable, and free of bias cannot be overstated. This is critical for their deployment in real-world applications.
Take a generative AI urban planning model trained on data from cities with a history of underserving their poor populations. Such a model is likely to keep reproducing those same outcomes.
The same could be true of lending bias, now that more and more financial service providers are tapping AI algorithms to make lending decisions.
If AI models are trained on data that discriminates against certain populations, those groups will be systematically denied access to loans, widening the divide between the rich and the poor and deepening other economic and social disparities.
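To make the lending example concrete, here is a minimal sketch of one way an auditor might surface this kind of bias: comparing a model’s approval rates across demographic groups, a simple demographic-parity check. The dataset, column names, and threshold below are hypothetical assumptions for illustration, not part of any official evaluation framework.

```python
# Minimal, hypothetical fairness check for a lending model's decisions.
# All data, column names, and the 0.2 threshold are illustrative assumptions.
import pandas as pd

# Pretend audit log: the model's approve/deny decisions alongside a
# protected attribute (here, an anonymized demographic group label).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group (demographic parity compares these directly).
rates = decisions.groupby("group")["approved"].mean()
print(rates.to_dict())  # e.g., {'A': 0.75, 'B': 0.25}

# A large gap between groups is a signal, though not proof, that the
# training data may have encoded historical discrimination.
gap = rates.max() - rates.min()
print(f"Approval-rate gap: {gap:.2f}")
if gap > 0.2:
    print("Potential disparate impact; review training data and features.")
```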
While these are just examples of possible bias in AI, they show why the government must stay in touch with AI technologies and the techniques used to train and build them, no matter how quickly the field moves.
In an effort to tackle bias in artificial intelligence, the administration recently revealed a new collaborative model evaluation initiative to take place at the AI Village at DEF CON 31.
This event serves as a meeting ground for AI researchers, specialists, and aficionados to delve into the most recent developments in AI and machine learning.
Key industry stakeholders, such as Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI, are participating in this cooperative endeavor, utilizing a platform provided by Scale AI.
Furthermore, the assessment will evaluate the extent to which these models adhere to the principles and guidelines established in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.
The initiative marks a significant step forward, as the administration actively collaborates with industry stakeholders and harnesses the knowledge of technical experts in these corporate AI laboratories.
US Government’s Measures Addressing AI-related Challenges
Concerning the Biden-Harris administration’s third action on generative AI, ensuring that AI risks are mitigated and the technology’s power is harnessed to its full potential, the Office of Management and Budget is to draft a policy governing the use of AI systems in America.
Think of it as a draft open to public comment, although there are no preset timelines or details on when these policies will be released. In the meantime, an executive order on racial equity issued earlier this year may take center stage.
The executive order includes a provision directing government agencies to use generative AI and automated systems in a manner that promotes equity.
Nevertheless, releasing policies alone will not be enough; the government agencies concerned must attach incentives and repercussions so the policies have meaningful impact rather than serving as optional guidance.
For example, NIST security standards serve as crucial prerequisites for implementation by the majority of government agencies.
Non-compliance with these guidelines can, at the very least, be deeply embarrassing for those implicated and, in some government sectors, can result in punitive measures.
Therefore, to be effective, governmental AI measures should be established in a manner consistent with NIST standards or similar frameworks at the very least.
Moreover, the costs associated with complying with such regulations mustn’t inhibit innovation driven by startups.
One possible solution is to establish a framework where the cost of regulatory compliance is proportional to the size of the company, keeping startups in mind.
Additionally, as the government becomes the chief purchaser of AI systems and solutions, it is essential that its policies serve as the foundation for the development of such technologies.
By making adherence to these guidelines an explicit or implicit requirement for procurement (as with the FedRAMP security standard), these policies can reshape the industry landscape and support responsible innovation.
Development in AI and automated systems will continue to demand collaboration among stakeholders: founders, operators, venture capitalists, consumers, technologists, and regulators, all committed to being thoughtful and intentional when pursuing or engaging with these emerging technologies.
All parties involved in advancing AI technologies must not fly blind. They should be aware of the potential dangers of generative AI, and AI more broadly, even as global tech leaders and business organizations push for adoption across industries.
The goal should be to create new opportunities while mitigating possible challenges such as those regarding biases, privacy, and ethical considerations.
Thus, all involved parties must hold transparency in the highest regard, along with accountability and collaboration, to ensure AI is developed and used responsibly to benefit society.
This calls for investing in ethical AI research and development, engaging with diverse viewpoints and communities, and establishing clear guidelines and regulations for the responsible development and deployment of these technologies.