Large Language Models (LLMs) like OpenAI’s ChatGPT or Google’s Bard are doing some incredible things. They can write like humans, digest huge amounts of information, and help with tasks across writing, finance, education, and more. But as more companies adopt AI, there are growing concerns about security, data privacy, content ownership, and related issues. One possibility that isn’t being talked about enough in this context: running LLMs as a public service at the national, state, or even city level. That way, we could create a safer, well-managed AI option that works for everyone, not just for private companies. (Note: although imprecise, we’ll use the terms “LLM” and “AI” interchangeably here.)
One of the main benefits of publicly owned LLMs is that they could improve data privacy and transparency. In a world where data is enormously valuable, it’s important to make sure our information is kept safe and used responsibly. With a public LLM, the data would belong to the people, not to private companies: anything the AI generates or ingests would be owned by the public and managed by the relevant government. That would lower the chances of people’s data being misused for profit-driven reasons.
Beyond data privacy, public LLMs could make AI-powered services more accessible. This democratizing, equalizing aspect is a major selling point of the idea. By offering free or low-cost access to AI services, we could help level the playing field, letting more people and communities enjoy the benefits of AI without being priced out. In the future, the public could get important services, like personalized learning, healthcare advice, and legal help, at little or no cost.
What Can Public AI Do For Us?
An area of growth in AI that people are excited about, and that public LLMs would be ideal for, is education. Publicly accessible LLMs could change the way students learn by offering personalized experiences. These AI-driven systems could assess students’ strengths, weaknesses, and learning styles to build custom lesson plans and offer one-on-one tutoring. AI tutors like Carnegie Learning’s MATHia and OpenAI’s ChatGPT already show this is possible, and a public service could extend it to more subjects and more people. Khan Academy is reportedly very interested in AI’s potential for education as well.
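As a toy illustration of the adaptive loop described above, here is a sketch, with invented skill names and a deliberately simple mastery rule, of how a tutor might pick the next thing to practice: estimate mastery of each skill from past answers and target the weakest one. A real public tutor would use an LLM for the assessment, not this arithmetic.

```python
# Hypothetical sketch of an adaptive-tutoring step. Skill names and the
# mastery rule are invented for illustration only.

def mastery(history):
    """Fraction of correct answers per skill (0.0 if unattempted)."""
    scores = {}
    for skill, attempts in history.items():
        scores[skill] = sum(attempts) / len(attempts) if attempts else 0.0
    return scores

def next_skill(history):
    """Choose the skill with the lowest estimated mastery to practice next."""
    scores = mastery(history)
    return min(scores, key=scores.get)

history = {
    "fractions": [1, 0, 0],    # 1 = correct answer, 0 = incorrect
    "decimals": [1, 1, 1],
    "percentages": [1, 0, 1],
}
print(next_skill(history))  # fractions
```

The point of the sketch is the loop, not the scoring: assess, pick the weakest area, drill it, reassess.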
LLMs could also improve public access to healthcare. For example, public LLMs could give people medical information and advice: they could consider an individual’s symptoms and medical history, then offer personalized suggestions or direct them to the right healthcare services. The UK’s National Health Service (NHS) already gives non-emergency medical advice through its NHS 111 service, and public AI could expand on this idea by offering more detailed and personalized help.
Mental health support and counseling through public LLMs could make a real difference, especially at a time when demand for these services is at an all-time high and funding is scarce. AI-driven chatbots like Woebot and Tess show that LLMs can provide caring, helpful mental health support. A publicly funded LLM could make these services easier to reach, ensuring that everyone has access to the mental health resources they need.
Public Safety and Civic Engagement
LLMs could also help with public safety and getting people more involved in their communities.
One way is by improving how we handle emergencies and natural disasters. Public LLMs could give real-time information and help to people when they need it most. Imagine AI-powered chatbots that guide users through emergencies, like evacuations or first aid steps. They could also help organize disaster relief by figuring out what resources are needed and where volunteers should go. We’ve seen how technology like Google’s Crisis Response and the American Red Cross’s Emergency App can help in disasters, and adding LLMs could make them even better.
AI is already being used by communities and law enforcement to help prevent crime, and this could be expanded to improve public safety without necessarily sliding into a dystopian surveillance state. People could easily and securely report incidents through AI-powered chat systems that collect the important details, suggest the best response, and forward the information to the right authorities. LLMs could also analyze crime data and spot patterns, helping police allocate resources better and intervene before crime happens. Projects like ShotSpotter, which uses AI to detect and locate gunfire in real time, show how technology can help prevent crime.
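To make the incident-reporting idea concrete, here is a minimal, hypothetical sketch of the triage step: take a free-text report, infer a category, and route it to the right agency. The categories, keywords, and agency names are invented, and a real system would use an LLM classifier rather than this keyword stand-in, but the flow is the same.

```python
# Hypothetical incident-report triage: map a free-text report to an agency.
# Categories, keywords, and agency names are invented for illustration;
# a real system would replace the keyword match with an LLM classifier.

ROUTES = {
    "fire": ("fire department", ["fire", "smoke", "burning"]),
    "medical": ("emergency medical services", ["injury", "unconscious", "bleeding"]),
    "crime": ("police", ["theft", "assault", "break-in", "vandalism"]),
}

def triage(report: str) -> str:
    """Return the agency a free-text incident report should be forwarded to."""
    text = report.lower()
    for _, (agency, keywords) in ROUTES.items():
        if any(word in text for word in keywords):
            return agency
    return "non-emergency services"  # default queue for unmatched reports

print(triage("I smell smoke coming from the building next door"))
# fire department
```

However the classification is done, the surrounding design holds: collect the report, categorize it, and hand it to the right authority with a sensible default for everything else.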
On a more basic level, these tools could help people navigate their own governments. AI-powered chatbots on government websites could help users find the services they need, answer common questions, and walk them through government processes. The U.S. Citizenship and Immigration Services’ virtual assistant, Emma, is an example of AI already answering questions and giving guidance on government procedures.
The information flows the other way too: public AI could make it easier for people to share their thoughts and ideas on different issues with government. AI-driven systems could collect everyone’s input, look at the data, and present it to decision-makers in an organized way. This could lead to more democratic and fair policymaking. Tools like Pol.is and CitizenLab are already looking into how technology can help with public participation, and using LLMs could take it even further.
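A rough sketch of that aggregation step, assuming invented topic keywords and sample comments: tally which themes come up in free-text public input so they can be presented to decision-makers in order of frequency. A production system would use LLM-based topic analysis rather than a keyword tally, but the shape of the pipeline is similar.

```python
# Hypothetical aggregation of public comments into ranked themes.
# Topics, keywords, and comments are invented for illustration; an LLM
# would do the topic assignment in a real deployment.
from collections import Counter

TOPICS = {
    "transit": ["bus", "train", "bike lane"],
    "housing": ["rent", "zoning", "housing"],
    "parks": ["park", "playground", "trees"],
}

def tally_topics(comments):
    """Count how many comments touch each topic, most-mentioned first."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for topic, keywords in TOPICS.items():
            if any(k in text for k in keywords):
                counts[topic] += 1  # at most once per comment per topic
    return counts.most_common()

comments = [
    "We need more bike lanes downtown",
    "Rent is too high near the station",
    "Please plant more trees in the park",
    "The bus schedule is unreliable",
]
print(tally_topics(comments))
```

The output, a ranked list of themes with counts, is exactly the kind of organized summary the paragraph above imagines handing to policymakers.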
Challenges and Considerations
Those are some of the benefits and potential uses of public LLMs; now let’s look at the challenges of making these ideas a reality.
First, there’s the issue of money and development. Public LLMs would need a lot of financial support to get off the ground. Governments could look into different ways to pay for these projects, like public investment, working together with private companies, or getting grants. Teaming up with companies like OpenAI or research institutions could also be a great way to share resources and knowledge.
Next, there’s the matter of keeping public LLMs safe and preventing misuse. Public data systems face the same threats and risks as their private cousins, and we would also need to protect public LLMs from people who want to use them for bad ends. That means strong security measures: restricting access to authorized users, encrypting data, and continuously monitoring for anything unusual.
We need to make sure that public LLMs don’t spread false information. This might involve having rules for what kind of content the AI can generate, letting users help decide on these rules, and creating ways to check that the information is correct.
And there is the issue of laws and oversight, an area that is already heating up. If public AI becomes common, governments will need to create laws that deal with things like data privacy, who owns the content created by AI, and who is responsible if something goes wrong. These rules can help make sure public LLMs are used in a way that respects people’s rights and values. There would need to be ways to keep an eye on these projects and enforce regulations. This might involve creating special committees, doing regular audits, and getting citizens involved in making decisions.
All of these challenges and concerns and more need to be addressed, but they can be, and the potential benefits suggest the effort is more than worth it.