OpenAI has quietly changed its terms to allow it to work with militaries and on warfare-related applications. This is a worrying development, especially since OpenAI has scraped a large amount of publicly available data from across the world. While it says its technology should not be used for harm, that doesn't mean it can't be used for purposes that aid military operations and warfare.
Now, how does the use of AI in military operations and warfare impact India? I don't want to be alarmist here, but if this is an indication of intent, some thoughts:
- No data protection: India's data protection law exempts publicly available personal data. Its use in surveillance, training, and strategic planning, including microtargeting of specific people, is therefore possible. We made this mistake with the data protection law.
- Generative AI can be used to analyse large datasets to detect vulnerabilities and identify strategies for cyberattacks.
- Data of identifiable security personnel is particularly susceptible, for example location data of security personnel on patrol. Remember the Strava data leak? Strava held patrol data in conflict areas because soldiers were using the app, and such data can be used for simulation exercises and mission planning.
- Such data can be used to develop and train autonomous reconnaissance systems.
- Facial data can be used for target recognition.
So what can India do?
- Amend or issue rules restricting the usage of publicly available personal data for AI, or for military and warfare purposes.
- Discourage the use of foreign AI tools by military and defence personnel.
- Direct more resources towards developing Indian AI (we're already doing a good job).
- Identify what data of Indian citizens OpenAI has collected. Subject the company to technical scrutiny with respect to its datasets, with the option of forcing it to delete datasets that could compromise Indians.
Our openness cannot be our weakness. Again, what I'm writing here is meant to be something to think about. We don't have clarity on OpenAI's intent, and we really shouldn't trust blindly. The onus is on them to reassure users and the countries where their tools are in use, and on our government to seek information to ensure we're protected.