Deepfakes and elections

Two things stood out for me from our discussion on deep fakes and democracy on Wednesday:
Firstly, Gautham Koorma pointed out that detecting deep fakes becomes much more difficult once they’re published on social media, because platforms transcode the content. With even minor modifications, comparing hashes becomes a fruitless exercise. This means that, on the whole, deep fakes on social media cannot be detected with 100% accuracy, even when the deep fake is being compared against an existing dataset. Holding safe harbor to ransom is thus not the right approach.
Secondly, where’s the accountability for the perpetrators? Shivam Shankar Singh emphasised that even where the Election Commission filed cases against politicians for disinformation, and those were few and far between, they were eventually dropped or withdrawn. On social media, as elsewhere on the internet, attributing accountability is tough, especially when surrogates are used for disinformation. The way regulation is panning out, it appears there will be no penalties for misinformation, or indeed deep fakes: only for platforms that fail to take them down. Another critical point Shivam made is that political parties flood the Election Commission with complaints, and it becomes very difficult for it to deal with them all. There’s a clear capacity issue. Passing the buck to online platforms is easier, because it’s no longer the EC’s problem.
https://www.youtube.com/live/s_pbiust83Y?si=1KXm5APOdjqv5-EH
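The transcoding point above can be sketched minimally: a cryptographic hash changes completely when even one bit of a file changes, so any re-encode by a platform breaks exact-hash matching against a known dataset. (Illustrative Python with made-up bytes; robust matching would need perceptual hashing instead, which transcoding also degrades.)

```python
import hashlib

# Simulate an "original" video file and a transcoded copy that differs
# by only a single bit (real transcoding changes far more of the file).
original = b"\x00\x01\x02" * 1000
transcoded = bytearray(original)
transcoded[0] ^= 0x01  # flip one bit

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(transcoded)).hexdigest()

print(h1 == h2)  # False: exact-hash matching breaks on any re-encode
```

This is why hash-matching a social media upload against a dataset of known deep fakes fails as soon as the platform re-compresses the file.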

On OpenAI and warfare

OpenAI has quietly changed its terms to allow it to work with militaries and for warfare. This is a worrying development, especially since OpenAI has scraped a large amount of publicly available data from across the world. While it says its tech should not be used for harm, that doesn’t mean it can’t be used for purposes that aid military and warfare.

Now how does usage of AI in the military and warfare impact India? I don’t want to be alarmist here but IF this is an indication of intent, some thoughts:

  1. No data protection: India’s data protection law has an exemption for publicly available personal data. Its usage in surveillance, training and strategic planning, including microtargeting of specific people, is possible. We made this mistake with the data protection law.
  2. Generative AI can be used to analyse large datasets to detect and identify vulnerabilities and strategies for cyberattacks.
  3. Data of identifiable security personnel is particularly susceptible. For example, location data of security personnel on patrol. Remember the Strava data leak? Strava had patrol data in conflict areas because soldiers were using it, and such data can be used for simulation exercises and mission planning.
  4. It can be used to develop and train autonomous reconnaissance systems.
  5. Facial data can be used for target recognition.

So what can India do?

  1. Amend or issue rules restricting the usage of publicly available personal data for AI, or for military and warfare purposes.
  2. Discourage the usage of foreign AI tools by military and defence personnel.
  3. Put more resources towards developing Indian AI (we’re already doing a good job).
  4. Identify what data of Indian citizens has been collected by OpenAI. Subject it to technical scrutiny with respect to datasets, with the option of forcing OpenAI to delete datasets that could compromise Indians.

Our openness cannot be our weakness. Again, what I’m writing here is meant to be something to think about. We don’t have clarity on OpenAI’s intent, and we really shouldn’t trust blindly. The onus is on them to assure users and the countries where their tech is in use, and on our government to seek information to ensure we’re protected.

Link: https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/

Two ideas for social media platforms to protect users receiving targeted abuse

A journalist from another publication just rang me up asking what can be done in India to address online abuse against the LGBTQI community. I don’t have community-specific solutions, but I do feel there are two product changes social media platforms can make to give users the agency to protect themselves against targeted abuse online:

  1. Safe Mode: Sometimes targeted abuse campaigns last an hour, sometimes a week. A large part of a user’s hurt comes from abuse they are unable to ignore because they’re being tagged or their posts are being commented on. Platforms should offer a safe mode that users can enable with a single click, which prevents others from tagging them or commenting on any of their updates. Essentially, it gives them the freedom to post while blocking out the world, so that they’re not exposed to hateful conduct.
  2. Targeted abuse reporting: Reporting abuse on platforms is often a struggle for users. Twitter now involves multiple tedious steps, as if it intends to discourage reporting. Imagine having to report each abusive account one by one, highlighting each abusive update from that account; it’s painful for people to even read such updates. One approach platforms can take is to let users report mass targeting and define the period during which they were being targeted, so that all updates and accounts targeting them in that window can be scrutinised for abuse.
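As a sketch of the second idea: the platform lets the user pick a time window and bundles every account that mentioned them in that period into a single report for review. (Illustrative Python only; the `Mention` structure, field names, and function are assumptions, not any platform’s actual API.)

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Mention:
    author: str    # account that posted the mention
    text: str
    at: datetime

def collect_targeted_window(mentions, start, end):
    """Gather every mention within a user-chosen window, grouped by
    posting account, so one report covers the whole campaign."""
    report = {}
    for m in mentions:
        if start <= m.at <= end:
            report.setdefault(m.author, []).append(m.text)
    return report

mentions = [
    Mention("troll_a", "abusive post 1", datetime(2024, 1, 10, 9, 0)),
    Mention("troll_b", "abusive post 2", datetime(2024, 1, 10, 9, 5)),
    Mention("friend", "unrelated, outside window", datetime(2024, 1, 1, 8, 0)),
]
report = collect_targeted_window(
    mentions, datetime(2024, 1, 10, 0, 0), datetime(2024, 1, 11, 0, 0)
)
print(sorted(report))  # ['troll_a', 'troll_b']
```

The point of the design is that the user reports the campaign once, and the platform’s reviewers, rather than the victim, do the work of walking through each account and update in the window.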

Also, it’s important for platforms to have abuse reviewers who understand local languages and local lingo, because abuse is often phrased in a way that someone who doesn’t understand the local culture will not be able to comprehend.

On the Broadcast Bill and its impact on online streaming

I was on an excellent panel discussion a couple of days ago organised by CCAOI, about India’s Broadcast Bill which seeks to change the way online streaming services like Netflix and Amazon Prime Video are regulated, and create a framework for regulating online news in India.

The Bill is regressive and will restrict freedom of speech in both online video and online news, especially by forcing content platforms and news publishers to create an internal body to censor content before it’s made available to the public. It seeks to regulate online content “in public interest”, which is an unreasonable and unconstitutional restriction on freedom of speech.

A few things to note…

  1. The Broadcast Bill exists because the government needs to legitimise the regulation of streaming services included in the IT Rules 2021, which is not backed by law. This is probably to address the court cases challenging the legality of the IT Rules, because the IT Act doesn’t enable such regulation of online streaming.
  2. MIB wants to ensure that it retains all jurisdiction over streaming services and online news, and that other government departments like the Ministry of Health don’t start creating their own regulations. There was also a version of the Telecom Bill wherein the Department of Telecom tried to take over jurisdiction of online streaming.
  3. The Broadcast Bill will end up replacing the existing regulatory framework for online news and streaming
  4. Online streaming is not broadcast. It is private viewing on personal devices, and content is pulled by the user. This is censorship of private video consumption, and there’s no valid reason for this regulation.
  5. Content Evaluation Committees will act as private censors within media companies, and will apply to online news.
  6. There are 67 areas for which rules will be made. The Bill gives the executive disproportionate freedom to make rules; expect regular rule creation from MIB if this Bill is passed. We’re seeing this with the Data Protection Bill as well, and will see it with the Digital India Act.
  7. Expect fewer documentaries about India to be released in India once this passes, because documentaries will be screened for how they represent facts and India.
  8. The usage of the phrase “Public Interest” to restrict content is unconstitutional: it is not among the permissible restrictions on free speech under the Constitution of India.
  9. The Bill is lazily worded. It extends several provisions meant for broadcast creative content to news content.
  10. We’re already seeing overreach and censorship from the “Self Regulatory Organisations” in place, even though the IT Rules don’t enable such censorship.
  11. Try applying a broadcast censorship code to online streaming, and many of your favourite shows will become unwatchable because of the nanny state.

The real problem with AI fakery

As we hurtle towards 2024, I write in today’s Times of India about the risk of it being India’s Deep Fake Elections. A few points:

1. The rise of deep fakes presents both exciting and concerning implications for entertainment and societal discourse. From resurrecting iconic stars in movies, and having your favorite singers sing songs they never did, to enabling multi-language political campaigns, the technology’s potential is profound.

2. However, the same technology that enables creative applications in advertisement and translation also holds the potential for malicious use, as seen in the spread of fake videos for fraud, hate speech, and political manipulation. The impending “Deep Fake Elections” in India in 2024 highlight the urgent need to address the dissemination of manipulated content, which can significantly impact the outcome of crucial events.

3. The government’s efforts to combat deep fakes last month are commendable, but they face significant challenges, including the difficulty of discerning intent in deep fake content (what if it’s satire or a fact-check?) and the scale of content moderation on online platforms.

4. The proposal of mandatory watermarking for AI-generated content won’t be foolproof, as tools for removing watermarks can counteract these measures, leading to an evolving arms race between detection and evasion technologies.

5. The spread of deep fake content in encrypted platforms like WhatsApp and Signal leads to a problematic push for removing end-to-end encryption, which impacts our privacy.

6. Attributing liability to platforms is problematic, since detection and removal can never be 100% accurate, and no platform can survive the ensuing liability.

7. We need nuanced strategies for mitigating the harmful effects of deep fakes, encompassing public awareness initiatives, R&D for detection technologies, and collaborative efforts between the government, platforms, and academia, and platforms should be required to remove deep fakes on a “best efforts” basis. A balanced approach, avoiding over-regulation to protect internet freedom, should be pursued, fostering a synergy between technological advancements and democratic values.

8. While the government’s actions are important, a public consultation could facilitate the identification of additional measures to address the challenges posed by deep fakes and ensure a holistic societal response.

The full article is at https://timesofindia.indiatimes.com/…/the-real-problem…/

Digital Payments mess: On the impact of recurring payments and tokenisation

I spoke with Suprita Anupam of Inc42 for a story he did on the impact of the Reserve Bank of India’s recurring payments and tokenisation on consumers and businesses. It’s a thoroughly researched article, and I highly recommend reading it.

My comments on the issue:

  • Recurring transactions, both Indian and global, fail often: “Both as a consumer and as a business user of global digital services, I’ve found that recurring transactions fail often. We’ve also had situations where some global services no longer accept Indian credit cards, post the RBI guidelines related to recurring payments, as well as the tokenisation guidelines. As a consumer, I have had to re-enter my credit card details and enable payments for a few Indian services as well, and that’s an inconvenience I wish I didn’t have to deal with.”
  • Lack of a proper consultation process creates such issues: “The problem I have with the Reserve Bank of India is that they don’t appear to take into consideration the impact of their regressive regulations on merchants and consumers, and the increasing inconvenience this leads to. There was no impact assessment, no public consultation: just a diktat with a deadline, which eventually got pushed repeatedly because of a lack of feasibility.”
  • Why only credit cards? “We also have to take into account that the RBI has enforced these guidelines on credit cards, which have better customer service, fraud detection and accountability, and yet they’ve failed to do anything to enforce accountability in the case of UPI, especially in terms of fraud detection and prevention. There’s a saying: ‘if it ain’t broke, don’t fix it.’ In the case of the RBI, they broke something that was working (credit card payments) and have failed to fix something that is crying out for regulatory intervention: UPI.”