Global AI under attack

Gemini a bit slower when Deepseek is under attack (January 2025)

In recent weeks, I've noticed a peculiar trend: the Gemini REST API, a service widely used by developers, has been showing noticeably higher latency. Even more troubling are the frequent "503 Overloaded" errors many of us have encountered. This degradation in performance seems to coincide with an attack on Deepseek, another prominent player in the artificial intelligence ecosystem. The notion isn't baseless; after some digging, I came across a discussion thread in which Logan Kilpatrick, who leads product for the Gemini API at Google, gave some insight: "Hey folks, we are not moving capacity away from 1.5 models right now. Looks like we might be getting hit by a DDoS attack. Will follow up as we mitigate this."
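In practical terms, the most reasonable client-side workaround for those "503 Overloaded" responses is to retry with exponential backoff. The sketch below is a minimal illustration, not an official client: it assumes the public v1beta generateContent REST endpoint, the gemini-1.5-flash model name, and an API key read from a GEMINI_API_KEY environment variable, so adjust those to your own setup.

    import os
    import time
    import requests

    # Minimal sketch: call the Gemini REST API and back off when it answers
    # "503 Overloaded". The endpoint path, model name and GEMINI_API_KEY
    # environment variable are assumptions; adapt them to your setup.
    API_KEY = os.environ["GEMINI_API_KEY"]
    URL = ("https://generativelanguage.googleapis.com/v1beta/"
           "models/gemini-1.5-flash:generateContent?key=" + API_KEY)

    def generate(prompt, max_retries=5):
        payload = {"contents": [{"parts": [{"text": prompt}]}]}
        for attempt in range(max_retries):
            resp = requests.post(URL, json=payload, timeout=60)
            if resp.status_code == 503:
                # Overloaded: wait 1s, 2s, 4s, ... before trying again.
                wait = 2 ** attempt
                print("503 Overloaded, retrying in %ds" % wait)
                time.sleep(wait)
                continue
            resp.raise_for_status()
            return resp.json()
        raise RuntimeError("Gemini API still overloaded after %d retries" % max_retries)

    if __name__ == "__main__":
        print(generate("Say hello"))

Backing off does not fix anything on Google's side, of course, but it keeps batch jobs from failing outright while capacity is degraded.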

There is a time correlation between Deepseek and Gemini capacity issues

The temporal relationship between the issues at Deepseek and at Gemini is hard to ignore. Both operate in similar technological ecosystems and may share infrastructure or intertwined services that would explain such a correlation. During the periods when Deepseek was struggling, Gemini also reported increased latency and server errors. This suggests that the assault on Deepseek might be indirectly impacting Gemini, or that both services are under simultaneous stress.

Timing can sometimes be a mere coincidence, but the parallel struggles of these services raise questions about their reliability and vulnerability. As developers, users, and industry watchers, we need to delve deeper into understanding the shared or intersecting systems that could lead to such cascading service failures.

I think there is a dirty war between all the big players in the AI field

The competitive landscape of AI has always been fierce, but the current occurrences hint at something beyond the usual business competition—perhaps a more clandestine war is brewing. With the financial stakes and potential power balance hanging on AI dominance, one can only speculate about the lengths companies or undisclosed entities might go to gain an upper hand.

The coincidence of Gemini and Deepseek facing capacity issues simultaneously leads me to consider the possibility of targeted disruptions aimed at destabilizing players in the AI race. Cyber attacks, especially DDoS attacks as suggested by Kilpatrick, are prevalent tactics used to handicap operational capacities. While there is no concrete proof pointing fingers, the indirect evidence raises enough suspicion that it cannot be dismissed.

As developers and technology enthusiasts, we need to stay informed and vigilant about these incidents. The war for AI supremacy will not only shape the trajectory of the technology but also dictate how the global systems that rely on it adapt to such disruptions.
