US Intelligence: 15,000+ Were Let Free From ISIS Detention Camp After Collapse
Another ‘win’ for America’s disastrous Syria policy, long predicated on overthrowing the Assad government and installing a ‘moderate’ Sunni regime – though it turns out Jolani’s bearded Hayat Tahrir al-Sham (HTS) militants are anything but…
“U.S. intelligence agencies have concluded that 15,000 to 20,000 people, including Islamic State affiliates are now at large in Syria, after an exodus from a camp that held jihadists’ families, U.S. officials familiar with the estimate said,” The Wall Street Journal reports Friday.
Who could have predicted that chaos, instability, and terrorism would come out of the CIA’s Operation Timber Sycamore? Well, we did, and every rational observer of the Syria situation.
A billion plus dollars and hundreds of thousands of lives after the decade-long proxy war, and this is all Washington has to show for it:
Security experts have long warned that the wives of Islamic State fighters were effectively raising the next generation of militants at the sprawling Al-Hol facility. Security at the camp fell apart in recent weeks after Syria’s government routed the U.S.-backed Syrian Democratic Forces, which had guarded Al-Hol for years, raising concerns about the release of people who might have become radicalized during the years held behind the razor wire.
The size of a small city, the camp in Syria’s eastern desert at one point held more than 70,000 people after U.S.-backed forces destroyed what remained of Islamic State’s self-proclaimed caliphate in Syria in 2019. At the end of 2025, more than 23,000 people were there, according to a report this week from the Pentagon’s Inspector General.
The US military is rapidly backing out of this region after the years-long occupation, effectively throwing the Kurds (SDF) under the bus, as HTS radicals move in and take control.
Given many analysts have pointed to HTS being ‘ISIS-lite’ to begin with, the following WSJ note is no surprise: “The vast majority have left Al-Hol after the Syrian government took control last month. Western diplomats in Damascus assessed that more than 20,000 people fled the camp in a matter of days earlier amid rioting and a surge of escape attempts.”
There were even reports that the ISIS prisoners greeted the government HTS troops rolling in as ‘liberators’. The new government is certainly not “fighting” Islamic State cells… quite the contrary:
As the Asayish is set to re-take the control of the Al hol camp and region (including everything north of it) – the Syrian Government Forces currently in charge of Al Hol Camp have opened the gates of the camp and are releasing+transporting as many ISIS members and families out… pic.twitter.com/2jQOR4hisI
And now the Washington blob is simply moving on to the next regime change operation, this time a little further east in Iran, which it turns out was a key Assad ally.
So in place of the secular nationalist Baath party (under the Assad family), the West has the literal founder of Syrian al-Qaeda as president of Damascus, letting ISIS prisoners and affiliates walk free.
The FDA has typically required two studies from companies seeking approval for most new drugs, although in recent years it has approved some drugs based on a single well-run trial.
“The FDA has demonstrated disease-by-disease flexibility and has granted approvals based on a single premarket study with confirmatory evidence. In some fields, such as oncology, single trials have supported the majority of drug approvals,” Dr. Marty Makary, the FDA’s commissioner, and Dr. Vinay Prasad, head of the FDA’s Center for Biologics Evaluation and Research, said in an article published on Feb. 18 by the New England Journal of Medicine.
“However, although we have exercised flexibility in the past, there remains confusion from manufacturers regarding settings in which a single trial will be accepted. Moving forward, we are announcing that a one-trial requirement will be the FDA’s new default standard. This reform is being rolled out synchronously with the agency’s postmarket initiative to collect robust data on all drugs and devices.”
The two-study standard for drugs dates to the early 1960s, when Congress passed a law requiring the FDA to review data from “adequate and well-controlled investigations” before clearing new medications. For decades, the agency interpreted that requirement as meaning at least two studies, preferably with a large number of patients and significant follow-up time.
The second study would, in theory, confirm that the first trial’s results weren’t a fluke and could be reproduced.
Beginning in the 1990s, the FDA increasingly began accepting single studies for the approval of treatments for rare or fatal diseases that companies often struggle to test in large numbers of patients. Over the past five years, roughly 60 percent of first-of-a-kind drugs approved each year have been cleared based on a single study.
Makary and Prasad said that the historical reliance on multiple studies “was intended to provide credible causal evidence that a therapy could improve clinical outcomes with acceptable safety in a world where biologic understanding was more limited than it is today.”
They later added: “In the modern world, as drug discovery becomes increasingly precise and scientific, the FDA considers not just effects on survival, but biochemical and intermediate changes that tell a complete biologic story: does this drug actually work? In this setting, overreliance on two trials no longer makes sense.”
The change will save drug developers money and reduce the time it takes to get drugs to market, the officials said. They expect more drug development in response.
Dr. Janet Woodcock, the FDA director who led the agency’s drug center for about 20 years before retiring in 2024, said the change makes sense and reflects the FDA’s decades-long move toward relying on one trial, combined with supporting evidence, for various life-threatening diseases, including cancer.
“The scientific point is well taken that as we move toward greater understanding of biology and disease we don’t need to do two trials all the time,” Woodcock said.
Dr. Reshma Ramachandran, assistant professor of medicine at the Yale School of Medicine, said in a post on X that it’s true most FDA approvals in recent years have been based on single, strong trials.
“But as the authors noted (& have for years!), patients are increasingly left with uncertainty of their effectiveness,” she wrote, “so why set a standard continuing the (bad) same old instead of demanding more?”
Makary and Prasad said that they reserve the right to demand additional testing if a trial has limitations or deficiencies.
“Instead of prioritizing finite reviewer time reading and assessing two or more pivotal trials, we will focus our energies on ensuring that the one clinical trial we require provides the most up-to-date and useful information for American patients,” they wrote.
Amazon Cloud Unit Taken Down Twice By Its Own AI Tools: Report
Amazon’s cloud-computing arm suffered at least two recent service interruptions linked to the use of its own artificial intelligence coding assistants, prompting some internal concerns about the company’s rapid deployment of autonomous software agents inside production environments.
In mid-December, Amazon engineers allowed the company’s Kiro AI coding tool to implement system changes that ultimately led to a roughly 13-hour disruption affecting one of the systems customers use to analyze the cost of AWS services, people familiar with the matter told the Financial Times.
The agentic tool – which is capable of taking autonomous actions on behalf of users – reportedly determined that the optimal remediation step was to delete and recreate a computing environment. AWS later circulated an internal postmortem examining the outage.
Employees said the December incident marked the second time in recent months that one of Amazon’s internally deployed AI development tools had played a central role in a service disruption. In both cases, engineers permitted the software agent to execute changes without requiring secondary approval, a safeguard typically mandated for manual interventions in production systems.
AWS accounts for roughly 60% of Amazon’s operating profit and is investing heavily in artificial intelligence tools designed to function as independent “agents” capable of carrying out tasks based on high-level human instructions. The company – along with other large technology firms – is also positioning such tools for sale to external enterprise customers.
Amazon said it was a coincidence that AI tools were involved in the disruptions and maintained that the same outcome could have resulted from conventional development software or manual intervention.
“In both instances, this was user error, not AI error,” the company said, adding that it had found no evidence that mistakes occur more frequently when AI tools are involved.
The company described the December interruption as an “extremely limited event” affecting a single service in parts of mainland China and said the second incident did not impact a customer-facing AWS system.
Neither disruption approached the scale of a broader AWS outage in October 2025 that lasted approximately 15 hours and temporarily took multiple customers’ applications offline – including services operated by OpenAI.
Employees said the company’s AI development tools are often treated as operational extensions of human engineers and are granted comparable system permissions. In the December case, the engineer involved had broader access than anticipated – a user access-control issue that Amazon said allowed the changes to proceed without appropriate review.
AWS introduced Kiro in July as a next-generation coding assistant designed to go beyond so-called “vibe coding,” in which developers rapidly assemble applications using AI-generated suggestions. Instead, Kiro was intended to produce code directly from formal specifications.
Prior to Kiro’s launch, AWS engineers relied on Amazon Q Developer, an AI-powered chatbot designed to assist with software development. Employees said that tool was involved in an earlier outage.
Some staff members said they remain skeptical about the reliability of AI-assisted coding for mission-critical tasks, particularly as Amazon has set internal targets encouraging 80% of developers to use AI tools for coding at least once per week. The company is said to be closely monitoring adoption rates.
Amazon said customer uptake of Kiro has been strong and that it wants both clients and employees to benefit from efficiency gains. Following the December incident, AWS implemented additional safeguards, including mandatory peer review procedures and expanded staff training.
Artificial intelligence (AI) is often framed as a force multiplier that can accelerate decision-making and produce valuable information. Meanwhile, AI deployment exercises have yielded mixed results, highlighting challenges such as systems stalling and unpredictable software outside controlled environments.
Some defense insiders believe that AI tools also introduce new safety and escalation risks if not developed, evaluated, and trained correctly.
Over the past year, U.S. military testing has demonstrated that some AI systems are failing in the field. In May 2025, Anduril Industries worked with the U.S. Navy on the launch of 30 AI drone boats, all of which ended up stuck idling in the water after the systems rejected their inputs.
A similar setback occurred in August 2025 during the company’s test of its Anvil counterdrone system. The resultant mechanical failure caused a 22-acre fire in Oregon, according to a Wall Street Journal report.
Anduril responded to the reported AI test failures, calling them “a small handful of alleged setbacks at government experimentation, testing, and integration events.”
“Modern defense technology emerges through relentless testing, rapid iteration, and disciplined risk-taking,” Anduril stated on its website. “Systems break. Software crashes. Hardware fails under stress. Finding these failures in controlled environments is the entire point.”
But some say the challenges AI faces in the national security landscape should not be taken lightly. Problems such as brittle AI models and building on the wrong kind of training data can create systems that do not perform as expected in a battlefield scenario.
“This is why military-grade AI, purpose-built for national security use cases and the warfighter, is critical,” Tyler Saltsman, founder of EdgeRunner AI, told The Epoch Times.
Saltsman’s company has active research and development contracts with the U.S. military. He said AI systems are not typically designed for warfighting.
“[AI models] may choose to refuse or deflect certain questions or tasks if those requests do not comply with the AI system’s own rules,” Saltsman said. “A model refusing to provide guidance to a soldier in combat or giving biased responses rather than operationally relevant responses can have life-or-death implications.”
Scenarios such as the one Saltsman described can start with the wrong kind of training data.
Jeff Stollman, who has worked with defense contractors as an independent consultant and is familiar with a range of products and services used by the military and intelligence communities, said much of “the data needed has not been collected historically.”
“And because internet data is typically of limited value and internet-based models can’t be run on isolated classified networks, military and intelligence users will need to collect their own new data,” Stollman told The Epoch Times.
He said there are three categories of training data used by the defense and armed forces communities, all of which have different hurdles.
Offering an example of a sustainment—or maintenance—data challenge, Stollman said that collecting this type of information typically requires adding sensors that can record the data needed to predict malfunctions and failures.
“This includes measuring temperature, vibration, friction, the amount of wear on various parts,” he said. “This is an expensive undertaking. Sensors aren’t free. They add weight and volume to space and weight-constrained platforms such as aircraft and spacecraft.”
This type of data collection is offloaded to a database because of limited onboard computer resources. Although that sounds logical at first, the problem is the time it can take.
“For platforms like ships and submarines, windows for transmission of such data, which might give away the position of the platform, are limited,” Stollman said. “As a result, data may not be accessible for months at a time.”
Another challenge of AI integration is reliability. Issues such as AI “hallucinations” and poor decisions can be amplified in adversarial environments.
“The most dangerous assumption is that AI can distinguish between legitimate inputs and adversarial manipulation,” Christopher Trocola, founder of ARC Defense Systems, told The Epoch Times.
He cited the July 2025 experiment in which AI-powered, cloud-based platform Replit’s “vibe coding” ended with an AI assistant panicking and trying to cover its tracks. The AI coding assistant reportedly deleted a live production database, fabricated thousands of fake records, and created misleading status messages.
“Military applications amplify these vulnerabilities catastrophically,” Trocola said.
He explained that three critical AI assumptions can fail under adversarial pressure: prompt injection resistance, hallucination control, and intent recognition.
This is when adversaries can manipulate AI through carefully crafted inputs designed to override instructions, generate false information, or indicate that malicious inputs are benign.
“This represents what’s known as distribution shift: AI trained in controlled environments failing catastrophically when deployed in real-world adversarial contexts,” Trocola said.
Saltsman said this highlights the importance of building AI models with military applications in mind.
“Most commercial AI systems are black boxes,” he said. “We don’t know what data trained the models. We don’t know what guardrails or biases were baked into the models. And we don’t know if our data is truly secure. All of this is highly problematic in national security settings.”
Risk Evaluation
Stollman noted that generative AI—which is already used in U.S. intelligence and defense—is “plagued” with problems such as hallucinations. However, it is also the most practical kind of AI for military operations.
“Generative AI is useful in areas such as reconnaissance, where it is necessary to identify installations and activities from data collected by various sensors: photos, radar, sonar, etc.,” Stollman said. “It can also be used to support decision-making.”
“For example, drones or missiles could be given autonomy of action to overcome signal jamming that prevents their being controlled remotely by humans,” he said. “But before such autonomy can be deployed, it is necessary to anticipate all the failure modes that could lead to undesirable consequences.”
Saltsman said he agrees that AI development and deployment must be carefully balanced with long-term risk evaluation.
“But make no mistake, we are in an AI war against China, and we must win the race,” he said.
He noted that if China’s AI models and hardware dominate the market, the United States could become dependent on the Asian nation for critical technologies.
“Therefore, it is a national security imperative that we accelerate the pace of AI development while also balancing the risks,” Saltsman said.
In 2025, the United Nations said that the use of AI in warfighting was no longer a hypothetical future scenario. The U.N. also stressed the risks and consequences of AI system failures in this capacity.
“Without rigorous safeguards, it risks undermining international humanitarian law,” the agency stated.
“Complex battlefields already test human judgment in distinguishing between combatants and civilians; for machines, the challenge is even greater, particularly in urban settings where civilians and fighters often intermingle.”
Trocola said he shares concerns that AI deployment in the military and defense sectors is outpacing risk assessment.
“Documented patterns suggest this creates systematic vulnerabilities,” he said. “Industry data shows [70 percent to 80 percent] of AI projects fail due to organizational readiness gaps.”
The Department of War AI Acceleration Strategy launched in January, which emphasizes rapid deployment to counter strategic competitors.
Contradictory Reports Of US Evacuating Troops From Exposed Qatar, Bahrain Bases
We previously went through some plausible escalation scenarios in the event President Trump orders military strikes on Iran. The problem with a supposedly ‘limited’ attack is that Tehran’s response could be devastating, targeting American military bases across the region. This would then turn into all-out war in the region.
Iran has every interest in establishing deterrence quickly, in order to get the US administration and its allies second-guessing the pursuit of full regime change, which would probably require ground forces and not just an aerial bombardment operation.
With the countdown ticking toward some level of military operations, and even as US officials claim diplomacy is still happening, it appears the Pentagon is taking drastic actions and precaution – ordering the evacuation of some ‘exposed’ Gulf bases. However, Fox’s Pentagon correspondent has cited US officials who deny this is happening – but the officials might simply be running cover.
The NY Timesreports Friday that “Hundreds of troops have now been evacuated from Al Udeid base in Qatar, Pentagon officials said, and there have been evacuations at the cluster of U.S. bases in Bahrain that house the Navy’s 5th Fleet.”
There could be more such evacuations or at least personnel reductions to come, as “There are also American troops at bases in Iraq, Syria, Kuwait, Saudi Arabia, Jordan and the United Arab Emirates,” the same report notes.
It must be remembered that Iran launched a “devastating and powerful” missile attack on the Al Udeid last June, in retaliation for the 12-day Israeli assault, and US bombing campaign of Iranian nuclear sites.
US military planners are concerned even by how close the two US carriers in the region get to Iran’s feared ballistic missiles:
A second American military official said that U.S. Central Command is keeping two aircraft carriers deployed in the Middle East at a considerable distance from Iran, to protect them from becoming a target.
Officials also noted that it was difficult to hit an aircraft carrier traveling at speed with a ballistic missile. In addition, the carriers are escorted by destroyers, which have the ability to shoot down ballistic missiles.
There’s also the possibility that Iran could send drone swarms on US locations in the event of an unprovoked attack. The last several years have seen small drones get better and more effective at evading sophisticated anti-air defenses, though these often have a more limited range.
Fox says there’s contradictory reports and that the NYT Times claims are false…
According to a well placed US official: the US has NOT evacuated hundreds of US troops from Al Udeid air base in Qatar, nor has it evacuated bases in Bahrain, home to the US 5th Fleet. That reporting is false.
Iran this week stated to the United Nations in a formal letter that the Islamic Republic “will not initiate any war” while stressing that the United States would bear “full and direct responsibility for all the unforeseen and uncontrollable consequences” resulting from an attack against it.
The letter, issued Thursday, spelled out that US bases, facilities, and assets would be Iran’s “legitimate targets” if the US follows through on its threats. So it seems Pentagon decision-makers are taking this seriously, and are now likely moving at least some US personnel out of harm’s way.
“The bottom line is this: voter ID is not controversial in this country,” Harry Enten, the chief data analyst for CNN, recently reported. Nor is it controversial in virtually any other country in the world. Yet despite massive support among both Democrats (71%) and Republicans (95%), only one Democratic member of the House and one in the Senate are supporting the SAVE Act. Unless seven more of the 47 Senate Democrats step forward, their filibuster will kill the bill.
Democrats argue that requiring free voter photo IDs – even when the ID itself costs nothing – harms eligible voters by creating practical barriers to casting a ballot. They contend that blacks would be especially hard hit. Interestingly, every country in Africa requires government-issued identification to vote.
All of these countries have lower per-capita incomes than the United States. If citizens in those nations can obtain the necessary identification to vote, why would American Hispanics and blacks be unable to do the same?
While 83% of American adults support requiring government-issued photo identification to vote, support is also strong among the very groups Democrats claim would be harmed: 82% of Hispanics and 76% of black Americans favor the requirement.Those figures suggest that most black and Hispanic Americans do not view obtaining a photo ID as the obstacle Democrats describe. Ten U.S. states have similarly strong photo ID requirements.
Democrats claim that women are disproportionately disenfranchised by voter IDs, but women are also strongly supportive of IDs and have exactly the same level of support as men.
Democrats argue that voter ID requirements disproportionately disenfranchise people with the least education and lowest incomes. Yet, ironically, survey results show that voters who did not graduate from high school were 27 percentage points more likely to support photo voter ID laws than those who attended graduate school. Similarly, individuals earning less than $30,000 per year were seven percentage points more likely to support photo ID requirements than those earning over $200,000 annually. The well-educated and higher-income individuals thus express more concern about the impact of ID laws on the less educated and lower-income groups than those groups express themselves.
But it isn’t just South American countries and all of Africa that require voter IDs to vote. Both of our neighbors, Canada and Mexico, require them, with Mexico also requiring a thumbprint. All 47 European countries, except parts of the United Kingdom, require a government-issued photo ID .
After widespread vote fraud, Mexico enacted major voting reforms in 1991. The government mandated voter photo IDs with biometric information, banned absentee ballots, and required in-person voter registration. Even though these changes made registration more difficult and eliminated absentee voting, turnout increased after the reforms took effect. In the three presidential elections following the 1991 changes, an average of 68% of eligible citizens voted, compared with 59% in the three elections before the reforms. As confidence in the electoral process grew, more citizens chose to participate.
Many countries in Europe and beyond have learned the hard way that fraud can result from looser voting regimes – and they have instituted stricter voting measures in direct response to it.
In Northern Ireland, where a bitter sectarian conflict fuels hardball electoral tactics, parties on all sides have engaged in what observers describe as “widespreadand systemic“ voter fraud. Both Conservative and Labour governments enacted reforms to curb it. In 1985, under the conservative Margaret Thatcher, the U.K. began requiring voters to show identification before receiving a ballot, but that measure did not solve the problem. In 1998, a Select Committee on Northern Ireland reported that people could “easily forge” medical cards – accepted as ID under the 1985 law – or obtain them fraudulently, enabling non-existent individuals to cast votes.
By 2002, the Labour government strengthened voter identification cards to make them far harder to forge and used the more secure IDs, along with additional rules, to stop people from registering multiple times. These anti-fraud measures immediately reduced total registrations by 11%, suggesting to Labour how extensive earlier fraud had been.
A study of vote fraud in Northern Ireland before the 2002 reforms interviewed Brendan Hughes, the former IRA Belfast commander. Hughes described how he operated a fleet of taxis to transport fraudulent voters from one polling station to another. He said they dressed volunteers in wigs, different clothes, and glasses, and noted that this practice continued for decades. He added that they typically used young women for voter impersonation because officials were more likely to let them vote if any doubt arose.
A 2002 survey of Northern Ireland by the U.K. Electoral Commission, conducted after the rules passed but before they went into effect, found that by a 64% to 10% margin, voters thought that vote “fraud in some areas is enough to change the election results.”
“I support the SAVE America Act because I believe in a fundamental principle: American citizens should decide American elections,” Henry Cuellar, the one House Democrat voting for the bill, noted. “That principle strengthens our democracy and protects the value of every vote.” There are currently seven states that require proof of citizenship just as required in the SAVE Act (e.g., birth certificate, passport, tribal documents, naturalization papers). Sen. John Fetterman, the only Democrat in the Senate to speak out favorably for the bill, said requiring voters to show identification is not “unreasonable.”
If banning voter IDs is a hallmark of democracy, Democrats will need to start castigating virtually all the other countries in the world as anti-democratic nations.
John R. Lott Jr. is a contributor to RealClearInvestigations, focusing on voting and gun rights. His articles have appeared in publications such as the Wall Street Journal, New York Times, Los Angeles Times, New York Post, USA Today, and Chicago Tribune. Lott is an economist who has held research and/or teaching positions at the University of Chicago, Yale University, Stanford, UCLA, Wharton, and Rice.
Brewing Nor-easter Bomb Cyclone Threatens Mid-Atlantic As Meteorologists Split Over Models
We’ve seen this winter storm story before: low pressure develops off the East Coast. The question now is which model is correct: ECMWF or GFS.
Alright, we’re pretty much down to 2 scenarios at this point. ~48 hours till start time. Euro/Euro AI/CMC sticking to scenario 1. GFS/NAM with scenario 2. Either way, an impactful snow event is on the way for someone here in SNE/Mid-Atl. Just a matter of who… pic.twitter.com/AospuqpWoE
“We are barely two days away from what the American GFS model is simulating as a blockbuster snowstorm in the Mid-Atlantic, including Washington, D.C. The European model, however, depicts a more modest 3.1 inches,” MyRadar Weather wrote on X, adding, “Put bluntly, we think the GFS is off its rocker.”
There is growing confidence among meteorologists that the late-weekend storm may become a “significant nor’easter with strong winds and heavy snow along parts of the Atlantic coast,” and could even become a bomb cyclone as it moves away, AccuWeather Senior Meteorologist Chad Merrill said.
Meteorologists Ryan Maue warned…
BREAKING ⚠️
The newest data has arrived. A powerful “bomb cyclone” will become a “blockbuster blizzard” for New England. ❄️💣
Large stretches of the Interstate 95 corridor – from Washington, D.C., to Philadelphia, to New York, and up to Boston – could be blanketed with accumulating snow. However, at this point, pinning down snowfall forecasts is too premature until confidence improves in the storm’s track.
Via Capital Weather Gang:
“The exact track of this storm, along with how quickly it strengthens, will determine how much snow falls in the Mid-Atlantic and Northeast,” AccuWeather Vice President of Forecasting Operations Dan DePodwin said. “The supply of cold air is limited, which could also affect snowfall totals.”
For those traveling to or from the Mid-Atlantic and Northeast this weekend, it’s wise to keep an eye on which model meteorologists favor, because one scenario brings several inches, while the other shows a blockbuster winter event.
The list of countries that want to “ban social media for children” (read: identity-gate internet access) just continues to grow and grow.
There’s Germany…
NEW – Germany’s Chancellor Friedrich Merz (CDU), a former BlackRock chairman, wants to end the anonymity on the Internet: “I want to see real names.” pic.twitter.com/sUjG7XIdrd
At least Merz is being somewhat honest about the intention – ending anonymity.
Meanwhile, Greece is doing it to “protect democracy”…
NOW – Greece’s PM says banning social media for minors and adolescents “goes hand in hand with a democratic responsibility” to ensure “that technology strengthens the public square rather than overwhelms us with disinformation and hate,” and that if dialogue with big tech fails,… pic.twitter.com/qJ9shDiUYr
Not to mention France, Spain, Austria, the Czech Republic, Denmark, Finland, Greece, Italy, and Slovenia [link].
Social media bans are the newest trend. Heads of state, like Mad Men-style 60s housewives, are seeing what their neighbors have and jealously demanding their own.
Not since the early days of Covid have our world leaders demonstrated such school-of-fish-like hivemind synchronization.
It’s all just a coincidence, I’m sure.
Even the US, a supposed bastion of freedom under The Don, is inevitably heading in the same direction.
That’s the reason for the big “social media trial”, contrived performance theatre to air the anti-algorithm grievances of bereaved parents who or may not be real, and engage the increasingly hysterical sentiments of the digital mob.
America may be the last domino to fall, it may even be relegated to a state-level matter, but fall it will.
And that will be that.
It’s another reason why the proposed VPN ban may come to nothing, because there’s no point in spoofing your IP to another country if every country on earth requires digital ID anyway.
This is the wall of a digital prison closing in, and it’s far more important than the alleged arrest of Prince Andrew.
Which is why THAT is on every front page in the country, and THIS is not.
Steve Cohen Tops Hedge Fund Rich List With $3.4 Billion Haul
Steve Cohen spent last fall doing something few billionaire owners enjoy: apologizing. As the New York Mets staggered through a bruising 2025 campaign, he took to social media to tell fans he was sorry for the disappointment at Citi Field. Yet even as the baseball season fizzled, Cohen was clinching a very different kind of pennant, according to Bloomberg.
The founder of Point72 Asset Management finished the year as the highest-paid hedge fund manager on Bloomberg’s annual ranking, pocketing an estimated $3.4 billion. That works out to more than $9 million every day — a staggering haul even by Wall Street standards. For the first time since the list began, Cohen sat alone at the top.
The contrast is striking. Cohen, 69, bought the Mets in 2020 for a record $2.4 billion and pledged to deliver a championship within three to five years. He backed up that promise with one of the sport’s largest payrolls. But while October glory in Queens remains elusive, his investment firm in Stamford, Connecticut, has flourished.
Point72’s ascent is particularly notable given its history. Cohen’s former firm, SAC Capital, pleaded guilty in 2013 to insider-trading charges and returned outside investors’ money; Cohen himself denied wrongdoing. When Point72 reopened to clients in 2018, skeptics wondered whether investors would return. They did — quickly and in size. More than $4 billion poured in at launch, followed by steady inflows that have helped lift assets under management to $45.7 billion. That scale places it among the industry’s largest multistrategy operations, competing with firms such as Citadel and Millennium Management.
Bloomberg writes that Cohen’s 2025 payday outpaced several longtime rivals. David Tepper of Appaloosa Management claimed second place with $3.2 billion, while Izzy Englander of Millennium followed closely at $3.1 billion. Ken Griffin, who has frequently dominated the rankings in past years, earned $2.4 billion and placed fifth.
The industry’s biggest names enjoyed a banner year overall. The 10 top earners collected about $22 billion between them, and the expanded top-20 list generated $28.3 billion in total compensation. On average, each of the 20 managers made $1.4 billion — the strongest showing in five years and the largest number of billion-dollar payouts yet recorded. Buoyant, volatile equity markets helped drive hedge fund returns to their best levels since 2009.
Point72 itself delivered a 17.5% gain in its flagship strategies, a solid result that outpaced several multistrategy competitors. Citadel, which has produced returns as high as the mid-30% range in recent years, advanced just over 10% in 2025, its softest performance since 2018.
Flush with capital, Point72 has been expanding aggressively. Over the past decade it has opened a dozen new offices, grown its workforce to roughly 3,000 employees, and built out more than 190 trading teams. The firm has broadened beyond traditional stock picking into macro investing, scaled up its quantitative arm Cubist, and started laying the groundwork for private credit and venture strategies. In one unusual move, it allowed a star portfolio manager to run an internal fund vehicle, which now oversees about $3 billion after posting strong returns last year.
For Cohen, the year underscored a peculiar dual reality. On the diamond, the Mets are still chasing the success their owner promised. In the financial arena, however, he just delivered the most lucrative season of his career.
Meta’s AI Would Like To Keep You Posting After You’re Dead
Ever since social media became a fixture of daily life, an uncomfortable question has lingered: what should happen to someone’s account after they die? Leave it frozen in time? Hand it to family members as a memorial? Or quietly let it fade into the algorithm?
A few years ago, Meta Platforms explored a far more ambitious possibility, according to Futurism. In 2023, the company received a patent describing how a large language model could be trained on a user’s past posts to simulate their voice and behavior — keeping an account active if the person were “absent,” including in the event of death. The filing, led by CTO Andrew Bosworth, outlined how such a system could generate posts, comments, likes, and even private messages in the user’s style.
The idea was striking, and for many, unsettling. Meta has since said it has no plans to move forward with that example. But the patent offers a snapshot of a moment when tech companies were aggressively testing the limits of what generative AI might do — including extending a person’s digital presence beyond their lifetime.
The Futurism piece says that the concept isn’t entirely theoretical. A small but growing “grief tech” sector has promoted AI tools that recreate voices or personalities of the deceased using photos, recordings, and written messages. Proponents argue that such tools could offer comfort. Critics worry they could complicate the grieving process.
Even within Meta’s own public comments, there has been ambivalence. CEO Mark Zuckerberg has spoken about AI companions as a way to address loneliness and, in a 2023 interview with podcaster Lex Fridman, suggested that interacting with digital representations of loved ones might help some people cope with loss. He also acknowledged the psychological risks and the need for deeper study.
The business logic behind such experiments is difficult to ignore. Platforms like Facebook are filled with dormant accounts — profiles that remain but are rarely updated. More AI-generated activity could mean more engagement and more data. As University of Birmingham law professor Edina Harbinja observed, the commercial incentive is clear, even if the ethical path forward is not.
Others urge caution. University of Virginia sociologist Joseph Davis has argued that part of grieving involves confronting the reality of loss, not blurring it with simulations.
Meta has distanced itself from the patent’s more provocative scenario. Still, its existence underscores how far companies have been willing to push generative AI — and how complex the questions become when technology intersects with death, memory, and identity.