SafetyVoice UK

Effective safety culture depends not only on good policies, but on how consistently and transparently they are applied across institutions. As part of our ongoing work on research culture and EDI, we’ve launched SafetyVoice UK — an independent, sector-wide platform for laboratory users, researchers, and technical staff across UK Higher Education to share anonymised experiences of how safety policies are applied in practice.

Safety is fundamental. But clear communication, appropriate documentation, and opportunities for dialogue are what make policies work well for everyone. When these elements are inconsistent, it can affect confidence, wellbeing, and the ability to work effectively.

SafetyVoice UK is open to contributors from any UK HEI or research organisation. Using AI-assisted anonymisation, the platform identifies common themes across the sector while protecting individual contributors — supporting a more transparent, EDI-aligned approach to safety governance.

We welcome colleagues across the sector to share their experiences or perspectives.

🔗 https://safetyvoice.org.uk/

#HigherEducation #LabSafety #EDI #SafetyCulture #ResearchCulture

@UUK (Universities UK), @USHA_HE (Universities Safety & Health Association), or @AdvanceHE

h setting

In COMSOL: h_top = 14.4 W/m²K and h_bot = 7.2 W/m²K are fixed constants from t = 0. Full convective cooling is applied from the very first second.

In TS: h is recomputed every timestep from the Churchill & Chu correlation at T_mean:

T_mean = float(np.mean(T))   # starts at T_amb = 20°C
ht = calc_h_top(T_mean)      # → h ≈ 0 at t=0 !

At t=0, T_mean = T_amb, so ΔT = 0, Ra = 0, h = 0. TS starts with essentially zero convective cooling and only builds up h as the board heats. The full h=14.4 isn’t reached until T_mean ≈ 55°C (ΔT ≈ 35K above ambient).

So early in the transient:

Time      | COMSOL h_top | TS h_top (approx)
t = 0 s   | 14.4 W/m²K   | ~0
t = 60 s  | 14.4 W/m²K   | ~5–8
t = 300 s | 14.4 W/m²K   | ~11–13
SS        | 14.4 W/m²K   | 14.4

TS has far less cooling during the transient → rises faster and higher. At SS both converge to the same h, so SS temperatures should match closely. This matches exactly what the chart shows.

Simple fix: evaluate h at a representative ΔT from the start (e.g., use the current local temperature rather than T_mean, or initialise h at the SS value and let it update). The most direct fix to match the COMSOL setup would be to use fixed h values:

# Option: fix h to SS calibrated values (matches COMSOL setup)
ht = calc_h_top(T_AMB + 35.0)   # evaluate at design ΔT=35K, constant
hb = calc_h_bottom(T_AMB + 35.0)

Or keep the temperature-dependent h but evaluate it at a better representative temperature than T_mean.

No, I want to fix that in COMSOL, not in TS. Teach me how to define ht and hb in COMSOL as functions of position and temperature.

In COMSOL, you define h as an expression in T directly in the Heat Flux boundary condition. Here’s how:


Step 1 — Define parameters (Global Definitions > Parameters)

Name      | Expression       | Description
T_amb     | 293.15[K]        | Ambient temperature
k_air     | 0.0271[W/(m*K)]  | Air conductivity
nu_air    | 1.655e-5[m^2/s]  | Kinematic viscosity
alpha_air | 2.34e-5[m^2/s]   | Thermal diffusivity
L_c       | 0.0292[m]        | Characteristic length (A/P)
H_SCALE   | 1.76             | Nu calibration multiplier

Step 2 — Define variables (Global Definitions > Variables)

dT_surf   = max(T - T_amb, 0.001[K])
T_film    = (T + T_amb) / 2
beta_air  = 1 / T_film
Ra        = 9.81[m/s^2] * beta_air * dT_surf * L_c^3 / (nu_air * alpha_air)
Nu_top    = if(Ra < 1e7, 0.54 * Ra^0.25, 0.15 * Ra^(1/3))
Nu_bot    = 0.27 * Ra^0.25
h_top_fn  = Nu_top * k_air / L_c * H_SCALE
h_bot_fn  = Nu_bot * k_air / L_c * H_SCALE

The max(..., 0.001[K]) guard avoids Ra = 0 (and a singular h = 0) at t = 0, when T = T_amb.


Step 3 — Apply to boundary conditions

Top surface — Heat Flux BC:

  • Type: General inward heat flux
  • Expression: h_top_fn * (T_amb - T)

Bottom surface — Heat Flux BC:

  • Expression: h_bot_fn * (T_amb - T)

Alternatively, use the built-in “Convective heat flux” type and enter h_top_fn in the h field — COMSOL will evaluate it locally at each surface node’s current T automatically.


What this achieves

  • At t=0: T=T_amb, dT≈0, Ra≈0, h≈0 (matches TS behaviour)
  • At SS: Ra≈71,000, h_top≈14.4, h_bot≈7.2 W/m²K (matches calibrated values)
  • COMSOL evaluates the expression locally per element at each timestep — better than TS which uses T_mean globally
  • The transient h buildup will match TS more closely
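As a sanity check, the Step 1 parameters and Step 2 expressions can be reproduced in a few lines of Python. This is a sketch mirroring the COMSOL variable definitions above, not the actual TS implementation (the TS helper names calc_h_top/calc_h_bottom are not assumed here):

```python
# Parameters from Step 1 (Global Definitions > Parameters)
T_AMB = 293.15        # K, ambient temperature
K_AIR = 0.0271        # W/(m*K), air conductivity
NU_AIR = 1.655e-5     # m^2/s, kinematic viscosity
ALPHA_AIR = 2.34e-5   # m^2/s, thermal diffusivity
L_C = 0.0292          # m, characteristic length (A/P)
H_SCALE = 1.76        # Nu calibration multiplier
G = 9.81              # m/s^2

def h_surfaces(T_surf):
    """Mirror the Step 2 variable definitions for a surface at T_surf (K)."""
    dT = max(T_surf - T_AMB, 0.001)               # guard against Ra = 0 at t = 0
    T_film = (T_surf + T_AMB) / 2
    beta = 1 / T_film                              # ideal-gas expansion coefficient
    Ra = G * beta * dT * L_C**3 / (NU_AIR * ALPHA_AIR)
    Nu_top = 0.54 * Ra**0.25 if Ra < 1e7 else 0.15 * Ra**(1/3)
    Nu_bot = 0.27 * Ra**0.25
    h_top = Nu_top * K_AIR / L_C * H_SCALE
    h_bot = Nu_bot * K_AIR / L_C * H_SCALE
    return Ra, h_top, h_bot

# At the design point dT = 35 K (surface at 55 degC):
Ra, h_top, h_bot = h_surfaces(T_AMB + 35.0)
print(f"Ra = {Ra:.0f}, h_top = {h_top:.2f}, h_bot = {h_bot:.2f}")
# Ra ≈ 71,000, h_top ≈ 14.4 W/m²K, h_bot ≈ 7.2 W/m²K — the calibrated SS values
```

The quarter-power dependence (h ∝ Ra^0.25 ∝ dT^0.25) is why h recovers most of its steady-state value well before dT reaches 35 K, but is still near zero in the first seconds of the transient.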

Are our bodies really full of microplastics? Why a scientific ‘bombshell’ is a call for better methods, not a reason to dismiss the plastic crisis.

By Zengbo Wang, Bangor University

Over the past few years, a wave of alarming headlines has suggested that human bodies are rapidly becoming reservoirs for plastic pollution. High-profile studies have claimed to detect micro- and nanoplastics (MNPs) in human blood, placentas, arteries, testes, and even brain tissue.

However, a fierce debate has recently erupted within the scientific community, throwing many of these discoveries into doubt. Experts warn that several of these studies suffer from severe methodological flaws, leading to false positives and exaggerated concentrations. One chemist even described the doubts as a “bombshell” that forces us to re-evaluate what we actually know about microplastics in the body.

The false positive problem: when fat masquerades as plastic

The primary challenge in measuring MNPs in human tissue is their microscopic size, which pushes the absolute limits of today’s analytical technology. A major flashpoint in this debate centres on a widely used technique called pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS), which involves heating a sample until it breaks down and identifying the resulting chemical fragments.

The problem? Human tissue is full of natural fats (lipids) that can produce chemical signals nearly identical to those of common plastics like polyethylene and PVC. This flaw was brutally highlighted in response to a widely publicised study claiming microplastic levels in human brains were rapidly rising. Critics pointed out that the human brain is approximately 60% fat, making it highly susceptible to false positives. One leading environmental analytical chemist bluntly labelled the brain study a “joke,” while others noted it is biologically implausible for particles of the reported size (3 to 30 micrometres) to cross into the bloodstream and organs in such massive volumes.

A world saturated in plastic: the contamination conundrum

Beyond analytical misidentifications, researchers face an existential hurdle: we live in a world coated in plastic, making background contamination practically unavoidable.

Biological samples are extraordinarily vulnerable; medical operating theatres, where solid tissue samples are usually taken, are notoriously “full of plastic”. Critics argue that many studies—often led by medical professionals rather than specialised analytical chemists—failed to employ “standard good laboratory practices”. Crucial steps, such as running rigorous “blank” control samples to properly account for background contamination, were frequently overlooked in the rush to publish.

The call for a ‘forensic’ approach

To ensure the integrity of future research, a coalition of over 30 international scientists, led by Imperial College London and the University of Queensland, has urgently called for the adoption of a “forensic science approach”.

Because no single measurement technique is perfect, this framework urges laboratories to combine multiple, independent testing methods on the exact same samples. By mirroring the rigorous standards of forensic laboratories—meticulously controlling contamination and clearly communicating confidence levels—scientists can ensure that early, suggestive data is no longer presented as definitive proof. As Professor Leon Barron of Imperial College London notes, “Finding ‘something’ in the human body is not the same as proving it is plastic, and certainly not the same as proving it is harmful”.

The political and public health stakes

The stakes of getting this science right are incredibly high. Poor-quality evidence has fuelled public scaremongering, leading to predatory businesses offering unscientific, expensive treatments—sometimes costing up to £10,000—that falsely claim to “clean” microplastics from the blood.

Conversely, the petrochemical industry and lobbyists may seize upon these methodological debates to sow doubt about the broader harms of plastic pollution. Recognising the need for bulletproof data to inform policy, lawmakers in the United States have recently introduced bipartisan legislation such as the proposed Microplastics Safety Act and the Plastic Health Research Act to mandate and fund comprehensive federal research.

Science working exactly as it should

What we are witnessing is not the debunking of the plastic crisis, but the vital, sometimes messy process of scientific self-correction. We know definitively that humans are exposed to MNPs daily through our environment, food, and water. Now, we must support the painstaking, rigorous work required to discover exactly how much plastic is inside us, and what it is doing to our health, so that society can take effective, evidence-based action.

References:

• Carrington, D. (2026). ‘A bombshell’: doubt cast on discovery of microplastics throughout human body. The Guardian.

• Packaging Europe. (2026). ‘Bombshell’ article casts doubt on studies of microplastics in the human body.

• Stewart, J. & O’Hare, R. (2026). Experts urge caution over microplastics in tissue claims and call for forensic approach to improve accuracy. Imperial College London News.

• The Acta Group. (2025). Microplastics in 2025: Regulatory Trends and Updates.

• Wikipedia Contributors. Microplastics and human health. Wikipedia, The Free Encyclopedia.

openclaw devices list

In the folder ~/.openclaw/devices:

Run the following to list when each device was created and last used:

python3 - << 'EOF'
import json, time

with open("/home/jameszbw/.openclaw/devices/paired.json") as f:
    data = json.load(f)

for dev_id, dev in data.items():
    last = dev["tokens"]["operator"]["lastUsedAtMs"]
    created = dev["createdAtMs"]
    print(f"\nDevice: {dev_id}")
    print(f"  clientId: {dev['clientId']}")
    print(f"  mode: {dev['clientMode']}")
    print(f"  platform: {dev['platform']}")
    print(f"  created: {time.ctime(created / 1000)}")
    print(f"  last used: {time.ctime(last / 1000)}")
EOF
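If jq is installed, the same fields can be pulled with a one-liner. This assumes the same paired.json schema the Python script above reads (createdAtMs and tokens.operator.lastUsedAtMs in epoch milliseconds):

```shell
# Same schema as the Python script above; timestamps converted from epoch ms to UTC
f="$HOME/.openclaw/devices/paired.json"
filter='to_entries[] | "\(.key)  created: \(.value.createdAtMs/1000 | floor | todate)  last used: \(.value.tokens.operator.lastUsedAtMs/1000 | floor | todate)"'
if [ -f "$f" ]; then
  jq -r "$filter" "$f"
fi
```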

If the openclaw version mismatches the gateway version, use this:

openclaw gateway install --force

Local models for agentic tasks

Agent-Capable LLM Comparison

In this table, we compare 2026’s leading open-source LLMs for agent workflows, each with a unique strength. For purpose-built agent applications, GLM-4.5-Air provides optimized tool use and web browsing. For specialized agentic coding, Qwen3-Coder-30B-A3B-Instruct delivers state-of-the-art performance. For complex reasoning agents, Qwen3-30B-A3B-Thinking-2507 offers advanced thinking capabilities. This side-by-side view helps you choose the right model for your specific agent workflow needs.

Number | Model                        | Developer | Subtype              | SiliconFlow Pricing (Output) | Core Strength
1      | GLM-4.5-Air                  | zai       | Reasoning, MoE, 106B | $0.86/M tokens               | Purpose-built agent foundation
2      | Qwen3-Coder-30B-A3B-Instruct | Qwen      | Coder, MoE, 30B      | $0.4/M tokens                | State-of-the-art agentic coding
3      | Qwen3-30B-A3B-Thinking-2507  | Qwen      | Reasoning, MoE, 30B  | $0.4/M tokens                | Advanced reasoning for agents

https://www.siliconflow.com/articles/en/best-open-source-LLM-for-Agent-Workflow

Which AI models made it into our top three picks for agent workflows?

Our top three picks for 2026 are GLM-4.5-Air, Qwen3-Coder-30B-A3B-Instruct, and Qwen3-30B-A3B-Thinking-2507. Each of these models stood out for its agent capabilities, including tool use, function calling, reasoning, and autonomous task execution in real-world agentic applications.