license.dat and license.paid (both same content below):
#SN=1017947
SERVER bu-licence10 ANY 1718
USE_SERVER
Prof Zengbo Wang Research Team at Bangor University
SYNC: rsync -avz -e "ssh" [email protected]:/scratch/b.eds006/DL .
ZIP: powershell Compress-Archive -Path ".\datasets*.b.nst.jpg" -DestinationPath ".\dataset1_b.nst.jpg.zip"
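For reference, the archive above can be unpacked on Windows with Expand-Archive (assumed paths, mirroring the ZIP command):
UNZIP: powershell Expand-Archive -Path ".\dataset1_b.nst.jpg.zip" -DestinationPath ".\datasets"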
University of Southampton news
Published: 14 June 2023
The UK could become a world powerhouse for the development of responsible artificial intelligence after £31 million was awarded to launch a new consortium led by the University of Southampton.
The multimillion-pound project, known as Responsible AI UK, will bring together experts to create an international research and innovation institute that develops trustworthy and secure AI responding to the needs of society.
The £31 million funding was awarded by UK Research and Innovation (UKRI); the consortium will work across universities, businesses, and the public and third sectors to pioneer responsible AI and to fund new research into understanding and building trustworthy systems.
Professor of Artificial Intelligence Gopal Ramchurn from the University of Southampton is the principal investigator for Responsible AI UK. He said: “We don’t need to fear artificial intelligence; it won’t threaten humanity but has huge potential to influence how society operates in the future.
“AI should not only be technically safe and accountable, but its impact on its users, their wellbeing and rights, and the wider society needs to be understood for people to trust it. Our role in RAI UK will be to bring together experts from diverse disciplines and cultures from across the world to address the most pressing AI challenges in key sectors and ensure we all benefit from the productivity gains it promises to deliver.”
The RAI UK consortium will bring together an international ecosystem to address AI challenges. It will fund large and small research and innovation projects, fellowships and grants; develop collaborations between researchers and businesses; build skills programmes for the public and industry; and deliver guidance to governments.
RAI UK will lead national conversations on responsible artificial intelligence across the UK, working closely with policymakers to provide evidence for future policy and regulation, as well as guidance for businesses in deploying AI solutions responsibly.
Celebrated computer scientist Dame Wendy Hall, a Regius Professor at the University of Southampton and executive director of its Web Science Institute, said: “AI will change the way we live and work for the better – but an interdisciplinary approach to regulation and safe AI is vital.
“The UK can become the dominant force for responsible and trustworthy AI development and regulation with this £31 million investment. The work undertaken by the Responsible AI UK team at the University of Southampton together with our partners across the UK and internationally will put the UK at the forefront of AI’s future for the good of humanity.”
RAI UK will be the catalyst for an international responsible and trustworthy AI ecosystem, addressing the issues that have held back the adoption of AI for the benefit of society. It will embed interdisciplinary research and open dialogue among experts, businesses and the public, involving users and those impacted by AI as active participants in the research.
Technology Secretary Chloe Smith said: “Despite our size as a small island nation, the UK is a technology powerhouse. Last year, the UK became just the third country in the world to have a tech sector valued at $1 trillion. It is the biggest in Europe by some distance and behind only the US and China globally.
“The technology landscape, though, is constantly evolving, and we need a tech ecosystem which can respond to those shifting sands, harness its opportunities, and address emerging challenges. The measures unveiled today will do exactly that. We’re investing in our AI talent pipeline with a £54 million package to develop trustworthy and secure artificial intelligence, and putting our best foot forward as a global leader in tech both now, and in the years to come.”
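# Neural style transfer (PyTorch): optimise a copy of the content image so that its
# VGG-19 feature statistics match a style image while staying close to the content.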
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define the fixed size
target_size = (512, 512)
loader = transforms.Compose([
    transforms.Resize(target_size),
    transforms.ToTensor()])
def image_loader(image_name):
    image = Image.open(image_name)
    image = loader(image).unsqueeze(0)  # add a batch dimension
    return image.to(device, torch.float)
style_img = image_loader("style_fanGao.jpg")
content_img = image_loader("Zengbo.jpg")
# Resize the images to the same size
style_img = F.interpolate(style_img, size=target_size, mode='bilinear', align_corners=False)
content_img = F.interpolate(content_img, size=target_size, mode='bilinear', align_corners=False)
assert style_img.size() == content_img.size(), "Style and content images must be of the same size."
unloader = transforms.ToPILImage()
plt.ion()
def imshow(tensor, title=None):
    image = tensor.cpu().clone()
    image = image.squeeze(0)
    image = unloader(image)
    plt.imshow(image)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)
def gram_matrix(input):
    a, b, c, d = input.size()  # a=batch size (=1), b=number of feature maps, (c,d)=feature map dims
    features = input.view(a * b, c * d)  # reshape F_XL into \hat F_XL
    G = torch.mm(features, features.t())  # compute the gram product
    # normalise by the number of elements in each feature map
    return G.div(a * b * c * d)
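# Illustrative shape check (hypothetical sizes, not part of the original script):
# gram_matrix(torch.randn(1, 64, 256, 256)).shape  ->  torch.Size([64, 64])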
class ContentLoss(nn.Module):
    def __init__(self, target):
        super(ContentLoss, self).__init__()
        # detach the target so it is a fixed value, not part of the graph
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input  # pass the input through unchanged; only record the loss
class StyleLoss(nn.Module):
    def __init__(self, target_feature):
        super(StyleLoss, self).__init__()
        self.target = gram_matrix(target_feature).detach()

    def forward(self, input):
        G = gram_matrix(input)
        self.loss = F.mse_loss(G, self.target)
        return input
class Normalization(nn.Module):
    def __init__(self, mean, std):
        super(Normalization, self).__init__()
        # reshape to [C, 1, 1] so they broadcast over image tensors [B, C, H, W]
        self.mean = mean.view(-1, 1, 1)
        self.std = std.view(-1, 1, 1)

    def forward(self, img):
        return (img - self.mean) / self.std
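# convolutional feature extractor of a VGG-19 pretrained on ImageNet, in eval mode
# (newer torchvision versions prefer weights=models.VGG19_Weights.DEFAULT over pretrained=True)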
cnn = models.vgg19(pretrained=True).features.to(device).eval()
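# ImageNet channel statistics VGG-19 was trained with; inputs must be normalised the same way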
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
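# conv layers whose activations define the content and style targets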
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
                               style_img, content_img,
                               content_layers=content_layers_default,
                               style_layers=style_layers_default):
    normalization = Normalization(normalization_mean, normalization_std).to(device)
    content_losses = []
    style_losses = []
    # rebuild the CNN as a new nn.Sequential, inserting loss modules after the chosen layers
    model = nn.Sequential(normalization)
    i = 0  # increment every time we see a conv layer
    for layer in cnn.children():
        if isinstance(layer, nn.Conv2d):
            i += 1
            name = 'conv_{}'.format(i)
        elif isinstance(layer, nn.ReLU):
            name = 'relu_{}'.format(i)
            # in-place ReLU does not play nicely with the inserted loss modules
            layer = nn.ReLU(inplace=False)
        elif isinstance(layer, nn.MaxPool2d):
            name = 'pool_{}'.format(i)
        elif isinstance(layer, nn.BatchNorm2d):
            name = 'bn_{}'.format(i)
        else:
            raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
        model.add_module(name, layer)
        if name in content_layers:
            target = model(content_img).detach()
            content_loss = ContentLoss(target)
            model.add_module("content_loss_{}".format(i), content_loss)
            content_losses.append(content_loss)
        if name in style_layers:
            target_feature = model(style_img).detach()
            style_loss = StyleLoss(target_feature)
            model.add_module("style_loss_{}".format(i), style_loss)
            style_losses.append(style_loss)
    # trim off the layers after the last content or style loss
    for i in range(len(model) - 1, -1, -1):
        if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
            break
    model = model[:(i + 1)]
    return model, style_losses, content_losses
def get_input_optimizer(input_img):
    # the pixels of the input image are the parameters being optimised
    optimizer = optim.LBFGS([input_img])
    return optimizer
def run_style_transfer(cnn, normalization_mean, normalization_std,
                       content_img, style_img, input_img, num_steps=300,
                       style_weight=1000000, content_weight=1):
    model, style_losses, content_losses = get_style_model_and_losses(cnn,
        normalization_mean, normalization_std, style_img, content_img)
    # optimise the input image; the network weights stay frozen
    input_img.requires_grad_(True)
    model.requires_grad_(False)
    optimizer = get_input_optimizer(input_img)
    run = [0]
    while run[0] <= num_steps:
        def closure():
            # keep the image in the displayable [0, 1] range
            with torch.no_grad():
                input_img.clamp_(0, 1)
            optimizer.zero_grad()
            model(input_img)  # forward pass populates the loss modules
            style_score = 0
            content_score = 0
            for sl in style_losses:
                style_score += sl.loss
            for cl in content_losses:
                content_score += cl.loss
            style_score *= style_weight
            content_score *= content_weight
            loss = style_score + content_score
            loss.backward()
            run[0] += 1
            if run[0] % 50 == 0:
                print("run {}:".format(run[0]))
                print('Style Loss : {:4f} Content Loss: {:4f}'.format(
                    style_score.item(), content_score.item()))
                print()
            return style_score + content_score
        optimizer.step(closure)
    # final clamp back into the valid range
    with torch.no_grad():
        input_img.clamp_(0, 1)
    return input_img
def save_output_image(image, output_path):
    try:
        image.save(output_path)
        print(f"Output image saved successfully at {output_path}")
    except Exception as e:
        print(f"An error occurred while saving the output image: {str(e)}")
def convert_to_pil_image(tensor):
    image = tensor.squeeze(0).cpu().clone().detach().numpy().transpose(1, 2, 0)
    image = image.clip(0, 1)
    image = (image * 255).astype('uint8')
    return Image.fromarray(image)
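# heavier style weighting (1e9 vs the 1e6 default above) and more steps than the
# defaults, pushing the output further towards the style image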
num_steps = 500
style_weight = 1000000000
content_weight = 1
input_img = content_img.clone()
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img, num_steps=num_steps,
style_weight=style_weight, content_weight=content_weight)
output_image = convert_to_pil_image(output)
output_path = "output_image.jpg"
save_output_image(output_image, output_path)
# Display content, style, and output images in a single row
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
axes[0].imshow(content_img.squeeze(0).permute(1, 2, 0).cpu().numpy())
axes[0].set_title('Content Image')
axes[0].axis('off')
axes[1].imshow(style_img.squeeze(0).permute(1, 2, 0).cpu().numpy())
axes[1].set_title('Style Image')
axes[1].axis('off')
axes[2].imshow(output_image)
axes[2].set_title('Output Image')
axes[2].axis('off')
plt.tight_layout()
plt.show()
The Bangor University CPE team has recently acquired and installed a state-of-the-art high-power femtosecond fibre laser system (model: Jasper X0 30W from Fluence) in their lab. The system is highly robust, reliable, flexible and tuneable, offering >100 µJ maximum pulse energy, a 200 kHz–20 MHz repetition rate, 200 fs–20 ps pulse durations, and four working wavelengths (1030 nm, 515 nm, 343 nm and 258 nm, all available).
Applications of this laser include micromachining, glass cutting, surface structuring, ophthalmology, semiconductor and OLED manufacturing, solar cell manufacturing, security marking, and stent and medical device manufacturing.
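A quick sanity check on these figures (a sketch, not from the spec sheet: it assumes the usual relation average power = pulse energy × repetition rate and the nominal 30 W rating) shows that the full >100 µJ pulse energy is only sustainable up to roughly 300 kHz; at higher repetition rates the energy per pulse necessarily drops:
avg_power_W = 30            # Jasper X0 nominal average power
pulse_energy_J = 100e-6     # >100 uJ maximum pulse energy
max_rep_rate_Hz = avg_power_W / pulse_energy_J   # P = E * f  =>  f = P / E
print(f"full pulse energy sustainable up to ~{max_rep_rate_Hz / 1e3:.0f} kHz")  # ~300 kHz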