Gemsas is a spinout from the ICVL lab of Imperial College London, specializing in machine learning and computer vision for robotics. We develop machine-learning-based visual software for robotic manipulation tasks. Our software recovers objects' 6D pose in challenging occluded and cluttered scenes. Taking RGB-D images as input, our complete framework delivers an increased level of autonomy for robotic manipulators.

NEWS & EVENTS

Research intern positions for postgraduate students
12.01.23

We are recruiting postgraduate students for 16-week research internship positions. Our research is based on machine (deep) learning and computer vision. Our topics include, but are not limited to, object pose estimation, 3D object detection, and tracking. We hire MSc/PhD students with research experience and strong programming skills in C++/Python. Interested applicants may send their application to "intern@gemsas.com.tr", ideally including a CV and a research statement.


Deployment engineers (Deployers)
05.04.23

We are recruiting deployment engineers with a general understanding of computer systems and of the linkage between hardware and software (registers, file formats, etc.). Graduation from an engineering department is not a firm requirement. Interested applicants can send their application to "info@gemsas.com.tr", ideally including a CV.




We attended a series of Microsoft UK digital events
30.06.21

In April, we joined Microsoft's digital skills event for keynotes and exclusive skill-building workshops. At the Microsoft Build event in May, Microsoft experts discussed the future of technology, digital transformation, and products. In June's Envision, senior business leaders from around the world came together to discuss the key challenges of digital transformation.



We are always looking for international collaborators!

WE WOULD LOVE TO HEAR FROM YOU!

Deployment on KUKAs
30.03.21

We deployed our single- and multiple-instance detector models to ROBO's robotic manipulators.

OUR TECHNOLOGY

Research

CMD-Net: Self-Supervised Category-Level 3D Shape Denoising through Canonicalization
06.11.22


Our recent research paper on category-level 3D shape denoising through canonicalization is now available online. We introduce the Canonical Mapping and Denoising Network (CMD-Net), a self-supervised learning-based method for the problem. We formulate denoising as a 3D semantic shape correspondence estimation task in which we explore ordered 3D intrinsic structure points. Our method canonicalizes noise-corrupted clouds under arbitrary rotations, thereby circumventing the requirement for pre-aligned data. The complete model learns to canonicalize the input through a novel transformer that serves as a proxy in the downstream denoising task. We show that CMD-Net can eliminate corruption from objects' underlying surfaces and remove clutter from both synthetic and real test data.


CMD-Net Dataset: In this research, we present a dataset for the problem and will shortly make it publicly available. More details are in the paper.
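The key property described above, that canonicalization lets a model handle clouds under arbitrary rotations without pre-aligned data, can be illustrated with a toy stand-in. The sketch below uses classical PCA alignment in place of CMD-Net's learned transformer (all function names are ours, not from the paper) purely to show the rotation-invariance that a canonicalization step provides.

```python
import numpy as np

def random_rotation(rng):
    # Draw a random proper 3D rotation via QR decomposition of a Gaussian matrix
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))      # make the decomposition unique
    if np.linalg.det(q) < 0:      # force a proper rotation (det = +1)
        q[:, 0] = -q[:, 0]
    return q

def canonicalize(points):
    # Toy stand-in for a learned canonicalization: center the cloud and
    # align it to its principal axes. CMD-Net uses a transformer for this;
    # here we only illustrate the invariance property.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T
    # Resolve the per-axis sign ambiguity of PCA deterministically
    signs = np.sign(aligned[np.argmax(np.abs(aligned), axis=0), [0, 1, 2]])
    return aligned * signs

rng = np.random.default_rng(0)
cloud = rng.standard_normal((256, 3)) * np.array([3.0, 2.0, 1.0])  # anisotropic
rotated = cloud @ random_rotation(rng).T   # same shape, arbitrary orientation

a = canonicalize(cloud)
b = canonicalize(rotated)
print(np.allclose(a, b, atol=1e-6))        # both map to the same canonical frame
```

Because both the original and the arbitrarily rotated cloud land in the same canonical frame, any downstream task (denoising, in CMD-Net's case) only ever sees one orientation per shape.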




Software

software features

  • based on machine learning and computer vision
  • recovers objects' 6D pose
  • takes RGB and Depth modalities as input
  • works in highly cluttered and occluded scenarios
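The software's core output, an object's 6D pose, combines a 3D rotation with a 3D translation. The sketch below is illustrative only (the function names are ours, not a Gemsas API): it shows the standard way such a pose is packed into a 4x4 homogeneous transform and applied to points in the object's model frame.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T, points):
    """Apply a 6D pose to an (N, 3) point set: rotate, then translate."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# Example: a 90-degree rotation about z, followed by a translation
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = pose_matrix(R, np.array([0.1, 0.0, 0.5]))
out = transform_points(T, np.array([[1.0, 0.0, 0.0]]))
print(out)  # point (1,0,0) rotates to (0,1,0), then shifts to (0.1, 1.0, 0.5)
```

A grasp planner consumes exactly this kind of transform: it maps grasp points defined on the object model into the camera (and then robot) frame.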


our models are

  • precise
  • flexible
  • scalable
  • generalizable




our complete framework

  • offers real-world solutions
  • brings autonomous behaviours
  • provides an increased level of autonomy

for robotic manipulators


PRODUCTS

gürbüz

robust

surveillance

reconnaissance

tracking


We track.

a single water-surface object, day and night


Day and night.

removing false positives


More to be provided upon request (Product Info Bulletin).


seçkin

pick-and-place

bin picking

palletizing & depalletizing

component identification

inspection

single-instance picking


Deployment of our single-instance detector model to a KUKA arm.

multiple-instance picking


Deployment of our multiple-instance detector model to a KUKA arm.

bin picking


More to be provided upon request (Product Info Bulletin).

Acknowledgement. We express our appreciation to Bootstraptaste, ROBO, UNIROBOTICS, Türkiye Ministry of National Education, Imperial Computer Vision and Learning Lab, and Imperial College London.