
LarKC

The aim of the EU FP7 Large-Scale Integrating Project LarKC is to develop the Large Knowledge Collider (LarKC for short, pronounced "lark"), a platform for massive distributed incomplete reasoning that will remove the scalability barriers of currently existing reasoning systems for the Semantic Web. This will be achieved by:

  • Enriching the current logic-based Semantic Web reasoning methods with methods from information retrieval, machine learning, information theory, databases, and probabilistic reasoning.
  • Employing cognitively inspired approaches and techniques such as spreading activation, focus of attention, reinforcement, habituation, relevance reasoning, and bounded rationality.
  • Building a distributed reasoning platform and realizing it both on a high-performance computing cluster and via "computing at home".
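Among the cognitively inspired techniques listed above is spreading activation. As a minimal, self-contained sketch of the idea (the toy graph, decay factor and threshold below are invented for illustration and are not taken from LarKC):

```python
from collections import deque

def spread(graph, seed, decay=0.5, threshold=0.1):
    """Propagate activation from a seed node, attenuated by `decay` per hop."""
    activation = {seed: 1.0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        out = activation[node] * decay
        if out < threshold:                      # bounded effort: drop weak signals
            continue
        for nxt in graph.get(node, ()):
            if out > activation.get(nxt, 0.0):   # keep the strongest signal seen
                activation[nxt] = out
                queue.append(nxt)
    return activation

graph = {"drug": ["protein", "disease"],
         "protein": ["pathway"],
         "disease": ["pathway"]}
print(spread(graph, "drug"))
# {'drug': 1.0, 'protein': 0.5, 'disease': 0.5, 'pathway': 0.25}
```

The threshold is what makes the reasoning deliberately incomplete: activation that falls below it is simply never propagated, trading completeness for bounded work.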

The consortium is an interdisciplinary team of engineers and researchers in Computing Science, Web Science and Cognitive Science, well qualified to realize this ambitious vision. The Large Knowledge Collider will be an open architecture: researchers and practitioners from outside the consortium will be encouraged to develop and plug in their own components to drive parts of the system, making it a generic platform rather than a single reasoning engine.

The success of the Large Knowledge Collider will be demonstrated in three end-user case studies. The first, from the telecommunications sector, aims at real-time aggregation and analysis of location data obtained from mobile phones carried by the population of a city, in order to regulate city infrastructure functions such as public transport and to provide context-sensitive navigation information. The other two case studies are in the life-sciences domain, related respectively to drug discovery and carcinogenesis research. Both will demonstrate that the capabilities of the Large Knowledge Collider go well beyond what is possible with current Semantic Web infrastructure.

COMET

COMET (Common Ontology Modeling EnvironmenT) is a platform for parsing, storing, editing, metadata annotation, versioning, searching, categorizing, visualizing and querying ontologies (RDF, RDFS, OWL).
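COMET's actual parsing and storage go through the RDF toolkits named below, but the underlying data model is always the subject–predicate–object triple. A toy reader for a couple of N-Triples lines, just to show the shape of what gets stored and searched (the example URIs are invented):

```python
import re

# Matches:  <subject> <predicate> <object-uri-or-"literal"> .
TRIPLE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(?:<([^>]+)>|"([^"]*)")\s*\.')

def parse_ntriples(text):
    """Parse a tiny N-Triples subset into (subject, predicate, object) tuples."""
    triples = []
    for line in text.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            s, p, o_uri, o_lit = m.groups()
            triples.append((s, p, o_uri if o_uri is not None else o_lit))
    return triples

data = """\
<http://example.org/Cat> <http://www.w3.org/2000/01/rdf-schema#subClassOf> <http://example.org/Animal> .
<http://example.org/Cat> <http://www.w3.org/2000/01/rdf-schema#label> "Cat" .
"""
store = parse_ntriples(data)
# a "search": every rdfs:subClassOf triple in the store
sub = [t for t in store if t[1].endswith("subClassOf")]
print(sub[0][2])   # http://example.org/Animal
```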

The back-end of the platform is the COMET server, built around a Java EE 5 application server (GlassFish), with a clearly defined API for accessing all of its services.

The front-end was implemented as a Swing client packaged as a Protégé plugin. A stand-alone Eclipse RCP client, a plugin for TopBraid and a web-based client were also designed.

The COMET server integrates Jena, Pellet and Sesame as SPARQL query engines. User and role management protects the server's resources and services.
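The real access checks sit inside the Java EE server; purely as an illustration of role-protected services, here is a hypothetical guard (all names invented):

```python
from functools import wraps

def requires_role(role):
    """Decorator: only let users carrying `role` call the wrapped service."""
    def decorate(fn):
        @wraps(fn)
        def guarded(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise PermissionError(f"{user['name']} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return guarded
    return decorate

@requires_role("editor")
def delete_ontology(user, ontology_id):
    return f"deleted {ontology_id}"

admin = {"name": "alice", "roles": ["editor", "admin"]}
guest = {"name": "bob", "roles": ["reader"]}
print(delete_ontology(admin, "onto-42"))   # deleted onto-42
try:
    delete_ontology(guest, "onto-42")
except PermissionError as e:
    print(e)                               # bob lacks role 'editor'
```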

CCBS

A large-scale enterprise application used by companies selling domains and web hosting online. The application comprises an e-commerce website where end users create, order and manage accounts, an application server hosting the business tier, and a Plesk system that manages the domain and web-hosting side.

The core application contains connectors to domain registrars and various payment systems. A web-based administration and monitoring tool provides a centralized, easy way to manage the whole system (the old version was developed with Struts, the new one with RAP).
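Supporting several registrars and payment providers behind one core usually means coding against a shared connector interface. A hypothetical sketch of that pattern (the real CCBS connectors are Java integrations with provider APIs; these names are invented):

```python
from abc import ABC, abstractmethod

class PaymentConnector(ABC):
    """Common contract every payment-provider integration must satisfy."""
    @abstractmethod
    def charge(self, account: str, amount_cents: int) -> str:
        """Charge the account; return a provider transaction id."""

class FakeProvider(PaymentConnector):
    """Stand-in implementation, e.g. for testing the core without a provider."""
    def __init__(self):
        self.counter = 0
    def charge(self, account, amount_cents):
        self.counter += 1
        return f"txn-{self.counter}"

provider = FakeProvider()
print(provider.charge("acct-1", 999))   # txn-1
```

The core then only ever sees `PaymentConnector`, so adding a provider means adding one class, not touching the business tier.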

ERP system for textile industry

An integrated management solution for a textile manufacturing factory that includes the following modules:

  • Hardware – a network of barcode scanners and communication devices connected to a central application.
  • Production – the main program for production monitoring, salary generation, reports, etc.
  • Management – of stores, materials, documents, etc.
  • Payments.
  • Orders.
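The barcode network in the hardware module relies on standard symbologies. As one small, concrete piece of that world, here is EAN-13 check-digit validation — the published GS1 algorithm, not factory-specific code:

```python
def ean13_check_digit(digits12: str) -> int:
    """Check digit for the first 12 digits of an EAN-13 barcode.

    Digits in odd positions (1st, 3rd, ...) are weighted 1,
    digits in even positions are weighted 3.
    """
    assert len(digits12) == 12 and digits12.isdigit()
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

def ean13_valid(code: str) -> bool:
    """True when a scanned 13-digit code has a correct check digit."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))

print(ean13_valid("4006381333931"))   # True
print(ean13_valid("4006381333932"))   # False
```

Validating at scan time lets the central application reject misreads before they reach production records.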

OntoSpace

A tool to parse, store, visualize and edit ontologies in relational databases. StAX was used for parsing ontologies stored in XML files, and Hibernate for persisting the ontology model to the relational database. For visualization, a stand-alone Eclipse RCP client was developed.
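OntoSpace's parser used StAX, i.e. streaming pull parsing in Java. Python's standard library offers the same streaming idea via `iterparse`, which is close enough to sketch the approach: elements are handled as events and freed immediately, so the whole ontology file never has to fit in memory (the RDF/XML snippet below is invented for the example):

```python
import io
import xml.etree.ElementTree as ET

RDF_XML = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <owl:Class rdf:about="http://example.org/Cat"/>
  <owl:Class rdf:about="http://example.org/Dog"/>
</rdf:RDF>"""

classes = []
for event, elem in ET.iterparse(io.StringIO(RDF_XML), events=("end",)):
    if elem.tag == "{http://www.w3.org/2002/07/owl#}Class":
        about = elem.get("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about")
        classes.append(about)
        elem.clear()   # release the element right away, streaming-style

print(classes)  # ['http://example.org/Cat', 'http://example.org/Dog']
```

In OntoSpace the equivalent StAX events fed the Hibernate-mapped model, which persisted each construct to the relational schema as it was read.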
