
Keynote Lectures

Communicability in Networked Systems - Implications for Stability, Spatial Efficiency and Dynamical Process
Ernesto Estrada, University of Strathclyde, United Kingdom

Scientific Workflows in the Era of Clouds
Péter Kacsuk, LPDS, MTA SZTAKI, Hungary

 

Communicability in Networked Systems - Implications for Stability, Spatial Efficiency and Dynamical Process

Ernesto Estrada
University of Strathclyde
United Kingdom
 

Brief Bio
Professor Estrada has an internationally leading reputation for shaping and developing the study of complex networks. His expertise spans the areas of network structure, algebraic network theory, dynamical systems on networks and the study of random models of networks. He has a distinguished track record of high-quality publications, which have attracted more than 8,000 citations; his h-index (the number of papers with at least h citations) is 51. His publications are in the areas of network theory and its applications to social, ecological, engineering, physical, chemical and biological real-world problems. Professor Estrada has published two textbooks on network science with Oxford University Press, in 2011 and 2015 respectively. He has shown continuous international leadership in his field, having been an invited and plenary speaker at the major conferences in network science and applied mathematics.


Abstract
This keynote lecture will motivate and introduce the concept of network communicability and give a few examples of its application to biological, social, infrastructural and engineering networked systems. Building on this concept, we will show how a Euclidean geometry emerges naturally from the communicability patterns in networked complex systems. This communicability geometry characterises the spatial efficiency of networks. We will then show how the communicability function allows a natural characterisation of network stability and of a network's robustness to external perturbations. Finally, we will show that theoretical parameters derived from the communicability function determine the robustness of dynamical processes taking place on networks, such as diffusion and synchronisation. Throughout, the lecture will combine rigorous results with illustrative real-world examples.
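For orientation, the communicability between two nodes p and q of a network with adjacency matrix A is commonly defined as a weighted sum over all walks joining them, and the associated communicability distance is what underlies the Euclidean geometry mentioned above (the exact weighting and variants used in the lecture may differ):

    G_{pq} = \sum_{k=0}^{\infty} \frac{\left(A^{k}\right)_{pq}}{k!} = \left(e^{A}\right)_{pq},
    \qquad
    \xi_{pq}^{2} = G_{pp} + G_{qq} - 2\,G_{pq}

Here (A^k)_{pq} counts the walks of length k between p and q, so longer walks are penalised by the factor 1/k!, and ξ_{pq} behaves as a Euclidean distance between the nodes.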



 

 

Scientific Workflows in the Era of Clouds

Péter Kacsuk
LPDS, MTA SZTAKI
Hungary
 

Brief Bio
Professor Péter Kacsuk is the Head of the Laboratory of Parallel and Distributed Systems (LPDS) at MTA SZTAKI. He received his MSc and university doctorate degrees from the Technical University of Budapest in 1976 and 1984, respectively. He received the kandidat degree from the Hungarian Academy of Sciences in 1989. He habilitated at the University of Vienna in 1997. He received his professor title from the Hungarian President in 1999 and the Doctor of the Academy (DSc) degree from the Hungarian Academy of Sciences in 2001. He has been a part-time full professor at the Cavendish School of Computer Science of the University of Westminster in London and at the Eötvös Loránd University of Science in Budapest since 2001. He has served several times as a visiting scientist or professor at universities in Austria, England, Germany, Spain, Australia and Japan. He has published two books, two lecture notes and more than 200 scientific papers on parallel computer architectures, parallel software engineering and Grid computing. He is co-editor-in-chief of the Journal of Grid Computing, published by Springer.


Abstract
The use of scientific workflows (or simply workflows) has a long history in computer science. They became particularly popular when large distributed computing systems such as the computational grid became available for solving very complex scientific problems. Over this history many approaches and concrete workflow systems have been developed, and many of them have been used intensively by scientific communities. However, this very variety of available workflow systems raises several important issues that should be solved in order to make workflows even more widely accepted and used in everyday science.

One important issue is the reuse and reproducibility of workflows. Scientific communities using different kinds of workflow systems would like to collaborate and reuse the workflows developed by other communities. The SHIWA European project has proposed several solutions to this problem. Interestingly, its method, called coarse-grained workflow interoperability, became truly practical only when clouds appeared. Cloud systems provide the technology by which workflows become genuinely reproducible, shareable and even reusable inside new workflows.

Clouds also make it possible to construct workflows as infrastructures that can be deployed dynamically in the cloud when needed, so that other workflows can use them. The Workflow as a Service (WaaS) concept has enabled the introduction of so-called infrastructure-aware workflows, a new step in making workflows even more flexible.

The other direction in which the WaaS concept can fruitfully be used is the creation of workflows that enable the processing of very large scientific data sets. A new workflow system called Flowbster has been developed based on the concepts of workflow choreography and WaaS. It was designed to create efficient data pipelines in clouds through which very large data sets can be processed efficiently. A Flowbster workflow is deployed in the target cloud as a virtual infrastructure through which the data to be processed flows, and as the data flows through the workflow it is transformed as the business logic of the workflow defines. Instead of the enactor-based workflow concept, Flowbster applies the service choreography concept, in which workflow nodes communicate directly with each other. Workflow nodes are able to recognize whether they can be activated for a given data set without the intervention of a central control service such as the enactor in service orchestration workflows. As a result, Flowbster workflows implement a much more efficient data path through the workflow than service orchestration workflows do.

A Flowbster workflow works as a data pipeline, enabling the exploitation of pipeline parallelism, parallel-branch parallelism and node-scalability parallelism. The workflow can be deployed in the target cloud on demand using the underlying Occopus cloud deployment and orchestration tool. Occopus guarantees that the workflow can be deployed on any major type of IaaS cloud (OpenStack, OpenNebula, Amazon, CloudSigma). Performance results show the viability of running Flowbster workflows even on top of hybrid clouds.
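As a rough illustration of the choreography idea described above, here is a minimal hypothetical sketch in Python (not Flowbster's actual API, configuration format or Occopus descriptors): each node knows only its own inputs and its downstream targets, fires as soon as all inputs for a given data set have arrived, and pushes its result directly to the next nodes, so no central enactor sits on the data path.

    # Minimal sketch of service choreography for a data pipeline.
    # Hypothetical illustration only; it does not use Flowbster's or Occopus's APIs.

    class Node:
        """A workflow node that activates itself once all its inputs have arrived."""

        def __init__(self, name, n_inputs, transform, targets=None):
            self.name = name
            self.n_inputs = n_inputs      # how many input items one activation needs
            self.transform = transform    # the node's business logic
            self.targets = targets or []  # downstream nodes to push results to
            self.buffer = {}              # data-set id -> items received so far

        def receive(self, dataset_id, item):
            """Called directly by upstream nodes; no central enactor is consulted."""
            items = self.buffer.setdefault(dataset_id, [])
            items.append(item)
            if len(items) == self.n_inputs:      # all inputs present: the node fires
                result = self.transform(items)
                del self.buffer[dataset_id]
                for target in self.targets:      # push the result straight downstream
                    target.receive(dataset_id, result)

    # A tiny two-stage pipeline: parse -> aggregate -> sink.
    results = {}
    sink = Node("sink", n_inputs=1, transform=lambda items: results.update(out=items[0]))
    aggregate = Node("aggregate", n_inputs=2, transform=sum, targets=[sink])
    parse = Node("parse", n_inputs=1, transform=lambda items: len(items[0]), targets=[aggregate])

    # Two chunks of the same data set flow through the pipeline independently.
    parse.receive("ds-1", "first chunk of data")
    parse.receive("ds-1", "second chunk")
    print(results["out"])  # sum of the two chunk lengths

Because each activation is triggered locally by the arrival of data rather than by a central controller, pipeline parallelism, parallel-branch parallelism and node scaling can be exploited without a central bottleneck on the data path.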


