
PhD defences at the Faculty of Engineering Science

2017

Geert Buckinx
Department: Mechanical Engineering
PhD defence: 04 January 2017
Supervisor: Prof. dr. ir. Martine Baelmans
Funding: IWT
E-mail: [email protected]

Macro-Scale Flow and Heat Transfer in Systems with Periodic Solid Structures

Introduction / Objective
Heat transfer devices often contain periodic solid structures such as tube bundles or fin arrays to enhance the heat transfer from a heat source (e.g. a computer chip) to a fluid (e.g. coolant air). For the design of compact heat transfer devices, models that describe the flow and heat transfer within these devices are crucial. Unfortunately, direct numerical simulation (DNS) of the detailed flow and heat transfer phenomena in periodic solid structures is computationally expensive and produces an enormous amount of data. Therefore, a macro-scale modelling approach has been developed to achieve data reduction and model reduction.

Research Methodology
The presented macro-scale description makes it possible to model the physically relevant overall flow and heat transfer features in a heat transfer device from a numerical simulation of the flow and heat transfer around just a single solid structure, i.e. a unit cell. The macro-scale modelling has been developed through three steps:

[Diagram: the detailed temperature field from DNS is spatially averaged into an overall (macro-scale) temperature field; periodic similarities allow the macro-scale quantities to be determined from a single unit-cell simulation.]

Results & Conclusions
Data reduction w.r.t. DNS is achieved through spatial averaging, i.e. filtering:
• By filtering the flow in a heat transfer device with a double volume-average filter, we can extract the overall flow properties (overall pressure gradient, interfacial force and momentum dispersion source) when the flow is developed.
• With the aid of a matched filter, we can extract the overall exponentially varying fluid temperature when the solid structures have the same constant temperature.
• With a double volume-average filter, the overall linearly varying fluid and solid temperature can be found when the solid structures have the same interfacial heat flux.

Model reduction is possible because the flow and temperature fields contain periodic similarities:
• The overall flow properties and the overall heat transfer coefficients between the fluid and solid structures are all spatially constant for these filters, so they can be determined from a numerical experiment on a unit cell.
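The filtering idea above can be sketched numerically. The following toy example (all fields, dimensions and values are invented for illustration and are not taken from the thesis) applies a box filter exactly one unit-cell period wide to a 1-D "detailed" temperature field and recovers the smooth macro-scale variation:

```python
import numpy as np

# Toy 1-D illustration (values and dimensions invented, not from the
# thesis): a "detailed" periodic temperature field is smoothed with a box
# filter exactly one unit-cell period wide, leaving only the overall
# (macro-scale) variation.
L_cell = 1.0                         # unit-cell period
n_per_cell = 51                      # grid points per cell (odd, for centring)
n_cells = 20
x = np.linspace(0.0, n_cells * L_cell, n_cells * n_per_cell, endpoint=False)

T_macro = 300.0 + 5.0 * x                                  # smooth macro-scale trend
T_detail = T_macro + 2.0 * np.sin(2 * np.pi * x / L_cell)  # plus periodic fluctuation

# Box ("volume-average") filter spanning one period cancels the fluctuation.
w = np.ones(n_per_cell) / n_per_cell
T_filtered = np.convolve(T_detail, w, mode="same")

# Away from the domain boundaries the filtered field matches the macro trend.
interior = slice(n_per_cell, -n_per_cell)
err = np.max(np.abs(T_filtered[interior] - T_macro[interior]))
print(f"max interior deviation from the macro-scale trend: {err:.2e}")
```

The averaging of a sinusoid over exactly one period vanishes, which is the 1-D analogue of a volume-average filter removing cell-scale fluctuations.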

Major publication
Buckinx, G., Baelmans, M. (2015). Macro-scale heat transfer in periodically developed flow through isothermal solids. Journal of Fluid Mechanics, 780, 274-298.

Valentine Vanheule
Department: Mechanical Engineering
PhD defence: 09 January 2017
Supervisor: Prof. dr. ir. Jos Vander Sloten
Co-supervisor: Prof. dr. Jan Victor
Funding: IWT in cooperation with Materialise NV, KU Leuven and UZ Gent

Application of subject-specific models for surgical planning of knee implants

Introduction and objectives
Total knee arthroplasty is a surgical procedure with high long-term reliability, yet up to a fifth of primary implant patients remain unsatisfied. This PhD dissertation investigates the possibility of extending preoperative surgical planning with dynamic computer simulations. The main goal is to develop an accurate computational knee model with subject-specific geometry that allows for implant position optimisation. The research hypothesis is that simulations of native knee behaviour could serve as an optimisation objective in the search for the optimal implant position.

Research Methodology
1. In vitro validation with a dynamic knee squat simulator, performed on 4 cadaver legs
• Validation of the healthy and implanted knee model in terms of tibio-femoral kinematics and collateral ligament length changes
• Experimentally introduced implant malalignment with custom printed tibial inserts
2. Search for the optimal implant position using surrogate models to improve computational efficiency
• Optimisation objective 1: approximate physiologic kinematics
• Optimisation objective 2: approximate physiologic ligament behaviour
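The surrogate-model step can be illustrated with a toy example (the cost function and all numbers below are invented; the thesis uses full dynamic knee simulations, not this formula). A few expensive evaluations are replaced by a cheap quadratic surrogate whose minimiser approximates the optimal implant rotation:

```python
import numpy as np

# Toy illustration of surrogate-based optimisation (cost model invented).
def expensive_simulation_cost(rotation_deg):
    # stand-in for a dynamic knee simulation: deviation from native
    # kinematics, minimal near +2 degrees in this toy model
    return (rotation_deg - 2.0) ** 2 + 0.5

samples = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])   # sampled rotations (deg)
costs = np.array([expensive_simulation_cost(r) for r in samples])

# Fit a quadratic surrogate and take its analytic minimiser -b / (2a).
a, b, c = np.polyfit(samples, costs, deg=2)
optimal_rotation = -b / (2.0 * a)
print(f"surrogate-optimal rotation: {optimal_rotation:.2f} deg")
```

Once the surrogate is fitted, candidate implant positions can be evaluated at negligible cost, which is the computational-efficiency argument made above.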

Results
• Developed subject-specific knee model
• Performed experimental validation study
  - Good agreement with experiments for the healthy and implanted knee model
  - Simulations could predict the impact of component malrotation
• Demonstrated ability to optimise implant position
  - After introducing the implant, the kinematic behaviour of the knee changes; after optimisation, the natural kinematic behaviour is restored
  - After introducing the implant, collateral ligament behaviour is altered; after optimisation, the natural ligament behaviour is restored
[Figures: tibio-femoral kinematics (40°-100° flexion) and ligament elongation (%) of native MCL/LCL versus TKA, for the default planning and after optimisation.]

Conclusions
• An accurate subject-specific knee model can be developed that allows for implant position optimisation
• Approximation of physiological knee function after TKA is feasible

Major publication
Vanheule, V., Delport, H. P., Andersen, M. S., Scheys, L., Wirix-Speetjens, R., Jonkers, I., Victor, J., Vander Sloten, J. Evaluation of predicted knee function for component malrotation in total knee arthroplasty. Medical Engineering & Physics. 2016, In Press.

Arne van Stiphout
Department: Electrical Engineering (ESAT)
PhD defence: 10 January 2017
Supervisor: Prof. dr. ir. Geert Deconinck
Assessors: Prof. dr. ir. R. Belmans, W. D’haeseleer
Funding: Geconcerteerde Onderzoeksactie KUL
E-mail: [email protected]

Short-term Operational Flexibility in Long-term Generation Expansion Planning

Introduction
An important aspect of realizing a reliable electricity supply is ensuring that sufficient short-term operational flexibility is available to maintain the short-term supply-demand balance and thus keep the frequency at 50 Hz. The strong growth of the share of variable renewables such as wind and solar energy in the electricity supply, driven by the European climate goals, increases the need for such flexibility. This has an impact on the long-term planning of the power system.

Research Methodology
To study the impact of short-term operational flexibility on long-term generation expansion planning, a new planning tool has been developed that captures the complexity of power system operation while keeping computational requirements feasible. The need for flexibility is modelled as driven by the variability and uncertainty of renewable electricity production. The supply of flexibility is modelled for four different types of flexibility sources: supply-side, demand-side and storage flexibility, and flexibility via interconnection with other power systems. This model is applied to a test system, determining the best mix of technologies to supply the demand for electricity given a certain goal for the share of renewable energy.

Results & Conclusions
Keeping the balance can become expensive in a highly renewable power system. Using today’s strategies for dealing with variability and uncertainty, and conventional sources of flexibility (i.e. supply-side flexibility), the cost of ensuring that sufficient flexibility is available to keep the short-term balance increases strongly as the share of renewable energy increases. The biggest driver of this cost increase is the uncertainty of renewable electricity production, and the reserves that have to be held to address it.

This can be solved by using an improved reserve strategy and by using alternative sources of flexibility. An improved reserve sizing and allocation strategy can significantly decrease the uncertainty-related costs of integrating a large share of renewable energy: a more dynamic sizing strategy matches the amount of reserves better to the uncertainty present in the system, and a more dynamic allocation strategy uses the flexibility available in the system more efficiently.

The added value of three energy storage technologies, a demand response technology and the possibility to exchange flexibility over interconnection was studied. Thanks to the combined flexibility of these alternative sources and the conventional sources, the cost of keeping the balance in a highly renewable power system will not be much higher than it is today!
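The dynamic-sizing idea can be sketched with synthetic data (this is an illustration of the general principle, not the thesis model; the error distribution, quantile level and bin edges are all assumed). Reserves are sized as a high quantile of forecast errors, either once for all hours (static) or per forecast-level bin (dynamic):

```python
import numpy as np

# Illustrative sketch, not the thesis model: reserves sized as the 99%
# quantile of synthetic wind forecast errors. A static strategy uses one
# quantile over all hours; a dynamic strategy re-sizes per forecast-level
# bin, holding less reserve in hours with little forecast wind output.
rng = np.random.default_rng(0)
forecast = rng.uniform(0.0, 1.0, 10_000)            # forecast output (p.u.)
error = rng.normal(0.0, 0.05 + 0.10 * forecast)     # error grows with output

static_reserve = np.quantile(np.abs(error), 0.99)   # one size for all hours

bins = np.digitize(forecast, [0.25, 0.5, 0.75])     # four forecast-level bins
dynamic_reserve = np.array(
    [np.quantile(np.abs(error[bins == b]), 0.99) for b in range(4)]
)
avg_dynamic = np.mean(dynamic_reserve[bins])        # average reserve held

print(f"static reserve:          {static_reserve:.3f} p.u.")
print(f"average dynamic reserve: {avg_dynamic:.3f} p.u.")
```

In this toy setting the dynamic strategy holds less reserve on average because low-wind hours carry little forecast uncertainty, mirroring the cost argument above.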

Major publication
A. van Stiphout, K. De Vos, G. Deconinck (2017). The Impact of Operating Reserves on Investment Planning of Renewable Power Systems. IEEE Transactions on Power Systems, 32 (1), 378-388.


Robert Heinrich Renner
Department: Electrical Engineering (ESAT)
PhD defence: 10 January 2017
Supervisor: Prof. dr. ir. Dirk Van Hertem
Funding: FP7

Interaction of HVDC grids and AC power systems: operation and control

Introduction / Objective
A new transmission system is required that can connect the huge renewable energy resources of northern and southern Europe with the load centres located more centrally. An HVDC grid is considered a likely option to connect these sources. Such a grid should be able to cover long distances, also undersea, and provide better economic performance than traditional AC systems. The existing AC system will not become obsolete; it will be operated in parallel with the new system. The interaction between both systems therefore has to be analysed and coordinated.

Research Methodology
The research results are achieved through theoretical descriptions of the problems and validation of the results using simulations.

Results & Conclusions
This thesis offers new insights into DC energy balancing at all time frames and for the first time introduces ancillary services for DC grids. The main outcomes of this work are:
1. The definition of DC ancillary services, including a basic set.
2. The presentation of a method to calculate the buffer energy in DC grids and the suggestion of an operational DC voltage band.
3. The introduction of the first DC Voltage Restoration Reserve (DC Secondary Reserve) controller, which can handle DC control areas without the need for a complete data set.
4. The first analyses of the functionality and limitations of DC choppers in DC grids.
5. A description of the required reaction, activation and provision times for DC reserve-providing generators and the presentation of a method to achieve them.
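The notion of buffer energy in a DC voltage band can be illustrated with a back-of-the-envelope calculation (all numbers below are assumed for illustration and are not the thesis method or its values): the energy a grid can absorb or release between the band limits is the capacitive energy difference over the total grid capacitance.

```python
# Back-of-the-envelope illustration (all values assumed, not from the
# thesis): the buffer energy available between the limits of an
# operational DC voltage band is the capacitive energy difference
# E = 1/2 * C * (V_max**2 - V_min**2) over the total grid capacitance.
C_total = 150e-6        # total grid capacitance (F), assumed
V_nom = 640e3           # nominal DC voltage (V), assumed
V_max = 1.05 * V_nom    # upper edge of the assumed voltage band
V_min = 0.95 * V_nom    # lower edge

buffer_energy_J = 0.5 * C_total * (V_max**2 - V_min**2)
print(f"buffer energy across the band: {buffer_energy_J / 1e3:.0f} kJ")
```

The small size of this energy buffer relative to power imbalances is why DC voltage deviates much faster than AC frequency, motivating dedicated DC reserves.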

Major publications
1. R. H. Renner and D. Van Hertem, Ancillary Services in Electric Power Systems with HVDC Grids, IET Generation Transmission & Distribution, vol. 9, no. 11, pp. 1179–1185, Aug. 2015.
2. R. H. Renner, D. Van Hertem, Potential of using DC voltage restoration reserve for HVDC grids, Elsevier Electric Power Systems Research, 2016.
3. CENELEC TC8X, WG 06: HVDC Grid Systems - Guideline and Parameter Lists for Functional Specifications, 2017.
4. Technical Brochure 657: Cigré Working Group B4-56: Guidelines for the preparation of "connection agreements" or "Grid Codes" for multi-terminal DC schemes and DC Grids, 2016.
5. R. H. Renner, J. Beerten, D. Van Hertem, Optimal DC Reference Voltage in HVDC Grids, in Proc. IET International Conference on AC and DC Power Transmission ACDC 2015, Birmingham, UK, February 2015.

Thanh Le Van
Department: Computer Science
PhD defence: 11 January 2017
Supervisor: Prof. dr. Luc De Raedt
Co-supervisor: Prof. dr. Siegfried Nijssen
Co-supervisor: Prof. dr. Kathleen Marchal
E-mail: [email protected]

Rank matrix factorisation and its applications

Introduction
Rank data, in which each row is a complete or partial ranking of the available items (columns), is ubiquitous. It has been used to represent, for instance, the preferences of users and the outcomes of sports events. While rank data has been analysed in data mining, pattern mining in such data has so far received little attention. To alleviate this state of affairs, we first introduce a generic rank matrix factorisation framework based on semiring theory for pattern set mining in rank data. We then successfully apply the framework to discover different types of patterns in rank matrices, e.g. sparse RMF and ranked tiling, and to integrate heterogeneous molecular data to discover cancer subtypes.

Cancer subtyping
We successfully applied the sRMF framework to integrate molecular data, including Boolean mutation data, numeric gene expression and prior knowledge encoded in a biological network, to simultaneously discover cancer subtypes and subtype-specific features.
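The input representation can be sketched as follows (this shows only how a rank matrix is obtained from numeric scores; the factorisation itself is not reproduced, and the score matrix is invented):

```python
import numpy as np

# Hedged sketch of the input representation only: a numeric score matrix,
# e.g. user ratings of items, is converted to a rank matrix in which each
# row ranks the columns.
scores = np.array([
    [4.0, 1.0, 3.0, 2.0],    # user 0 likes item 0 most
    [2.0, 4.0, 1.0, 3.0],
    [1.0, 2.0, 4.0, 3.0],
])

# Rank 1 = lowest score, rank m = highest (double-argsort trick).
ranks = scores.argsort(axis=1).argsort(axis=1) + 1
print(ranks)
```

Pattern mining then operates on `ranks` rather than on the raw scores, which makes the analysis invariant to monotone rescaling of each row.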

Major publications
1. Le Van, T., Nijssen, S., van Leeuwen, M., and De Raedt, L. Semiring rank matrix factorisation. IEEE Transactions on Knowledge and Data Engineering, under revision.
2. Le Van, T., van Leeuwen, M., Fierro, A. C., De Maeyer, D., Van den Eynden, J., Verbeke, L., De Raedt, L., Marchal, K., and Nijssen, S. Simultaneous discovery of cancer subtypes and subtype features by molecular data integration. Bioinformatics 32(17), i445–i454, 2016.

Rien Quirynen
Department: Electrical Engineering (ESAT)
PhD defence: 13 January 2017
Supervisor: Prof. dr. Stefan Vandewalle
Co-supervisor: Prof. dr. Moritz Diehl
Funding: FWO
E-mail: [email protected]

Numerical Simulation Methods for Embedded Optimization

Introduction / Objective
In the context of model predictive control (MPC) or moving horizon estimation (MHE), the control or estimation task corresponds to an online sequence of dynamic optimization problems. The system parameters or actuation profile are optimized over a certain time horizon using the past measurements and directly taking into account the problem objective, the system dynamics and additional constraints or specifications.

Research Methodology
This thesis considers the development of real-time feasible numerical algorithms for embedded simulation and optimization, and it includes open-source software implementations and real-world control applications. More specifically, the following research topics are discussed:
• Numerical simulation methods with sensitivity propagation
• Tailored computational exploitation of dynamic system structures
• Implicit integrators within a lifted Newton optimization algorithm
• Inexact Newton optimization with iterated sensitivities (INIS)
• ACADO code generation tool and real-world control applications

Results & Conclusions
• Efficient sensitivity propagation for implicit Runge-Kutta (IRK) methods
• Symmetric Hessian propagation for continuous- and discrete-time analysis
• Lifted Newton-type collocation with efficient Jacobian approximations
• INIS exactly recovers the local contraction rate of the forward scheme
• NMPC results on a two-stage turbocharged gasoline engine

                      Multiple Shooting   Lifted Collocation   Direct Collocation
Step size control             +                    0                    0
Embedded solvers              +                    +                    -
Parallelizability             +                    +                    0
Local convergence             0                    +                    +
Internal iterations           -                    +                    +
Sparsity dynamics             -                    -                    +
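A minimal sketch of the kind of implicit integrator discussed above, assuming the implicit midpoint rule on the illustrative linear test problem dx/dt = -lam*x (this is a generic textbook IRK step with a Newton inner loop, not the thesis' lifted or code-generated solvers):

```python
import math

# Implicit Runge-Kutta (implicit midpoint) step solved with Newton
# iterations; test problem and all values are illustrative.
lam = 10.0

def f(x):
    return -lam * x

def irk_midpoint_step(x0, h, newton_iters=10):
    # stage equation k = f(x0 + h/2 * k), solved for k by Newton's method
    k = f(x0)                        # initial guess: explicit evaluation
    for _ in range(newton_iters):
        residual = k - f(x0 + 0.5 * h * k)
        jac = 1.0 + 0.5 * h * lam    # d(residual)/dk for this linear f
        k -= residual / jac
    return x0 + h * k

x, h = 1.0, 0.01
for _ in range(100):                 # integrate to t = 1
    x = irk_midpoint_step(x, h)
print(f"x(1) = {x:.6e}, exact = {math.exp(-lam):.6e}")
```

For stiff dynamics such implicit stages must be solved iteratively; propagating sensitivities through exactly this Newton loop is what the efficiency results above are about.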

Quirynen, R., Gros, S., Houska, B., and Diehl, M. Lifted collocation integrators for direct optimal control in ACADO toolkit. Mathematical Programming Computation (preprint available at Optimization Online, 2016-05-5468) (2016).
Quirynen, R., Gros, S., and Diehl, M. Inexact Newton-type optimization with iterated sensitivities. SIAM Journal on Optimization (preprint available at Optimization Online, 2016-06-5502) (2016).

Vukasin Strbac
Department: Mechanical Engineering
PhD defence: 13 January 2017
Supervisor: Prof. dr. ir. Jos Vander Sloten
Co-supervisor: dr. ir. Nele Famaey
Funding: FP7, FWO
E-mail: [email protected]

Fast Finite Element Simulation Using GPGPU Technology in Soft Tissue Biomechanics

Introduction / Objective
As the finite element method becomes more pervasive in scientific and engineering endeavors, its application space grows as well. Likewise, computational architectures are changing and computational power advances rapidly. It is prudent to re-examine the performance of existing algorithms on novel hardware and implementations. This work presents an analysis of an efficient finite element implementation in elasticity on modern General Purpose Graphics Processing Units (GPGPUs). The potential for (1) real-time execution, (2) research and (3) clinical application is examined.

Research Methodology
The Total Lagrangian Explicit Dynamic (TLED) finite element algorithm is implemented, optimized and tested on a variety of GPU devices. The material and element library includes isotropic and anisotropic materials. Gaussian integration is adopted and three integration schemes are used: under-integration (UI), selective-reduced (SR) and full integration (FI). Solutions are performed in single and double precision, on a range of devices and problem sizes, and are verified against established solvers in industry (Abaqus) and academia (FEAP).

Results & Conclusions
This work shows the opportunities, challenges and limitations of using GPUs with explicit FE algorithms. Performance was analyzed, and its sensitivity to all pertinent variables (single (fp32) and double (fp64) precision, integration scheme, problem size and anisotropy) was elucidated. Applicability of the approach is demonstrated through excellent speedups:
• Real-time problems: 30-250x
• Research, extension-inflation: 47-300x
• Realistic clinical scenario: 10-17x

[Figures: speedup results using the GTX980 device on an idealized extension-inflation test over a range of problem sizes and integration schemes in fp64; solutions of realistic, clinically relevant models of abdominal aortic aneurysm pressurization, involving fiber-reinforced anisotropic materials and higher-order integration.]

Major publication
Strbac, V., Vander Sloten, J., Famaey, N. (2015). Analyzing the potential of GPGPUs for real-time explicit finite element analysis of soft tissue deformation using CUDA. Finite Elements in Analysis and Design, 105:79–89.

Devin Verreck
Department: Electrical Engineering (ESAT)
PhD defence: 24 January 2017
Supervisor: Prof. dr. ir. Guido Groeseneken
Co-supervisor: Prof. dr. ir. Bart Sorée
Funding: IWT, Imec
E-mail: [email protected]

Quantum mechanical transport towards the optimization of heterostructure tunnel field-effect transistors

Introduction / Objective
The driving force behind the enormous increase in computational power of everyday digital electronics since the 1960s has been the scaling of the metal-oxide-semiconductor field-effect transistor (MOSFET). Today, the MOSFET supply voltage can no longer be scaled at the same pace as the device dimensions, resulting in an untenable increase in the power density of integrated circuits. The tunnel FET (TFET) provides a potential solution: its operating principle, based on band-to-band tunneling (BTBT), enables operation at a lower supply voltage. Conventional silicon implementations, however, have shown insufficient ON-currents. Different material and configuration options are therefore under investigation: III-V materials, heterostructures, dopant pockets, point and line tunneling configurations, strain... To assess these different TFET design options, a predictive quantum mechanical simulator is needed.

Research Methodology
We have developed a fully quantum mechanical simulator for BTBT in TFETs based on the multi-band envelope function formalism, which we named Pharos. Numerically, it combines finite differences with a spectral approach, which enables the use of band structure models that capture the full first Brillouin zone in a computationally efficient way. Our approach allows for performance predictions and optimization of heterostructure TFETs, including arbitrary non-uniform strain profiles, and enables the comparison of different III-V material options and configurations. We implemented a two-, fifteen- and thirty-band model, each subsequent model enabling the simulation of a wider variety of configurations.
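To give a feel for the BTBT physics, here is the standard textbook WKB estimate of the tunneling probability through a triangular barrier. This is a far cruder approximation than the multi-band envelope-function model of the thesis, and the material parameters below (mass, band gap, field) are assumed round numbers, not values from the text:

```python
import math

# Textbook WKB estimate of band-to-band tunneling through a triangular
# barrier; material numbers are assumed for illustration.
q = 1.602e-19           # elementary charge (C)
hbar = 1.055e-34        # reduced Planck constant (J s)
m0 = 9.109e-31          # electron rest mass (kg)
m_r = 0.05 * m0         # reduced tunneling mass, assumed for a III-V
Eg = 0.5 * q            # direct band gap (J), 0.5 eV assumed
F = 1e8                 # electric field (V/m)

# T ~ exp(-4 * sqrt(2 * m_r) * Eg**(3/2) / (3 * q * hbar * F))
T = math.exp(-4.0 * math.sqrt(2.0 * m_r) * Eg**1.5 / (3.0 * q * hbar * F))
print(f"WKB tunneling probability: {T:.2e}")
```

The exponential sensitivity to band gap and field in this estimate is why III-V materials and heterostructure band alignment matter so much for TFET ON-currents.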

Results & Conclusions
With our simulator Pharos, we have obtained the following conclusions for III-V direct-bandgap TFETs:
• Gate control counteracts size-induced confinement when scaling the body thickness, reaching an optimum around 10 nm.
• The point TFET is preferred over the line TFET: it obtains similar performance and is experimentally easier to fabricate.
• Pocketed heterostructure point TFETs can be optimized to outperform MOSFETs at low supply voltage (<0.5 V).
• The pTFET source doping profile can be modified to obtain performance similar to nTFETs.
• Strain can additionally boost performance.

[Fig. 1: Optimized heterostructure nTFET (top) and pTFET (bottom) with an improved source design; dotted regions are InGaAs, solid regions GaAsSb. Fig. 2: TFET performance metrics ION at VDD and I60 (A/µm), at IOFF = 10 pA/µm and VDD = 0.3 V, of various III-V configurations (p-i-n, p-n-i-n and p-p-n-i-n homo- and heterostructures, strained variants and a resonant TFET) simulated and designed with Pharos; the green shaded region is the target region.]

Major publications
D. Verreck et al. (2016). The tunnel field-effect transistor, Wiley Encyclopedia of Electrical and Electronics Engineering, 1-24.
D. Verreck et al. (2016). IEEE Elec. Dev. Lett., 37(3), 337-340.
D. Verreck et al. (2015). J. Appl. Phys., 118, 134502.
D. Verreck et al. (2014). Appl. Phys. Lett., 105, 243506.


Rafael Bachiller Soler
Department: Computer Science
PhD defence: 25 January 2017
Supervisor: Prof. dr. Danny Hughes
Co-supervisor: -
Funding: iMinds
E-mail: [email protected]

Middleware for Mobile Crowd Sensing Applications

Introduction
Mobile crowd sensing applications realise an innovative approach to people-centric and environment-centric sensing in contexts such as smart buildings, road conditions and e-healthcare. These applications involve a large number of participants using devices that are heterogeneous in terms of hardware (e.g. the on-board sensors available), software (e.g. operating system) and connectivity (e.g. WiFi, 3G and online social networks). Nevertheless, contemporary mobile crowd sensing platforms give poor consideration to user dynamism (mobility across networks, mobility across devices and context-awareness), and lack support for reducing deployment and development efforts and for efficient participant recruitment.

Research Methodology
This research provides the following three main contributions to efficiently support mobile crowd sensing applications, improving user participation and reducing development and deployment efforts:
• User Component: a representation of the user as a first-class software component, enabling developers to reconfigure and inspect users (mobility between contexts).
• User Binding: a consistent communication and addressing abstraction that maps to multiple underlying communication channels, including online social networks (mobility between networks). It also supports selective communication with the user or with a device (mobility between devices).
• ExNihilo middleware: a modular and reconfigurable component-based runtime for client-side JavaScript running inside a web browser, providing secure and flexible peer-to-peer communication between browsers without installing additional software (deployment and development efforts).

Results & Conclusions
In the SmartOffice use case, results from a two-week experiment reveal that the prototype @migo increases participation by 8% and participant availability by 24%. In the SmartTeaching use case, ExNihilo does not add significant performance overhead compared with state-of-the-art web-based solutions, while lowering the software requirements for participants. In conclusion, the contributions improve support for mobile crowd sensing applications by enhancing user support and reducing development and deployment efforts.

Major publications
Rafael Bachiller Soler et al. “@migo: A comprehensive middleware solution for participatory sensing applications”. Proceedings of the 14th International Symposium on Network Computing and Applications (NCA’15), pages 1-8. IEEE. September 2015.
Rafael Bachiller Soler et al. “Enabling massive scale sensing with the @LooCI mobile sensing framework”. Proceedings of the 10th International Conference on Embedded and Ubiquitous Computing (EUC 2012), pages 461-468. IEEE/IFIP. December 2012.

Dmitry Yakimets
Department: Electrical Engineering (ESAT)
PhD defence: 26 January 2017
Supervisor: Prof. dr. ir. Kristin De Meyer
Co-supervisor: dr. ir. Nadine Collaert
Funding: Imec
E-mail: [email protected]

Vertical Transistors: a slippery path towards the ultimate CMOS scaling

Introduction / Objective
The scaling of lateral transistors will soon reach its limit because it relies mainly on the scaling of the gate length, S/D spacers and contacts. Reducing any of these dimensions is undesirable, as it leads to poorer electrostatic control, increased parasitic capacitance and increased access resistance, respectively. Vertical transistors are less constrained in gate length and spacer width because they are oriented vertically, and thus should demonstrate better scalability. In this work, we quantify the advantages of vertical devices in terms of power, performance and area (PPA) metrics.

Research Methodology
Vertical and lateral devices were holistically benchmarked by combining the design techniques and technology limitations that are likely to be in place at the 5 nm technology node. To achieve this goal, we analyzed layouts, modeled and evaluated RC parasitics, and calibrated compact models to TCAD and experimental data. Afterwards, we ran SPICE simulations to extract the PPA metrics at the ring oscillator level. In addition to conventional MOSFETs, we also benchmarked vertical III-V heterojunction tunnel FETs.

Results & Conclusions
Various vertical devices made of nanowires and nanosheets were evaluated. It turned out that a device made of several nanowires outperforms a device with a single nanosheet as a channel when made at the same footprint. Yet these nanowires should not be too narrow, as otherwise not enough current comes out of the device.
• Lateral devices are not scalable towards the 5 nm node.
• Vertical devices, because of their smaller parasitic capacitance, bring a power-performance advantage at the 5 nm node.
• Scaling beyond the 5 nm node requires vertical devices to have a smaller active area, which results in degraded current. Therefore, the vertical architecture cannot provide sustainable scaling.
• Vertical TFETs, although they bring some advantages over vertical MOSFETs at relatively low switching speeds, suffer from the same limited scalability as regular devices.

Major publication
D. Yakimets, G. Eneman, P. Schuddinck, T. Huynh-Bao, M. Garcia Bardon, P. Raghavan, A. Veloso, N. Collaert, A. Mercha, D. Verkest, A. V.-Y. Thean, and K. De Meyer, “Vertical GAAFETs for the Ultimate CMOS Scaling”, IEEE Transactions on Electron Devices, vol. 62, no. 5, pp. 1433–1439, May 2015.

Bart Van den Bogaert
Department: Chemical Engineering
PhD defence: 27 January 2017
Supervisor: Prof. dr. ir. Tom Van Gerven
Co-supervisor: Prof. dr. Koen Binnemans
Funding: IWT
E-mail: [email protected]

Photochemical separation of europium from rare-earth mixtures in aqueous and non-aqueous solutions

Introduction / Objective
The rare-earth elements (REEs) have been identified as critical raw materials, due to their crucial role in green technology and their high supply risk on the global market. One possible solution to cope with the pending scarcity is urban mining, the recycling of rare earths from waste streams. The main issue with conventional REE separation techniques is the lack of selectivity: due to the similar physico-chemical properties of the REEs, separating REE mixtures into pure components is very challenging. Photochemical reduction of europium(III) to europium(II) and subsequent isolation of the reduced species from a REE mixture overcomes this problem.

Research Methodology
The reduction can be established by photochemical means, through a so-called charge-transfer (CT) interaction between europium(III) and the solvent or anions present in the solution. This reaction requires energy, supplied by illumination with a UV light source, for example a low-pressure mercury lamp (LPML). By carefully selecting the most suitable light source and by optimizing the chemical conditions, a very high selectivity (up to 99%) can be reached to separate binary mixtures of europium (Eu) and yttrium (Y) in a one-step process, for instance from red lamp phosphor waste streams.
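The shape of such conversion-versus-time curves can be illustrated under an assumed first-order kinetic model (this is a generic textbook assumption for illustration; the thesis reports measured curves, and the rate constant below is invented):

```python
import math

# Illustration under an assumed first-order kinetic model: fraction of
# Eu(III) remaining in solution under constant illumination,
# [Eu]/[Eu]0 = exp(-k * t). The rate constant is invented.
k = 0.2                                  # assumed rate constant (1/h)
for t in (0, 5, 10, 15, 20, 25):         # hours, matching the plot axis
    remaining = math.exp(-k * t)
    print(f"t = {t:2d} h: {100.0 * remaining:5.1f} % Eu(III) in solution")
```

Under this assumption the europium concentration in solution decays exponentially with illumination time, qualitatively matching the measured curves summarized below.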

[Figures: UV absorbance spectra (200-350 nm) of EuCl3 and EuCl3 + (NH4)2SO4 solutions; photochemical reduction of Eu with a 160 W LPML, plotted as europium in solution (%) versus illumination time (0-25 h) for pH 0-5 and for isopropanol (pH 1 and 4) and formic acid (pH 1).]

Major publication
B. Van den Bogaert, D. Havaux, K. Binnemans and T. Van Gerven; Photochemical recycling of europium from Eu/Y mixtures in red lamp phosphor waste streams, Green Chemistry, 2015, 17 (4), 2180-2187.

Wacha Bounliphone
Department: Electrical Engineering (ESAT)
PhD defence: 30 January 2017
Supervisor: Prof. dr. Matthew Blaschko
Co-supervisor: Prof. dr. Arthur Tenenhaus
Funding: KU Leuven, CentraleSupelec and Inria
E-mail: [email protected]

Statistically and computationally efficient hypothesis tests for dependency and similarity

Abstract
A variety of problems involving tests of similarity and dependence are addressed in the machine learning and statistics literature. In this thesis, we focus on the study of relative similarity, relative dependency and the concept of conditional independence. We present novel statistically and computationally efficient hypothesis tests for relative similarity and dependency, and for precision matrix estimation.

Research Methodology
The work, at the intersection of statistics and machine learning, focused on the development of nonparametric statistical tests for three applications:
- assessing the relative similarity between clouds of points;
- assessing relative dependencies between variables;
- estimating the structure of a graphical model for more general distributions.
The key methodology adopted in this thesis is the class of U-statistic estimators, which yield minimum-variance unbiased estimates of a parameter. We make use of the asymptotic distributions and strong consistency of U-statistic estimators to develop novel nonparametric statistical hypothesis tests. Similarity and dependency are measured by the maximum mean discrepancy (MMD) and the Hilbert-Schmidt Independence Criterion (HSIC), a distance and a dependency measure, respectively, obtained by representing distributions as vectors in a reproducing kernel Hilbert space (RKHS).
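The unbiased U-statistic estimator of the squared MMD, one of the quantities such tests are built on, can be sketched as follows (the Gaussian kernel, its width and the synthetic samples are illustrative choices, not those of the thesis):

```python
import numpy as np

# Sketch of the unbiased (U-statistic) estimator of the squared maximum
# mean discrepancy (MMD) with a Gaussian kernel; samples are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (200, 2))     # sample from P
Y = rng.normal(0.5, 1.0, (200, 2))     # sample from Q (shifted mean)

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # U-statistic: exclude the diagonal of the within-sample kernel sums
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

print(f"MMD^2 estimate (P vs Q): {mmd2_unbiased(X, Y):.4f}")
```

Excluding the diagonal terms is what makes the estimator unbiased, and its asymptotic distribution is what the relative-similarity tests above build upon.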

Major publications W. Bounliphone, et al. A low variance consistent test of relative dependency. International Conference on Machine Learning (ICML), 2015. W. Bounliphone, et al. A Test of Relative Similarity for Model Selection in Generative Models. International Conference on Learning Representations (ICLR), 2016.


Huayue Shi Department

Materials Engineering (MTM)

PhD defence

30 January 2017

Supervisor

Prof. dr. ir. Bart Blanpain

Co-supervisor

dr. Muxing Guo

Funding

CSC, IWT

E-mail

[email protected]

Characterization and modification of the secondary copper smelting slag for smooth operation and slag valorization Introduction / Objective Secondary copper is an important part of copper production. During the secondary copper smelting process, ZnO-containing fayalite slags are formed. Solid-phase precipitates are observed in these slags, which result in high viscosity. This may hinder smooth operation and pose a potential safety hazard. In order to operate the smelter safely and profitably, the viscosity of the slag must be carefully controlled. The main objectives of this thesis are to identify the slag phase relations under smelting conditions, to control the slag viscosity by modifying the slag chemistry and operating temperature, and to tailor a proper slag microstructure by adjusting the cooling path of the slag.

Research Methodology Various experimental set-ups were designed and carried out:
• Slags were observed in-situ with a CSLM equipped with a hot stage.
• The phase relations were studied via equilibration experiments in a tube furnace.
• Viscosity was measured with a rotating-spindle rheometer.
• Solidification was studied with both lab tests and software simulation.

Results & Conclusions Major results and suggestions for the industry are:
• Fayalite and spinel are the major solid precipitates in the secondary copper smelting slags.
• Fayalite precipitation can be eliminated by controlling the FeO content; at high ZnO contents, the fayalite amount is more sensitive to FeO.
• The fully molten slags have low viscosity ( 25 s - S.S.I. OK

Stijn Clijsters Department

Mechanical Engineering

PhD defence

07 March 2017

Supervisor

Prof. dr. ir. Jean-Pierre Kruth

E-mail

[email protected]

Development of a Smart Selective Laser Melting Process Introduction / Objective The overall goal of this research is to add intelligence to the Selective Laser Melting machine: to create a smart machine that can produce parts with optimal settings and a limited amount of operator interaction, and that delivers a quality report of the produced job indicating where possible defects are located.

Research Methodology To achieve this objective, machine hardware and software were adapted to add intelligence at different levels of the Selective Laser Melting process:
• Job Preparation: detecting critical zones a priori and creating an optimized scan strategy
• Job Execution: using optimal parameters for the detected critical zones; logging melt pool signals during the process
• Job Evaluation: transferring the logged data into a map that represents quality

Results & Conclusions The developed software and optimized parameters allow the detection and production of horizontal overhangs and thin structures down to 90 µm. This proof-of-concept illustrates that with intelligent job preparation the limits of the process can be shifted. Combining this intelligent job preparation with monitoring makes it possible to visualize the melt pool signals in a map, which can be correlated to pores and surface roughness.

Major publication Clijsters, S., Craeghs, T., Buls, S., Kempen, K., Kruth, J. (2014). In-situ Quality Control of the Selective Laser Melting Process using a High Speed, Real-Time Melt Pool Monitoring System. International Journal of Advanced Manufacturing Technology

An-Heleen Deconinck Department

Civil Engineering

PhD defence

07 March 2017

Supervisor

Prof. dr. ir. arch. Staf Roels

Co-supervisor

-

Funding

IWT

E-mail

[email protected]

Reliable thermal resistance estimation of building components from on-site measurements Introduction / Objective Current performance levels of building components typically relate to a theoretical performance: labels are determined based on the thermal properties of a component's material layers. These properties are theoretical values obtained from standards and product information and do not account for the way these materials are applied in the construction. Hence, the actual and labelled thermal performance of building components may deviate significantly from each other. In order to have a better view on their actual as-built thermal performance, characterisation from on-site measurements is required.

Research Methodology On-site measurements of building components typically consist of exterior and interior surface temperatures and the interior heat flux (Tse, Tsi and qhfm, respectively). The characterisation methods that can be used to analyse this set of measurements fall into semi-stationary methods, notably the commonly used and standardised average method and its extension (ISO 9869), and dynamic methods, notably the more established techniques of Anderlind and ARX modelling and the innovative technique of stochastic grey-box modelling. The main contributions of this work are twofold: first, the originality lies in the in-depth study of stochastic grey-box modelling for thermal resistance estimation; secondly, added value is provided by the comprehensive and systematic comparison of the different characterisation methods' performance.
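The standardised average method mentioned above reduces to a one-line computation: the R-value estimate is the accumulated surface temperature difference divided by the accumulated heat flux. A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def r_value_average_method(T_si, T_se, q_hfm):
    """ISO 9869 'average method': estimate the thermal resistance as the
    accumulated surface temperature difference divided by the accumulated
    heat flux.  Units: temperatures in degC, heat flux in W/m^2, R in m^2K/W."""
    T_si, T_se, q_hfm = (np.asarray(x, dtype=float) for x in (T_si, T_se, q_hfm))
    return np.sum(T_si - T_se) / np.sum(q_hfm)
```

In practice the standard additionally imposes convergence criteria on the running estimate (and, as the thesis notes, sufficient temperature difference, which is why the method is often limited to winter measurements).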

[Figure: wall cross-section with exterior (EXT) and interior (INT) sides, indicating the measured signals Tse, Tsi and qhfm]

Results & Conclusions The comparative results demonstrate that semi-stationary methods are easy-to-use, but that their applicability is often limited to winter measurements. The considered dynamic methods, by contrast, prove to be more complex in use, but offer a versatile applicability from both winter and summer measurements. A thorough examination of stochastic grey-box modelling demonstrates that the typical set of on-site measurements is often not dynamically informative enough to characterise fully identifiable grey-box models. Although this has no consequence for estimating the total thermal resistance of building components, it neutralises many of the method's advanced (statistical) properties. Yet, stochastic grey-box models prove to be relevant for the characterisation of time variant performance indicators, quantifying for instance cavity walls liable to rotational air looping around their insulation layer.

R-values estimated by the average method (grey) and by ARX- and stochastic grey-box modelling (black) from different sets of on-site measurement data of an insulated cavity wall.

Major publication A.-H. Deconinck, S. Roels (2017). Comparison of characterisation methods determining the thermal resistance from on-site measurements. Energy and Buildings 130:309-320. DOI:10.1016/j.enbuild.2016.08.061

Fatih Gey Department

Computer Science

PhD defence

10 March 2017

Supervisor

Prof. dr. ir. Wouter Joosen

Co-supervisor

Dr. Eddy Truyen

E-mail

[email protected]

Middleware for Customizable Evolution of SaaS Applications Introduction In the Cloud Computing paradigm, Software-as-a-Service (SaaS) is a delivery model that allows small-to-mid-sized organizations (called tenants) to outsource the operation of their business applications to a SaaS provider, and consume the (software) service on-demand and remotely via the Internet. For tenants, key requirements to that service are (i) cost efficiency and (ii) reliable service quality. Moreover, (iii.a) business applications are often subject to fast-changing requirements, and (iii.b) tenants appreciate being able to benefit from trending technologies. In addressing those requirements, SaaS providers rely on sharing resources among tenants (called multi tenancy), typically up to the application level, and operate the application at large scale.

Problem The SaaS delivery model, while providing many benefits additional to cost efficiency, inherently limits control by a tenant. As a result, its success is bound to establishing and maintaining tenants' trust in the SaaS application's operation. To that end, evolving multi-tenant SaaS applications are insufficiently supported: for some upgrade scenarios, either service quality or cost efficiency can be maintained during enactment, but not both. In practice, the necessary trade-offs are decided on a per-upgrade basis and affect all tenants of the SaaS application equally and simultaneously.

Approach In this dissertation, we present a middleware and a complementary software (upgrade) development process to (i) efficiently support customization of tenant-perceived behaviour during an upgrade enactment, called transition behaviour. Moreover, our middleware (ii) improves on the service degradation imposed on tenants, and (iii) supports efficient operation of tenant-specific generations of the application that result from tenants delaying or rejecting an upgrade.

Results & Conclusions We have evaluated a prototype of our middleware (built on top of OSGi), relying on a case study of an industrial multi-tenant SaaS provider. Our evaluation shows that customizable transition behaviours can satisfy a spectrum of tenant requirements within a shared application, reducing the impact of an upgrade enactment on their business to an acceptable level. Moreover, we show that our middleware supports multi-tenancy across generations, rendering the operation of tenant-specific generations nearly cost-neutral, and we discuss the deployment and effects of our middleware at large scale.

More generally, our approach successfully addresses the prominently perceived lack of control in Cloud Computing, paving the way for trust in remotely consumed software services that support quick time-to-market.


Mehmet Ali Recai Onal Department

Materials Engineering (MTM)

PhD defence

10 March 2017

Supervisor

Prof. dr. ir. Bart Blanpain

Co-supervisor

Prof. dr. ir. Tom Van Gerven

Funding

Marie Curie

E-mail

[email protected]

Recycling of NdFeB magnets for rare earth elements (REE) recovery Introduction / Objective NdFeB magnets currently dominate the magnet market and contain 25-35 wt.% REE (Nd, Dy, Pr, Gd and Tb) and ca. 1 wt.% B, with the rest being Fe and other minor exogenous elements (Co, Al, Ga, Nb, etc.). The on-going monopoly of China on REE production and the continuously increasing demand for REE impose serious supply risks for Nd, Dy and Tb. Holistic hydrometallurgical and combined hydro- and pyrometallurgical flow sheets were developed within this thesis for the effective recycling of REE from these magnets.

Research Methodology In sulfation, milled powder was mixed with concentrated H2SO4, dried at 110 °C, roasted at 650-900 °C and leached in water. In nitration, the powder was mixed with concentrated HNO3, calcined at 150-600 °C and leached in water. In the complete leaching study, the powder was dissolved in dilute H2SO4. Fe was oxidized by MnO2 and precipitated by Ca(OH)2 or MnO prior to electrolysis for Mn-Co removal. In alternative steps, REE and Co were separated from Mn by oxalate and sulfide precipitation. In the last work, bulk magnets were subjected to five hydrogenation treatments and studied for their oxidation behaviour and microstructural changes, which are crucial for subsequent acid leaching.

Results & Conclusions
1) Both the sulfation and nitration studies resulted in >95 % REE recovery and 90 % purity. The nitration study was more advantageous due to:
• Less particle size reduction (up to 1000 µm).
• Faster acid mixing kinetics without drying (1 h at 25 °C).
• Lower temperature requirement (200 °C for 1-2 h).
• Higher solubility of REE nitrates compared to REE sulfates.
• Theoretically easier recyclability of the majority of the consumed acid (e.g. by condensation).
2) The complete leaching study was successful up to the Fe removal stage, but direct electrolysis was problematic due to incomplete Mn and Co removal and undesired REE losses. Alternative steps were successful but enlarged and complicated the flow sheet.
3) Among all hydrogenation treatments, hydrogen decrepitation was the most desirable due to simpler processing and faster oxidation kinetics. However, complex REE-Fe oxide formation was more extensive, requiring close control.

Effect of a) calcination temperature and b) duration on extraction % of metals (nitration study)

Major publication Önal MAR, Aktan E, Borra CR, Blanpain B, Van Gerven T, Guo M. Recycling of NdFeB magnets using nitration, calcination and water leaching for REE recovery. Hydrometallurgy 2017;167:115–123.


Lin Zhang Department

Mechanical Engineering

PhD defence

14 March 2017

Supervisor

Prof. dr. ir. Herman Bruyninckx

Co-supervisor

Prof. dr. ir. Peter Slaets

Funding

KU Leuven, FP7 and H2020

E-mail

[email protected]

Composable and Embeddable Software Design for Robotic and Cyber-Physical Systems Introduction / Objective System-of-systems is a multidisciplinary area that involves system integration as a key to address complex tasks or problems, usually by means of composing multiple independently controlled systems together as part of a large application that often exists only temporarily. The focus of this work is the modelling methodology and software design with embedded features for cyber-physical systems, a class of system-of-systems in which computers and networks are organized to monitor and control physical processes.

Research Methodology
• System design phases and patterns: four design phases and several effective design patterns are suggested and advocated for the system design process.
• Cyber-Physical Stack (CPS) meta model: a concrete set of levels of abstraction to describe and design cyber-physical systems.
• Meta model formalization: formalizing the CPS meta model so that it can be interpreted and understood by computers.
• Life-Cycle State Machine: facilitating the coordination and configuration of activities and behaviours, realized by a composable Finite State Machine (cFSM) meta model.
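The idea of composability behind the cFSM meta model can be pictured with a toy example: two independent state machines composed in parallel, where the composite state is the tuple of component states and events are broadcast to every component. All names here are hypothetical illustrations, not the thesis's CES implementation:

```python
class FSM:
    """Minimal finite state machine: `transitions` maps (state, event) -> next state."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def dispatch(self, event):
        # Events with no matching transition are simply ignored.
        self.state = self.transitions.get((self.state, event), self.state)


class ParallelComposition:
    """Compose machines side by side: the composite state is the tuple of
    component states, and every event is broadcast to all components."""
    def __init__(self, *machines):
        self.machines = machines

    def dispatch(self, event):
        for m in self.machines:
            m.dispatch(event)

    @property
    def state(self):
        return tuple(m.state for m in self.machines)


# A life-cycle machine composed with an independent monitoring machine.
lifecycle = FSM("created", {("created", "configure"): "configured",
                            ("configured", "start"): "running"})
monitor = FSM("idle", {("idle", "start"): "logging"})
system = ParallelComposition(lifecycle, monitor)
system.dispatch("configure")   # only the life-cycle machine reacts
system.dispatch("start")       # both machines react
# system.state is now ("running", "logging")
```

The appeal of such composition is that each sub-machine stays small and reusable while the composite still has predictable behaviour, which is the property the cFSM meta model aims at for coordinating activities in a system-of-systems.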

Results & Conclusions This research presents a systematic methodology for the realization of autonomous coordination and self-reconfiguration in cyber-physical systems and system-of-systems, with concrete examples from the robotics domain. The work models systems in a well-structured way using a set of system design patterns to improve the composability, flexibility and reusability of sub-systems, and it also explains how to integrate individual systems as system-of-systems with predictable behaviours. A composable and embeddable software (CES) stack for the realization and implementation of the CPS software framework as well as the cFSM meta model was developed in this research. It has been used in system-of-systems design in the robotics research group, in student projects, and in lectures on embedded control systems.

The Cyber-Physical Stack (CPS) meta model

Major publication Zhang, L., Slaets, P., Bruyninckx, H. (2014). An Open Embedded Industrial Robot Hardware and Software Architecture Applied to Position Control and Visual Servoing Application. International Journal of Mechatronics and Automation, 2014, 4(1), 63-72. Zhang, L., Bruyninckx, H. (2016). A Novel Approach to the Design of System-of-Systems Using Composable Models and Patterns. International Journal of Electrical Engineering Education. (submitted)

Ward Melis Department

Computer Science

PhD defence

23 March 2017

Supervisor

Prof. dr. ir. Giovanni Samaey

E-mail

[email protected]

Projective integration for hyperbolic conservation laws and multiscale kinetic equations Introduction / Objective Most applications in science and engineering exhibit a multiscale structure, in which smaller components interact on short time and length scales, and produce observable behavior on a much larger scale that is of practical interest. On the latter scale, various macroscopic models have been devised that capture the effective behavior without resolving the fine-scale dynamics. Frequently, however, the microscopic dynamics cannot (or should not) be thrown away and a full multiscale depiction of the problem is in order, requiring efficient multiscale simulation routines, which is the main topic of this thesis.

Research Methodology We confine ourselves to multiscale in time only. To efficiently deal with problems containing multiple time scales, we employ methods from the equation-free community. For two-scale problems with a large separation between the fast and slow scale, we use the projective integration method, which consists of an outer integrator wrapped around an inner integrator (see also Fig. 1). The purpose of the inner integrator is to take a number of small time steps until all transients are sufficiently suppressed, driving the system towards equilibrium. Then, the outer integrator performs an extrapolation forward into time over a much larger time step and the process is repeated. A generalization of the above idea that is able to more efficiently handle problems with multiple time scales is telescopic projective integration. In this method, the outer integrator time steps of projective integration are seen as inner integrator steps inside an outer integrator one level higher. By repeating this, a hierarchy of projective integrators is constructed that each capture dynamics of the problem on a separate time scale.
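The inner/outer structure of projective integration can be sketched for a toy two-scale linear system in which a fast variable u relaxes (at rate 1/eps) onto a slowly decaying variable v. This is a generic illustration with illustrative names and a hypothetical test problem, not the thesis's solver:

```python
import numpy as np

def rhs(state, eps):
    """Toy two-scale system: u relaxes to v at rate 1/eps; v decays slowly."""
    u, v = state
    return np.array([-(u - v) / eps, -v])

def projective_euler(state, eps, dt_inner, K, Dt, T):
    """Projective forward Euler: K small inner Euler steps damp the fast
    transients, then one extrapolation covers the rest of the outer step Dt."""
    t = 0.0
    while t < T - 1e-12:
        for _ in range(K):                           # inner integrator
            prev = state
            state = state + dt_inner * rhs(state, eps)
        slope = (state - prev) / dt_inner            # estimate of the slow derivative
        state = state + (Dt - K * dt_inner) * slope  # outer extrapolation
        t += Dt
    return state

# Fast scale eps = 1e-3, outer step Dt = 0.1: each outer step costs only
# K = 10 inner steps instead of the ~200 a plain explicit scheme at the same
# inner resolution would need.
u, v = projective_euler(np.array([0.0, 1.0]), eps=1e-3,
                        dt_inner=5e-4, K=10, Dt=0.1, T=1.0)
# v should be close to the exact slow solution exp(-1), with u relaxed onto v.
```

Telescopic projective integration wraps exactly this construction in further outer layers, one per time scale.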

Results & Conclusions We applied the above projective methods to three main multiscale problems:
• Kinetic relaxation approximations of hyperbolic conservation laws, in which a two-scale kinetic equation is solved instead of the original hyperbolic problem, since the former is easier to treat numerically. This resulted in an explicit, general and flexible solver of arbitrary order of accuracy in time for systems of nonlinear hyperbolic conservation laws.
• The evolution of large particle systems with dynamics taking place on two or multiple time scales, described by both the BGK kinetic equation and the more general Boltzmann equation (see Fig. 2). The method is again fully explicit and allows for an arbitrary order of accuracy in time. However, the method parameters (and thus its cost) are not independent of the problem's smallest time scale; the dependence is only logarithmic.
• Two-scale (slow-fast) stochastic systems, for which we want to simulate the unavailable (but known to exist) reduced model in terms of the slow variables only. Here, this model is estimated using a Markov chain Monte Carlo estimator, which may lead to a large statistical error, for which we developed a control variates variance reduction technique.

Fig. 1: Conceptual sketch of projective integration. Fig. 2: Simulation of the Boltzmann equation using a level-3 telescopic integration method.

Major publication P. Lafitte, W. Melis and G. Samaey (2017), A high-order relaxation method with projective integration for solving nonlinear systems of hyperbolic conservation laws, Journal of Computational Physics, accepted on March 2017.


Wim Pessemier Department

Electrical Engineering (ESAT)

PhD defence

27 March 2017

Supervisor

Prof. dr. ir. Geert Deconinck

Co-supervisor

Prof. dr. Hans Van Winckel, and Ing. Philippe Saey

Funding

-

E-mail

[email protected]

Knowledge-driven development of telescope control systems Introduction / Objective As a new generation of Extremely Large Telescopes (ELTs) will see first light in the next decade, the growth in size and complexity of telescope control systems presents new challenges to current practices in systems engineering and software engineering. Current practices in those engineering disciplines are based on “informal” modeling: they use modeling languages that are informally specified (i.e. using English text), that are graphically represented (i.e. using symbols and visual aids meant for human communication), and that are applied in a too restrictive way (i.e. too much focused on traditional object-oriented design). In our thesis we aim to improve these current practices, because they hamper the reusability of all the information needed to design a telescope control system, and therefore they lead to less consistency, traceability, verifiability and evolvability of the design.

Research Methodology To increase the reusability of design information, we need a fundamental methodology change: we must formalize our development process, so that knowledge is produced instead of information (as is currently the case). Knowledge not only consists of information (i.e. interconnected data or “models” about the electrical systems, the software systems, the system requirements, ...) but also explains how to interpret this information in a formal, machine-readable way. To achieve this, we needed to:
• develop ontologies: explicit and formal descriptions of the concepts in our domains of interest;
• develop a new textual modeling language that allows us to describe a telescope, using the concepts that have been formally specified by the ontologies;
• develop a tool that is able to verify those telescope models against the ontologies, and that is able to produce documents and source code to develop the actual telescope control systems.
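The verification step can be pictured with a toy example: a miniature "ontology" declaring which properties each concept requires, and a checker that validates a model against it. All names and structures here are hypothetical; the actual framework uses formal ontologies and a dedicated modeling language, not Python dictionaries:

```python
# A toy "ontology": each concept declares the properties an instance must have
# and the concept each property's value must itself conform to (None = literal).
ONTOLOGY = {
    "Telescope": {"name": None, "mount": "Mount"},
    "Mount":     {"type": None, "max_speed_deg_per_s": None},
}

def verify(instance, concept, ontology=ONTOLOGY):
    """Return a list of violations of `instance` against `concept` (empty = valid)."""
    errors = []
    for prop, sub_concept in ontology[concept].items():
        if prop not in instance:
            errors.append(f"{concept} is missing property '{prop}'")
        elif sub_concept is not None:
            # Recurse: the property's value must conform to its own concept.
            errors.extend(verify(instance[prop], sub_concept, ontology))
    return errors

# An incomplete model is rejected with a precise, machine-generated diagnosis.
model = {"name": "Mercator", "mount": {"type": "alt-azimuth"}}
violations = verify(model, "Telescope")
```

The point of formalization is exactly this: because the concepts are machine-readable, a tool can check a telescope model mechanically and then generate documents and source code from the same verified description.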

The Mercator Telescope, at the island of La Palma, to which our methodology change was applied.

Results & Conclusions By developing the needed ontologies, modeling languages and tools, and by applying everything to a real operational telescope, we have demonstrated that our proposed methodology change is feasible, using today’s technologies. Three advantages (formal verification, information reuse, and tool independence) make our framework stand out compared to existing traditional tools. Not only have we proven the feasibility of the concept, but we have also discovered many lessons to be learned, developed several open-source software packages, and produced a new telescope control system for the Mercator Telescope.

The new control system of the Mercator Telescope, developed using a knowledge-driven methodology.

Major publication W. Pessemier et al. (2016). Knowledge-based engineering of a PLC controlled telescope. SPIE Vol. 9913, 991343.


Andreas Put Department

Computer Science

PhD defence

04 April 2017

Supervisor

Prof. dr. ir. Bart De Decker

Funding

IWT-SBO “MobCom”, iMinds

E-mail

[email protected]

Anonymous Credentials in Practice Realizing Anonymous Applications and Services Introduction / Objective With security and privacy becoming key concerns in the modern online environment, it is essential that appropriate measures are strongly incorporated in applications. Unfortunately, it is not trivial to engineer systems that offer sufficient security. Privacy, on the other hand, is often neglected when engineering software systems and is frequently introduced only as an afterthought. Therefore, we propose two approaches to facilitate the realization of anonymous applications and services by bringing anonymous credentials into practice: tools to aid developers in integrating security and privacy solutions in their applications, and three applications, built with these tools, that show that practical privacy can be achieved in current applications and services.

Research Methodology Priman is a flexible, privacy-enhancing development framework that offers security technologies and PETs as simple, technology-agnostic components. The areas that Priman focuses on are credential-based authentication, connection security and data security. To offer a simple and easy-to-understand interface to developers, the abstractions made by Priman shift the technology-specific configuration details from the application code to configuration policies. This allows developers to focus on application features, while experts configure the security technologies independently from the application's code.

Results & Conclusions Tools for building secure and privacy-friendly apps:
• Priman framework and development strategy
• A certification process for attribute-based credentials
Privacy-friendly applications and protocols:
• inShopnito: a privacy-friendly mobile shopping app
• avisPoll: a secure anonymous Internet poll system
• PACCo: privacy-friendly access control with context

Major publication Put, Andreas, Italo Dacosta, Milica Milutinovic and Bart De Decker. "Priman: Facilitating the development of secure and privacy-preserving applications." IFIP International Information Security Conference. Springer Berlin Heidelberg, 2014. Put, A., Dacosta, I., Milutinovic, M., De Decker, B., Seys, S., Boukayoua, F., Martens, L. (2014, June). inShopnito: An advanced yet privacy-friendly mobile shopping application. In 2014 IEEE World Congress on Services (pp. 129-136). Put, Andreas and Bart De Decker. "PACCo: Privacy-friendly Access Control with Context" SECRYPT. 2016.

Jaime Alberto Mosquera Sánchez Department

Mechanical Engineering

PhD defence

05 April 2017

Supervisor

Prof. Dr. Leopoldo Pisanelli R. de Oliveira (USP)

Co-supervisor

Prof. dr. ir. Wim Desmet (KU Leuven)

Funding

São Paulo Research Foundation

E-mail

[email protected]; [email protected]

Sound quality-driven active control of periodic disturbances for hybrid vehicles Introduction / Objective The integration of the electric motor into the powertrain of hybrid electric vehicles (HEVs) presents acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the bandwidth of this sort of noise, poses new challenges to current noise, vibration and harshness (NVH) approaches. The objective of this research was to design, simulate and implement strategies for enhancing the auditory perception of periodic disturbances, based on multichannel active noise equalizer systems guided by sound quality criteria.

Research Methodology The proposed method relies on the extensive use of multichannel active amplitude and/or relative-phase based sound profiling systems, capable of independently enhancing the sound quality (SQ) of harmonic disturbances at a number of sensor locations, while counterbalancing for cross-channel interferences due to acoustic coupling. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent multi-objective optimization analysis that drives the control design method.
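A heavily simplified single-channel, single-harmonic sketch of the amplitude-shaping idea: two adaptive weights on quadrature references form a secondary tone, and an LMS-style update drives the residual towards a scaled copy of the disturbance rather than towards zero. All names (`narrowband_lms`, `beta`) are illustrative; the sketch assumes the uncontrolled tone d is directly measurable and ignores secondary-path dynamics, cross-channel coupling and relative-phase control, all of which the thesis's multichannel controller handles:

```python
import numpy as np

def narrowband_lms(d, omega, mu=0.01, beta=0.0):
    """Adapt a single-harmonic controller so the residual tends to beta*d:
    beta=0 cancels the tone, beta=0.5 halves it, beta>1 amplifies it."""
    n = np.arange(len(d))
    refs = np.stack([np.sin(omega * n), np.cos(omega * n)])  # quadrature references
    w = np.zeros(2)                                          # in-phase / quadrature weights
    e = np.empty(len(d))
    for k in range(len(d)):
        y = w @ refs[:, k]              # secondary (anti-)tone
        e[k] = d[k] - y                 # residual at the error sensor
        w += mu * (e[k] - beta * d[k]) * refs[:, k]  # LMS step towards e = beta*d
    return e, w
```

With beta = 0 this reduces to the classical LMS adaptive notch; a per-harmonic amplitude (and relative-phase) target is what turns cancellation into active sound profiling.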

Results & Conclusions The proposed method is verified experimentally, with realistic stationary hybrid electric powertrain noise, showing SQ improvement for multiple locations within a scaled vehicle mock-up. The results show success rates in excess of 90%, which indicate the proposed method is promising, not only for the enhancement of the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.

Multi-objective Sound Quality optimization

Multichannel amp./rel.-phase active sound profiling

Profiled sound at each location

Enhanced SQ for each listener

Major publications J. A. Mosquera-Sánchez, W. Desmet, L. P. R. de Oliveira (2017). A multichannel amplitude and relative-phase controller for active sound quality control. Mechanical Systems and Signal Processing, 88, 145 - 165. J. A. Mosquera-Sánchez, W. Desmet, L. P. R. de Oliveira (2017). Multichannel feedforward control schemes with coupling compensation for active sound profiling. Journal of Sound and Vibration, 396C, 1 - 29. J. A. Mosquera-Sánchez, L. P. R. de Oliveira (2014). A multi-harmonic amplitude and relative-phase controller for active sound quality control. Mechanical Systems and Signal Processing, 45 (2), 542 - 562.

Zaker Hossein Firouzeh Department

Electrical Engineering (ESAT)

PhD defence

18 April 2017

Supervisor

Prof. dr. ir. Guy A. E. Vandenbosch

Co-supervisors (in participation with Amirkabir University of Technology, Tehran, Iran)

Prof. R. Moini, Prof. S.H.H. Sadeghi

Analysis of a Horizontal Thin-Wire Antenna in the Vicinity of a Lossy Ground, Using Complex-Time Green’s Functions Introduction / Objective Transient analysis of objects residing above or inside a dielectric half-space is of great importance due to recent applications that require analysis of wideband phenomena and fast EM analysis in the time domain. Concrete examples are the problem of antennas operating in the vicinity of an earth interface, the study of lightning effects, and the detection of buried land mines and unexploded ordnance by ground-penetrating radar (GPR). This research focuses on the analysis of a horizontal thin-wire structure in the vicinity of a lossy ground, using complex-time Green’s functions.

Research Methodology The closed-form Green’s functions of a stratified medium are first calculated by the transmission line method in the spectral domain. Then, these spectral Green’s functions are approximated by a series of 2D complex exponentials in terms of the frequency and the z-directed wave number. Afterwards, they can be transformed to the space and time domains using Sommerfeld’s identity and an inverse Fourier transform. As a result, these closed-form time-domain Green’s functions are valid over a wide range of frequencies. This study introduces a new concept, the “complex time method”, to obtain time-domain Green’s functions of the stratified medium for use in the time-domain mixed-potential integral equation (TD-MPIE) formulation of the problem. An accurate and efficient time-domain method of moments (TD-MoM) is developed for solving the integral equation to obtain the space-time current distribution on the antenna/scatterer.
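The core idea of approximating a sampled response by a sum of complex exponentials can be illustrated in 1D with the classical Prony method (the thesis works with 2D exponentials in frequency and wave number, so this is only a generic sketch with illustrative names):

```python
import numpy as np

def prony(f, dt, p):
    """Fit uniformly sampled data f[n] ~ sum_k a_k * exp(s_k * n * dt)
    with p exponentials, using the classical Prony method."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    # 1) Linear prediction: f[n+p] = -(c[0]*f[n] + ... + c[p-1]*f[n+p-1])
    A = np.column_stack([f[i:N - p + i] for i in range(p)])
    c, *_ = np.linalg.lstsq(A, -f[p:], rcond=None)
    # 2) Roots of z**p + c[p-1]*z**(p-1) + ... + c[0] are the discrete poles
    z = np.roots(np.concatenate(([1.0], c[::-1])))
    s = np.log(z) / dt                       # continuous-time exponents s_k
    # 3) Amplitudes a_k from a Vandermonde least-squares fit
    V = np.power.outer(z, np.arange(N)).T    # V[n, k] = z_k**n
    a, *_ = np.linalg.lstsq(V, f.astype(complex), rcond=None)
    return a, s
```

Once a response is in this exponential form, transforming it between domains becomes analytic, which is what makes closed-form time-domain Green’s functions tractable.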

Results & Conclusions
1. An efficient, robust technique is proposed, based on the expansion wave concept, to calculate the Sommerfeld integrals arising for an imperfect ground.
2. A new approach is found to compute new closed-form Green's functions of a layered medium using complex images theory.
3. The complex-time Green's functions of a horizontal electric dipole above/inside the ground are calculated and used in a novel TD-MPIE formulation of the problem. An accurate and fast-converging TD-MoM is developed for solving the TD-MPIE. It is shown that the accuracy, stability and computation time can be improved using causal and non-causal temporal basis functions such as Lagrange and B-spline functions, respectively. Meanwhile, the spatial integrals can be computed significantly faster by using Gaussian quadrature.

Figure: Antenna excited in the center. εr = 9, σ = 0.1 S/m, z0 = 250 mm

Major publication 1. Z.H. Firouzeh, G.A.E. Vandenbosch, R. Moini, S.H.H. Sadeghi, R. Faraji-Dana (2010). Efficient evaluation of Green’s functions for lossy half-space problems. Progress In Electromagnetics Research (PIER), 109, 139-157, 2010. 2. M. Ghaffari-Miab, Z.H. Firouzeh, R. Faraji-Dana, R. Moini, S.H.H. Sadeghi, G.A.E. Vandenbosch (2012). Time-domain MoM for the analysis of thin-wire structures above half-space media using complex-time Green's functions and band-limited quadratic B-spline temporal basis function. Engineering Analysis with Boundary Elements, 36, 1116-1124.

Catarina De Brito Carvalho Department

Electrical Engineering (ESAT)

PhD defence

19 April 2017

Supervisor

Prof. dr. ir. Paul Suetens

Co-supervisor

Prof. dr. ir. Lennart Scheys

Funding

iMinds

E-mail

[email protected]

2D and 3D high-spatial and high-temporal resolution ultrasound imaging for characterization of tendon mechanics – An image registration approach Introduction / Objective Achilles tendinopathy affects competitive and recreational athletes as well as inactive people. The etiology of tendinopathies is multifactorial, but tendon overuse is assumed to be one of the main pathological stimuli. Recently published studies have demonstrated the need for a better characterization of the tendon, more specifically of the local tendon mechanics. This work investigates the implementation of a 2D high-spatial and high-temporal resolution ultrasound acquisition system for the in-vivo characterization of tendon mechanics.

Research Methodology This work was divided into a validation step and a quantification step for the biomechanical properties of the in-vivo Achilles tendon. A 2D affine image registration method was developed for the validation step, and a fine-tuned B-spline image registration method was developed to quantify the biomechanical properties of the Achilles tendon in-vivo. A second part of this work focused on the quantification of the biomechanical properties of the Achilles tendon using 3D ultrasound images. This quantification was performed using a 3D affine image registration approach.

Results & Conclusions This work brings new insight into the functional role of the Achilles tendon and its different deformation patterns, in both asymptomatic and symptomatic tendons.
• By combining the acquisition of 2D high-spatial and high-temporal resolution US images with the developed B-spline image registration method, physiologically meaningful quantifications of regional tendon strain are possible.
• A user-friendly application was developed to allow a non-technical expert to investigate the biomechanical properties of the Achilles tendon.
• A reduction of the out-of-plane strain estimation error using 3D US data was demonstrated theoretically.

Major publication C. Carvalho, S. Bogaerts, L. Scheys, J. D’hooge, K. Peers, P. Suetens, 3D tendon strain estimation on high-frequency 3D ultrasound images. A simulation and phantom study, 13th IEEE international symposium on biomedical imaging - ISBI 2016, April 13-16, 2016, Prague, Czech Republic

Job Noorman
Department: Computer Science
PhD defence: 19 April 2017
Supervisor: Prof. dr. ir. Frank Piessens
Co-supervisor: Prof. dr. Bart Jacobs
Funding: IWT
E-mail: [email protected]

Sancus: A Low-Cost Security Architecture for Distributed IoT Applications on a Shared Infrastructure Introduction / Objective With the rising popularity of the Internet of Things (IoT), the use of small, low-power embedded devices is rapidly increasing. Unfortunately, these kinds of devices often lack the security features we have grown used to in the domain of desktop and server computing. However, in a context where multiple mutually distrusting stakeholders share an IoT infrastructure to process sensitive data, the lack of, for example, basic software isolation is becoming increasingly irresponsible. Finding secure yet inexpensive ways to protect these low-end devices is therefore becoming more and more critical. This PhD investigates one avenue in this important search.

Research Methodology The first part of this thesis proposes Sancus, an inexpensive security architecture for resource-constrained IoT devices. We start by accurately defining our context: the kinds of systems we want to protect and the attacker model we use. Then, we introduce Sancus' design in enough detail for interested parties to be able to create alternative implementations. Next, our own implementation, based on the TI MSP430 architecture, is described and evaluated in terms of hardware cost and software overhead. We conclude this part by giving an overview of related work and a comparison of Sancus with the most relevant alternative architectures. In the second part, we discuss some applications of the Sancus architecture. The first application shows how to use a small number of protected Sancus modules to attest the state of a large unprotected software base. This can be used when adapting the whole software base to make use of Sancus' features is for some reason infeasible. We then show, in our second application, how Sancus can be used to provide security guarantees for distributed applications that use I/O devices. We provide a deployment and attestation technique that gives high assurance that if a distributed application produces an output, there must have been a sequence of physical input events that, when processed by the application as specified in its source code, produces the observed output event.
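The remote-attestation idea in the first application — a small protected module vouching for the state of a larger software base — can be sketched as a generic nonce-based MAC protocol. This illustrates only the concept, not Sancus' actual key hierarchy or hardware; the module key and measurement function below are hypothetical placeholders.

```python
import hashlib
import hmac
import os

# Hypothetical module key shared between verifier and protected module;
# in Sancus-like designs such keys are derived and kept in hardware.
MODULE_KEY = b"\x00" * 16

def measure(software_state: bytes) -> bytes:
    """Measurement: a hash of the (unprotected) software base being attested."""
    return hashlib.sha256(software_state).digest()

def attest(key: bytes, nonce: bytes, software_state: bytes) -> bytes:
    """Module side: MAC over the verifier's nonce and the measurement."""
    return hmac.new(key, nonce + measure(software_state), hashlib.sha256).digest()

def verify(key: bytes, nonce: bytes, expected_state: bytes, report: bytes) -> bool:
    """Verifier side: recompute the MAC and compare in constant time."""
    expected = hmac.new(key, nonce + measure(expected_state), hashlib.sha256).digest()
    return hmac.compare_digest(expected, report)

nonce = os.urandom(16)          # fresh challenge prevents replaying old reports
good = b"firmware v1.0"
report = attest(MODULE_KEY, nonce, good)
print(verify(MODULE_KEY, nonce, good, report))          # True
print(verify(MODULE_KEY, nonce, b"tampered", report))   # False
```

The fresh nonce is what turns a static hash into attestation: a stale or tampered software base cannot produce a valid report for a challenge it has never seen.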

Results & Conclusions This PhD resulted in a fully functional prototype of the Sancus architecture, which can be used in a simulator or, using FPGAs, in hardware. Next to the necessary hardware changes, we provide compiler extensions that enable developers to easily compile standard C code for Sancus. Its strong security guarantees, together with its ease of use, have made Sancus a popular platform within our research group. I hope this PhD has laid the foundation for many interesting future results.

Major publication Job Noorman, Pieter Agten, Wilfried Daniels, Raoul Strackx, Anthony Van Herrewege, Christophe Huygens, Bart Preneel, Ingrid Verbauwhede, Frank Piessens. “Sancus: Low-cost trustworthy extensible networked devices with a zero-software trusted computing base” 22nd USENIX Security symposium, 2013


Emilio Di Lorenzo
Department: Mechanical Engineering
PhD defence: 20 April 2017
Supervisor: Prof. dr. ir. Wim Desmet
Co-supervisors: Prof. dr. ir. Francesco Marulo (University of Naples), Prof. dr. ir. Bart Peeters (Siemens Industry Software nv)
E-mail: [email protected]

Operational Modal Analysis for rotating machines: challenges and solutions Introduction / Objective Operational Modal Analysis (OMA) is widely employed and has become an industrial standard technique for identifying the modal parameters (i.e. natural frequencies, damping ratios and mode shapes) of mechanical structures. Its main applications are in the automotive, aerospace and civil engineering domains, among many others. Several hypotheses need to be verified in order to apply OMA. First of all, the structure must be Linear Time Invariant (LTI), but this is not the case if several parts are moving with respect to each other. Secondly, the forces acting on the structure must be representable by white noise in the frequency range of interest. This means that all frequencies must be uniformly excited. This is often the case for wind excitation, but it is no longer valid if periodic loads due to rotating elements are acting on the system. The main scope of the dissertation is to fully understand, and propose solutions to, the challenges and limitations occurring when applying classical OMA techniques in the case of rotating machines.
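For a single, well-separated mode, the modal parameters mentioned above can be estimated from a free-decay response using the textbook logarithmic-decrement relations. This is a generic illustration of what "identifying natural frequency and damping ratio" means, not one of the OMA techniques developed in the thesis.

```python
import numpy as np

# Simulated free decay of a single mode: x(t) = exp(-zeta*wn*t) * cos(wd*t)
fn, zeta = 5.0, 0.02                 # "true" natural frequency [Hz], damping ratio
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta**2)       # damped natural frequency
t = np.linspace(0, 4, 40001)
x = np.exp(-zeta * wn * t) * np.cos(wd * t)

# Locate successive positive peaks of the decaying oscillation
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]

# Damped frequency from the mean period between consecutive peaks
fd_est = 1.0 / np.mean(np.diff(t[peaks]))

# Logarithmic decrement over m periods, then damping ratio
m = len(peaks) - 1
delta = np.log(x[peaks[0]] / x[peaks[-1]]) / m
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)

print(round(fd_est, 2), round(zeta_est, 3))   # close to the simulated 5 Hz and 0.02
```

OMA faces the harder problem of extracting these quantities from response-only data under unknown (ideally white-noise) excitation; the free-decay case above is the idealized baseline.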

Research Methodology

Results & Conclusions

The work can be divided into three main parts:
• Time-variant nature of dynamic systems: three different methods to deal with such systems are discussed: Floquet theory, Multi-Blade Coordinate transformation and the Harmonic Power Spectrum. These techniques are applied in a simulation environment using wind turbine models.
• Structural Health Monitoring considerations: the results of the first part are extended to the Structural Health Monitoring (SHM) domain. Two different SHM strategies are discussed. The first one uses the so-called whirling modes as damage indicators. The second one analyzes the effect of damage using the mode shape curvatures as damage indicators. Finally, the damage detection procedure is tested on a wind turbine blade.
• Order-Based Modal Analysis: the Order-Based Modal Analysis (OBMA) technique is introduced. This is a combination of Order Tracking and Operational Modal Analysis. Several simulations and test cases are used to underline the improvements obtained by applying OBMA instead of classical OMA in the case of rotating machines.

Major publication E. Di Lorenzo, G. Petrone, S. Manzato, B. Peeters, W. Desmet, F. Marulo (2016). Damage detection in wind turbine blades by using Operational Modal Analysis. Structural Health Monitoring, 15: 289-301.

Shailesh Kulkarni
Department: Electrical Engineering (ESAT)
PhD defence: 20 April 2017
Supervisor: Prof. dr. ir. Patrick Reynaert
Co-supervisor: Prof. dr. ir. Wim Dehaene
Funding: Catrene PANAMA project, ERC Advanced Grant (DARWIN), Infineon
E-mail: [email protected]

Design techniques for CMOS broadband circuits towards 5G wireless communication Introduction / Objective The smartphone revolution powered by 4G LTE is continuously driving the need for higher data rates. This data explosion poses a huge challenge given the limited bandwidth presently available. To alleviate this and pave the way for next-generation 5G wireless communication, millimeter-wave (mmWave) technology will be the key enabler to serve the demand for higher capacity. In this research, various broadband mmWave circuits, such as distortion-cancellation power amplifiers and high-efficiency transmitters, are explored in a CMOS process for gigabit wireless communication.

Research Methodology The operating frequency of mmWave circuits is a significant fraction of the ft of the CMOS device, which poses many challenges. During this research, these challenges were addressed at the device, circuit and architectural levels to optimize performance. The whole IC design flow is executed, including circuit simulations, EM simulations, layout and measurements. Three chips are investigated in this work to explore 5G wireless communication:
• Circuit level: a broadband distortion cancellation technique applied to a mmWave PA
• Architectural level: a mmWave outphasing transmitter, a promising efficiency-enhancement technique
• Legacy frequency band support (0.9-2.6 GHz) provided by a fully digital modulator based on RF-PWM
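The outphasing transmitter mentioned at the architectural level relies on the classic LINC decomposition: an amplitude-modulated signal is split into two constant-envelope signals, each of which can be amplified by an efficient, saturated PA. A minimal numpy sketch of that decomposition (the principle only, not the chip's implementation):

```python
import numpy as np

def outphase(signal: np.ndarray, a_max: float):
    """Split an amplitude-modulated signal into two constant-envelope components.

    s = s1 + s2 with |s1| = |s2| = a_max / 2; the amplitude information is
    carried entirely by the angle between the two branches.
    """
    amplitude = np.abs(signal)
    phase = np.angle(signal)
    # Outphasing angle: a wider angle between branches gives a smaller sum
    theta = np.arccos(np.clip(amplitude / a_max, 0.0, 1.0))
    s1 = 0.5 * a_max * np.exp(1j * (phase + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phase - theta))
    return s1, s2

# Amplitude- and phase-modulated complex baseband test signal
t = np.linspace(0, 1, 1000)
s = (0.2 + 0.7 * np.sin(2 * np.pi * 3 * t) ** 2) * np.exp(1j * 2 * np.pi * 5 * t)

s1, s2 = outphase(s, a_max=1.0)
print(np.max(np.abs(s1 + s2 - s)))              # recombination error, ~0
print(np.ptp(np.abs(s1)), np.ptp(np.abs(s2)))   # both ~0: constant envelope
```

Since s1 + s2 = a_max·cos(θ)·e^{jφ}, choosing θ = arccos(A/a_max) reconstructs the original amplitude A exactly, as long as A never exceeds a_max.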

Results & Conclusions To enable 5G wireless communication, three blocks are implemented and measured in the context of this doctoral work. These designs include (a) a 60-GHz power amplifier with broadband AM-PM distortion cancellation; (b) the first reported 60-GHz outphasing TX; (c) a multi-standard wideband OFDM RF-PWM transmitter. These realizations show that CMOS is indeed a viable technology choice for mmWave RF circuits and that, by using these high frequencies, high data rates can be achieved, which is an important requirement towards 5G wireless communication.


Major publication Kulkarni S., Reynaert P., "A 60-GHz Power Amplifier With AM-PM Distortion Cancellation in 40-nm CMOS" in IEEE Transactions on Microwave Theory and Techniques, 2016. Kulkarni S., Reynaert P., "A Push-Pull mm-Wave Power Amplifier with < 0.8° AM-PM Distortion in 40nm CMOS," in Proceedings of ISSCC, San Francisco, 2014.

Minta Thomas
Department: Electrical Engineering (ESAT)
PhD defence: 25 April 2017
Supervisor: Prof. dr. ir. Bart De Moor
E-mail: [email protected]

Deepening the methodology behind Data integration and Dimensionality reduction: Applications in Life Sciences Introduction / Objective The problems of high dimensionality and heterogeneity of data raise many challenges in computational biology and chemistry. As data sets grow in size and complexity, dimensionality reduction and advanced analytics gain importance. Over the past decade, data integration has become an active area of research in the fields of machine learning, bioinformatics and chemoinformatics.

Research Methodology
• In this work, we show the equivalence between maximum likelihood estimation via a generalized eigenvalue decomposition (MLGEVD) and generalized ridge regression. This relationship reveals an important mathematical property of the generalized eigenvalue decomposition (GEVD), in which the second data matrix acts as prior information in the model.
• Following the idea of least-squares cross-validation in kernel density estimation, we propose a new data-driven bandwidth selection criterion to tune the LS-SVM formulation of RBF-KPCA.
• Building on the benefits of LS-SVM classifiers and generalized eigenvalue/singular value decompositions, we propose a machine learning approach that provides a single mathematical framework for data integration and classification: the weighted LS-SVM classifier.
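The least-squares cross-validation idea underlying the proposed bandwidth criterion can be sketched for its simplest setting, a 1-D Gaussian kernel density estimate. The thesis applies the analogous criterion to the LS-SVM formulation of RBF-KPCA, which this toy example does not reproduce.

```python
import numpy as np

def lscv_score(x: np.ndarray, h: float) -> float:
    """Least-squares cross-validation score for a 1-D Gaussian KDE.

    LSCV(h) = integral of fhat^2  -  (2/n) * sum_i fhat_{-i}(x_i),
    an unbiased-up-to-a-constant estimate of the integrated squared error.
    """
    n = len(x)
    d = x[:, None] - x[None, :]
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    # Integral of the squared estimate via the Gaussian convolution identity
    int_f2 = phi(d, h * np.sqrt(2)).sum() / n**2
    # Leave-one-out density at each sample point
    k = phi(d, h)
    loo = (k.sum(axis=1) - k.diagonal()) / (n - 1)
    return int_f2 - 2.0 * loo.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=400)

# Pick the bandwidth minimizing the LSCV score over a grid
grid = np.linspace(0.05, 1.5, 60)
h_star = grid[np.argmin([lscv_score(x, h) for h in grid])]
print(h_star)   # for N(0,1) data, near the rule-of-thumb bandwidth ~0.3
```

The selected bandwidth is fully data-driven: no density model is assumed, only the squared-error criterion, which is the property the thesis carries over to kernel PCA.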

Results & Conclusions
• Data integration plays an important role in combining clinical and environmental data with high-throughput genomic data to identify functions of genes, proteins, and other aspects of the genome.
• Nonlinear dimensionality reduction and data integration techniques significantly improve the prediction performance in classification tasks.
• We propose a data-driven bandwidth selection criterion for KPCA that operates in an unsupervised mode.
• We propose a kernel-based mathematical framework for data integration and classification: a weighted LS-SVM classifier.
• Microarray data, which were difficult and expensive to collect, were incorporated as prior information into clinical decision-making, improving the classification performance and offering better diagnosis and prognosis.
• Incorporating literature information into microarray analysis improved the possibility of obtaining stable disease-associated genes.

Major publication

Figure: An overview of chemical descriptor formation from the connection table of compounds. PCA is applied to the connection table of each compound to define a new structural descriptor in terms of two vectors. This results in two matrices: atoms vs. compounds and bonds vs. compounds. The weighted LS-SVM framework integrates these two vectors into a single vector, named the weighted chemical descriptor, and performs further prediction.

Thomas M., De Brabanter K., Suykens J.A.K., De Moor B. (2014). Predicting breast cancer using an expression values weighted clinical classifier. BMC Bioinformatics, 15:411. Thomas M., De Brabanter K., De Moor B. (2014). New bandwidth selection criterion for Kernel PCA: Approach to dimensionality reduction and classification problems. BMC Bioinformatics, 15:137. Thomas M., Daemen A., De Moor B. (2014). Maximum likelihood estimation of GEVD: Applications in bioinformatics. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 11 (4), 673-680.

Mai Sallam
Department: Electrical Engineering (ESAT)
PhD defence: 26 April 2017
Supervisors: Prof. dr. ir. Ezzeldin A. Soliman, Prof. dr. ir. Guy A. E. Vandenbosch
Co-supervisor: Prof. dr. ir. Georges Gielen
Funding: FWO
E-mail: [email protected]

Plasmonic Waveguides and Nano-Antennas for Optical Communications Introduction / Objective Plasmonics receives great attention due to its superior capability of squeezing light beyond the diffraction limit, allowing it to bridge the gap between photonic and electronic devices. Plasmonic metals, however, are characterized by their lossy dielectric nature, which differs from that of highly conductive classical metals. These differences must be modelled correctly in order to obtain accurate results. In this work, the main goal is to develop a plasmonic Transmission Line (TL) mode solver that calculates the propagation characteristics of plasmonic transmission lines placed in layered structures. In addition, a number of novel plasmonic wire-grid nano-antenna arrays for optical communications are presented.

Research Methodology In this work, the Method of Moments (MoM) technique is used to calculate the propagation characteristics of plasmonic TLs. These TLs can have any topology and are assumed to be located within a layered structure of dielectric or metallic layers. Compared to other numerical techniques like the finite element/difference methods, the MoM is much faster and requires far fewer unknowns. The developed solver is designed to calculate the propagation and attenuation constants of all propagating modes within the plasmonic TLs, as well as their modal current profiles. The results obtained from the developed solver are compared to commercial tools like CST Microwave Studio. The second research line in this work is the design of highly directive nano-antenna arrays. First, the developed solver is used to predict the guided wavelength and losses within the array. The optimization is then carried out using CST.

Results & Conclusions The MoM-based solver is tested on a number of TLs. TLs with various topologies, numbers of metallic strips and/or surrounding layered structures are considered to verify that the solver is generic. One of these examples is a silver nano-strip TL on top of a finite-thickness SiO2 layer backed by a thick silver layer. The results show a very good agreement between the developed solver and CST. The nano-strip is used to construct various nano-antenna topologies, including a circularly polarized nano-antenna array. This antenna is constructed from two orthogonal arrays (yellow & white arrays). A gap is inserted along the inner radiators of one group and the dimensions are optimized to guarantee the circular polarization characteristics. The results show that the antenna is characterized by its high directivity, which could be further increased by adding more radiators to the array.

Major publication Mai O. Sallam, Guy A. E. Vandenbosch, Georges Gielen, Ezzeldin A. Soliman (2014). Integral equations formulation of plasmonic transmission lines. Optics Express, 22 (19), 22388-22402.

Iman Khajenasiri
Department: Electrical Engineering (ESAT)
PhD defence: 2 May 2017
Supervisor: Prof. dr. ir. Georges Gielen
Co-supervisor: Prof. dr. ir. Marian Verhelst
E-mail: [email protected]

Design and implementation of UWB transceivers for Internet of Things applications Introduction / Objective In today's world, daily life is influenced significantly by internet-enabled objects and individuals connected through the Internet of Things (IoT). The IoT has various applications in smart cities, eHealth, etc. There are, however, several bottlenecks for IoT implementations in such applications with respect to the interoperability of heterogeneous devices, the design of low-power sensor nodes and their scalability to smaller technology nodes. The design and implementation of two UWB systems to address the aforementioned problems are investigated in this research work.

Research Methodology In the first part of this thesis, we have presented an efficient smart home energy management solution to overcome the IoT interoperability problem, based on a multi-standard system. We have addressed the second issue of the implementation of a low-power wireless link through the custom chip implementation of a low-power IR-UWB transmitter in 130 nm CMOS technology. In the second part of this research, we have addressed the scalability and low-cost requirements for IoT sensor nodes. On this subject, we have designed and implemented a fully digital asynchronous pulsed UWB receiver compliant with IEEE 802.15.6, targeting eHealth applications. The fully digital receiver can easily be scaled across technology nodes.

Results & Conclusions First, in the three-layered HEMS system, the following contributions have been worked out:
• An integration layer to overcome the interoperability problem.
• A custom-designed IR-UWB transmitter in the 3-5 GHz band that consumes 39 pJ/bit (Fig. 1). Together with a UWB receiver, it has been the first example of an IoT system based on pulsed UWB technology directly interfaced with a multi-standard centralized HEMS.

Fig.1: Die photograph of the implemented digital UWB transmitter in 3-5 GHz band.

Second, in the course of the implementation of the modular asynchronous UWB receiver in 65 nm CMOS technology, the following results were achieved (Fig. 2):
• A digital RF front-end and an event-driven, asynchronous baseband.
• A programmable and PVT-tunable timing block to provide clock periods of 2 ns to 64 ns.
• The first IEEE 802.15.6 UWB receiver in which compliance with the standard is realized using an asynchronous on-chip duty-cycling block.

Fig. 2: Die photograph of the implemented fully digital UWB receiver in 65nm CMOS technology for IEEE 802.15.6 standard.

Major publication

I. Khajenasiri, P. Zhu, M. Verhelst, G. Gielen, "A Low-Energy Ultra-Wideband Internet-of-Things Radio System for Multi-Standard Smart-Home Energy Management," in IEIE Transactions on Smart Processing and Computing, pp. 354-365, 2015.

Joris Pelemans
Department: Electrical Engineering (ESAT)
PhD defence: 05 May 2017
Supervisor: Prof. dr. ir. Patrick Wambacq
Co-supervisor: Prof. dr. ir. Hugo Van hamme
Funding: EWI + IWT
E-mail: [email protected]

Efficient Language Modeling for Automatic Speech Recognition Introduction / Objective Statistical language models assign probabilities to sequences of words and are used in automatic speech recognition to decide between many acoustically plausible hypotheses. Unfortunately, most advanced language models, such as the current state-of-the-art recurrent neural networks (RNNs), are computationally complex, which limits their usage to re-evaluating the output of a speech recognizer that uses the efficient but simplistic n-gram model. In this PhD, we investigate how we can apply more advanced models with minimal computational complexity.
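How an n-gram model assigns probabilities to word sequences can be shown in a few lines. This minimal bigram model with add-one smoothing is a textbook illustration of the baseline the thesis improves upon, not one of the models developed in it.

```python
from collections import Counter

corpus = [
    "the cat sat on the mat".split(),
    "the cat ate the fish".split(),
    "the dog sat on the rug".split(),
]

# Count unigrams and bigrams, with a sentence-start token
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    tokens = ["<s>"] + sent
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab = len(unigrams)

def p_bigram(w_prev: str, w: str) -> float:
    """Add-one smoothed bigram probability P(w | w_prev)."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + vocab)

def sentence_prob(sent: list) -> float:
    """Probability of a sentence as a product of bigram probabilities."""
    tokens = ["<s>"] + sent
    p = 1.0
    for w_prev, w in zip(tokens, tokens[1:]):
        p *= p_bigram(w_prev, w)
    return p

# The model prefers word orders it has seen over unseen ones
print(sentence_prob("the cat sat".split()) > sentence_prob("cat the sat".split()))  # True
```

Because each probability depends only on the previous word, scoring is a table lookup per token, which is exactly why n-grams remain the model of choice inside the recognizer's search, and why the more powerful models studied in the thesis need efficiency tricks to compete.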

Research Methodology We propose five different techniques and compare their speed and accuracy to existing baseline models:
1. CHC: A word clustering that exploits the transparency and inherent structure with which compounds are created.
2. CSM: A semantic language model that uses the similarity between words to leverage long-distance dependencies.
3. SNM: A scalable model that is capable of mixing arbitrary features at high speed.
4. FLaVoR: A layered speech recognition architecture that enables the application of more advanced language models.
5. MTLM: An efficient language model adaptation technique that is applied to the recognition of spoken translations.

Results & Conclusions All the proposed techniques improve speed and/or accuracy:
1. CHC enables adding new compounds to the system and improves estimates of existing compounds at n-gram speed.
2. CSM outperforms existing semantic models (cache and LSA) at twice the speed of LSA.
3. SNM trained with 10-grams and skipgrams (SNM10-skip) matches a large RNN (RNN-1024), but trains 10x faster.
4. FLaVoR is a match for the standard all-in-one architecture, but is faster and has a lot of potential, e.g. an improved mismatch model (MMM).
5. MTLM is 7x faster than regular word-based adaptation and achieves more than 25% improvement with phrase-based and named entity models.

Major publication Joris Pelemans, Ciprian Chelba and Noam Shazeer (2016). Sparse Non-negative Matrix Language Modeling. Transactions of the Association for Computational Linguistics, 4, 329-342.


Andrea Isabel Gil Santos
Department: Materials Engineering (MTM)
PhD defence: 09 May 2017
Supervisor: Prof. dr. ir. Omer Van der Biest
Co-supervisor: Prof. dr. ir. Nele Moelans
Funding: Marie Sklodowska-Curie Action of the EU FP7 Programme


Phase Diagram Assessment and Alloy Characterization of Ternary Mg Rich Mg-Ca-Si and Mg-Si-Sr Alloys for Biomedical Applications Introduction / Objective Development and characterization of new magnesium-based, aluminum-free biomedical implant materials.
• The implants only stay in the human body temporarily, to fix bone fractures.
• They gradually dissolve and are resorbed by the human body without leaving any harmful traces.
• The alloying elements are selected for their good biocompatibility with bone tissue: Ca, Si and Sr.

Research Methodology
• Production: mold gravity casting under protective atmosphere (Ar + 2% SF6) of Mg-Ca-Si and Mg-Si-Sr ternary alloys.
• Phase diagram predictions: Thermo-Calc and Pandat software; comparison with experiments to evaluate thermodynamic Mg databases.
• Microstructure characterization: SEM-EDS and XRD; identification of intermetallic phases in Mg-rich alloys.
• Mechanical behavior: hardness and compression tests; correlation with microstructures.
• Degradation behavior: mass loss and potentiodynamic polarization; correlation with microstructures and impurities.

Results & Conclusions  Fit between predicted and observed Phase fields in Mg-Ca-Si system  Strong relation between intermetallics presence and mechanical properties in both alloy systems Mg-Ca-Si and Mg-Si-Sr.

 In vitro degradation performance is affected by intermetallics presence as well as impurities and alloys composition.

 A thermodynamic database for the Mg-Si-Sr ternary system was presented in this work.

Mg-Ca-Si commercially available database and the Mg-Si-Sr database developed in this work are reliable descriptions for the microstructure predictions.

Different microstructure features in Mg-Si-Sr alloys are correlated with the different degradation behavior,

Major publication A. Gil-Santos, N. Moelans, N. Hort, O. Van der Biest (2016). Identification and description of intermetallic compounds in Mg-Si-Sr cast and heat-treated alloys. Journal of Alloys and Compounds, 669, 123-133. DOI: 10.1016/j.jallcom.2016.01.221.

Jakob Fiszer
Department: Mechanical Engineering
PhD defence: 11 May 2017
Supervisor: Prof. dr. ir. Wim Desmet
Funding: FWO, IWT
E-mail: [email protected]

Advanced bearing modelling for the numerical analysis of system-level machine dynamics Introduction / Objective Rolling element bearings are among the most essential components in numerous machinery applications. If not modelled with the required level of detail, computer models are unable to attain the desired predictive power. In today's lightweight and fast-moving machines, this implies that the bearing has to be considered as an integral part of the machine's numerical model. The generally applied finite element models, however, end up with a computationally intractable number of degrees of freedom; together with the high cost of imposing the contact constraints, this limits the applicability of these methods for industrial applications. This dissertation aims to alleviate the computational burden of system-level time simulations of flexible bearing applications.

Research Methodology Methods are introduced that simultaneously tackle the problems related to, first, the dimensionality of the finite element representation used to model flexibility and, second, the large sliding or rolling contact interactions:
• a model order reduction technique capable of efficiently reducing the dimensionality of systems with time-varying load locations;
• a semi-analytic modelling strategy to eliminate the need for highly refined meshes at the contact zone;
• a B-spline representation of the interacting surfaces to alleviate issues of non-smoothness. These parametric surfaces allow for highly efficient contact formulations that are independent of the mesh size;
• the introduction of a NURBS-modes concept, which allows for an efficient redefinition of these deformed interacting surfaces without necessitating backprojections to the nodal coordinates.

Figure 1: Planetary gearbox with simulated ball bearing behaviour
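The appeal of B-spline surfaces for contact formulations is their built-in smoothness: a degree-k spline is C^{k-1} across its knots, so contact quantities vary without the slope jumps a faceted mesh introduces. A small 1-D illustration with SciPy (generic, not the thesis' NURBS-modes machinery; the control points are arbitrary):

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline: knot multiplicity k+1 at both ends
k = 3
ctrl = np.array([0.0, 0.5, 2.0, 1.5, 2.5, 2.0])   # arbitrary control values
t = np.r_[np.zeros(k + 1),
          np.linspace(0, 1, len(ctrl) - k + 1)[1:-1],
          np.ones(k + 1)]
spl = BSpline(t, ctrl, k)

u = np.linspace(0, 1, 501)
y = spl(u)

# A clamped spline interpolates its end control points ...
print(y[0], y[-1])   # ~0.0 and ~2.0
# ... and its first derivative is continuous across the interior knots
# (C^{k-1} smoothness), so the slope shows no jumps anywhere:
dy = spl.derivative()(u)
print(np.max(np.abs(np.diff(dy))) < 0.5)   # True: only small sample-to-sample changes
```

A faceted (piecewise-linear) surface would show O(1) jumps in the slope at every facet edge; the spline's smooth derivative is what keeps contact forces well-behaved as a rolling element sweeps over the surface.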

Results & Conclusions The numerical results presented over the various chapters demonstrate the performance and accuracy of the new semi-analytic strategy for modelling flexibly supported rolling element bearings, and compare the proposed time-varying reduction scheme to traditional reduction techniques. The results indicate that the proposed technique outperforms the other techniques considered, in terms of either accuracy or efficiency. Although applied here to rolling element bearings, some of the developed concepts are readily applicable to other contact applications such as gears or cam followers in transmissions.

Figure 2: Total load over the rolling elements while considering a rigid (dashed) or a flexible (solid) housing.

Major publication Fiszer, J., Tamarozzi, T., and Desmet, W. (2016). A semi-analytic strategy for the system-level modelling of flexibly supported ball bearings. Meccanica, 51 (6), 1503–1532.


Komalan Manu Perumkunnil
Department: Electrical Engineering (ESAT)
PhD defence: 11 May 2017
Supervisor: Prof. dr. ir. Francky Catthoor
Co-supervisors: Prof. dr. ir. José Ignacio Gómez Pérez (UC Madrid), Prof. dr. ir. Christian Tenllado (UC Madrid)
Funding: imec
E-mail: [email protected]

System Level Management of Hybrid Memory Systems Introduction / Objective SoC designs are dominated, in terms of performance, area and power, by memories. New non-volatile memory (NVM) technologies such as Resistive RAM, Spin Transfer Torque MRAM and Phase Change RAM are emerging due to the scaling issues of conventional embedded SRAM (subthreshold leakage, susceptibility to failure at low Vdd, etc.). Higher-level memories like the instruction memory/cache are largely dominated by read accesses and provide an interesting scenario for NVM-based memories: NVM read accesses consume less energy than SRAM read accesses and scale better with wider word accesses.

Research Methodology
ULTRA LOW POWER DOMAIN
• Targets wireless/multimedia applications with loop-dominated codes.
• Focus is on embedded instruction memories or scratchpads.
• An NVM-based solution includes: (1) a wide-word OxRAM array, (2) an L0 loop buffer and (3) a Very Wide Register (VWR).
• TARGET simulator environment.

GENERAL PURPOSE ARM-LIKE RISC MACHINES
• Traditional ARM memory organization; Gem5 simulator (ARM Cortex-A9 environment); STT-MRAM instead of OxRAM.
• Instruction cache solution: STT-MRAM instruction cache + extended MSHR (EMSHR).
• Data cache solution: STT data cache + Very Wide Buffer (VWB).

COARSE-GRAINED RE-CONFIGURABLE ARRAY
• Instruction memory of a reconfigurable platform; similar OxRAM-based solution; ADRES environment.

Results & Conclusions NVM-based higher-level memory organizations are heavily application and domain dependent. The tuning of parameters across abstraction layers is necessary. Only hybrid memory solutions are currently feasible. We have seen very promising initial results with our NVM-based system solutions:
• 85% read energy reduction at 0% performance penalty for the ultra-low-power domain.
• 65% read energy reduction at 0% performance penalty for the coarse-grained reconfigurable array platform.
• 35% energy reduction at around 1% performance penalty for the I-cache of general-purpose systems.

Major publication M. Komalan, J. I. G. Pérez, C. Tenllado, P. Raghavan, M. Hartmann & F. Catthoor, "Feasibility exploration of NVM based I-cache through MSHR enhancements," 2014 Design, Automation & Test in Europe Conference & Exhibition, pp. 1-6.

Partner University for dual PhD: UC Madrid, Spain

Cornelia Niță
Department: Mechanical Engineering
PhD defence: 12 May 2017
Supervisor: Prof. dr. ir. Johan Meyers
Co-supervisor: Prof. dr. ir. Stefan Vandewalle
Funding: OPTEC
E-mail: [email protected]

Efficient algorithms for DNS-based optimal control of turbulent flows Introduction / Objective This research aims at contributing to the field of large-scale optimization problems encountered in turbulent flows, in particular in DNS (LES)-based optimal control. The central goal is to explore the performance of a number of alternative gradient-based optimization methods in order to reduce the computational time of DNS-based optimal control problems. The development of effective optimization algorithms will increase the potential to extend the current applications towards higher Reynolds numbers and towards more complex turbulent flows.

Research Methodology We examine the efficiency of algorithms for the optimization of an internal volume force distribution, with the goal of reducing the turbulent kinetic energy or increasing the energy extraction in a turbulent wall-bounded flow. These problems are related to drag reduction in boundary layers and to energy extraction in large wind farms, respectively. The cost functional gradient with respect to the control is estimated by means of a continuous adjoint approach. Various optimization methods are explored:
• the nonlinear conjugate gradient method (NCG)
• quasi-Newton methods, e.g. L-BFGS and damped L-BFGS
• a multigrid optimization algorithm (MG/OPT)
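The single-grid optimizers listed above can be tried side by side on a small analytic test problem with SciPy. This only illustrates the NCG-versus-quasi-Newton comparison in miniature; in the thesis each "evaluation" is a full forward-plus-adjoint DNS, which is what makes the evaluation count the relevant cost metric.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Standard nonconvex test problem (Rosenbrock) in 20 dimensions; the
# evaluation count nfev plays the role of the DNS + adjoint-DNS pairs
# counted in the thesis.
x0 = np.full(20, -1.0)

results = {}
for method in ("CG", "L-BFGS-B"):   # nonlinear CG vs a quasi-Newton method
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"gtol": 1e-8, "maxiter": 10_000})
    results[method] = (res.nfev, res.fun)

for method, (nfev, fval) in results.items():
    print(f"{method:8s}  evaluations: {nfev:5d}  final cost: {fval:.2e}")
```

Quasi-Newton methods build up curvature information from past gradients, which typically cuts the number of expensive evaluations markedly compared to NCG, mirroring the factor-of-four reduction reported below for the DNS-based problems.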

Results & Conclusions The optimization methods are compared in terms of the required number of direct numerical and adjoint simulations. Results indicate that in some cases :  using the quasi-Newton methods the computational effort is reduced by a factor of four compared to the standard nonlinear conjugate gradient method

Improvement in the cost functional with equivalent number of evaluations on the finest level (cost) for: (a) turbulent kinetic energy case using single-grid methods and (b,c) energy extraction case using MG/OPT compared to single-grid algorithm.

 the MG/OPT method requires up to a factor of two fewer DNS and adjoint DNS simulations than the single-grid algorithm.

[Figure: convergence histories; legend entries NCG, L-BFGS, damped L-BFGS, MG/OPT-NCG, MG/OPT-damped L-BFGS, and MG/OPT with 2 and 3 levels; axes: cost functional versus cost.]

Major publication C. Nita, S. Vandewalle, J. Meyers, (2016) On the efficiency of gradient based optimization algorithms for DNS-based optimal control in a turbulent channel flow, Computers & Fluids 125, 11-24.


Mumin Enis Leblebici Department

Chemical Engineering

PhD defence

15 May 2017

Supervisor

Prof. dr. ir. Tom Van Gerven

Co-supervisor

Prof. dr. ir. Georgios Stefanidis

Funding

KU Leuven

E-mail

[email protected]

Design, modelling & benchmarking of photoreactors & separation processes for waste treatment and purification Introduction / Objective Photochemistry is a research field with high potential, promising important future industrial and environmental applications. Making these applications feasible in practice is the task of the photoreactor design branch of chemical engineering science. However, photoreactor design has not yet lived up to its full potential, as many promising photochemical reactions have not yet been integrated into industrial use. This work focuses on two photoreactor applications (wastewater treatment and lamp phosphor waste purification) and proposes a new methodology for designing reactors with a better chance of practical application.

Research Methodology The following steps were followed during the PhD work:
• Investigate existing photoreactor designs for waste degradation and purification.
• Develop computational models to assess the potential for intensification.
• Benchmark the reactors that have had a high impact on photoreactor design.
• Propose a new photoreactor design method.

Results & Conclusions A new computational model was developed for immobilized catalyst photoreactors (ICRs). Mass transfer was found to be the limiting mechanism for ICRs (see Fig. 1). A new mixer-phase separator for waste purification was developed; this unit could handle solvent extraction 200 times faster than standard units. A new benchmark for photoreactors, the Photochemical Space Time Yield (PSTY), was developed. This benchmark has been accepted and is now used in the literature.
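The PSTY benchmark normalizes the classical space-time yield by the light input, so that reactors with different lamps become comparable. The sketch below illustrates the idea only; the exact normalization and units are defined in the cited paper.

```python
def space_time_yield(moles_product, reactor_volume_m3, time_h):
    """STY: product output per reactor volume per unit time (mol m^-3 h^-1)."""
    return moles_product / (reactor_volume_m3 * time_h)

def photochemical_sty(sty, lamp_power_kw):
    """PSTY (illustrative form): STY further normalized by the lamp power
    driving the reaction, making differently lit reactors comparable."""
    return sty / lamp_power_kw
```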

Figure 1: Modelling result shows normalized concentration gradient in illuminated catalyst coating. More than half of photocatalyst cannot be utilized due to mass transfer limitations.

An optimization study in the waste purification application showed that the specific yield may drop as the reactor geometry grows bigger. However, there is an optimum geometry size that maximizes product throughput (see Fig. 2). This work proposes that, in order to design applicable reactors, the PSTY should be maximized first, as shown in Fig. 2; the optimized unit should then be numbered up to match the required output.

Major publication M.E. Leblebici, B. Van den Bogaert, G.D. Stefanidis, T. Van Gerven, Efficiency vs. productivity in photoreactors, a case study on photochemical separation of Eu. Chem. Eng. J. 310 (2016) 240-248.

Figure 2: Space time yield (STY) decreases (left) with increasing reactor depth. However the PSTY indicates (right) the optimum point with the highest production per reactor volume per input lamp power.

Google scholar page for complete list: https://scholar.google.be/citations?user=7E1vkfwAAAAJ&hl=en

Weiming Qiu Department

Materials Engineering (MTM)

PhD defence

15 May 2017

Supervisor

Prof. dr. ir. Ludo Froyen

Co-supervisor

Prof. dr. ir. Paul Heremans

Funding

Imec

E-mail

[email protected]

Interface Layers for Efficient Organic and Perovskite Solar Cells Introduction Organic and perovskite solar cells are promising next-generation photovoltaic technologies, due to unique properties such as light weight, flexibility and tunable color that make them suitable for various applications. In both organic and perovskite solar cells, the interface layers, i.e. the hole transport layer (HTL) and the electron transport layer (ETL), have a significant influence on the device performance. This PhD is therefore mainly focused on the development of interface layers for efficient and stable organic and perovskite solar cells.

Key results & Conclusions  Developed low-temperature solution-processed ammonium heptamolybdate (AHM) based HTL for conventional structure organic solar cells: better than benchmark PEDOT:PSS HTL (Fig. 2).

Fig.1 Typical device structure of an organic or perovskite solar cell.

 Developed low-temperature solution-processed Nafion-modified MoOx HTL for inverted structure organic solar cells: de-wetting issue solved by mixing Nafion with MoOx (Fig. 3)
 Combined interface layer development with perovskite layer optimization to improve the power conversion efficiency of perovskite solar cells (Fig. 4):
• PC60BM/ZnO bilayer ETL; reactive electron-beam evaporated TiO2; TiO2/crosslinked PC60BM ETL
• New precursor combination for CH3NH3PbI3-xClx perovskite
• New processing method for Cesium/formamidinium based perovskite

Fig.4 Efficiency timeline of perovskite solar cells in this PhD work.

Fig.2 J-V curves of organic solar cells using AHM or PEDOT:PSS HTLs in different photoactive layer systems.

Fig.3 Optical microscopy image of P3HT/PC60BM films after spin coating: (a) MoOx solution; (b) Nafion-modified MoOx solution

Major publications
1. W. Qiu, et al., Advanced Functional Materials, accepted.
2. W. Qiu, et al., Journal of Materials Chemistry A, 2017, 5, 2466.
3. W. Qiu, et al., Energy & Environmental Science, 2016, 9, 484.
4. W. Qiu, et al., Journal of Materials Chemistry A, 2015, 3, 22824.
5. W. Qiu, et al., Organic Electronics, 2015, 26, 30.
6. W. Qiu, et al., ACS Applied Materials & Interfaces, 2015, 7, 3581.
7. W. Qiu, et al., ACS Applied Materials & Interfaces, 2014, 6, 16335.

Güneş Acar Department

Electrical Engineering (ESAT)

PhD defence

17 May 2017

Supervisor

Prof. dr. Claudia Diaz

Co-supervisor

Prof. dr. ir. Bart Preneel

Online Tracking Technologies and Web Privacy Introduction / Objective The main goal of this thesis is to advance the understanding of online tracking by providing an in-depth technical analysis of tracking technologies and their deployment. We focus on advanced, resilient and elusive forms of tracking such as browser fingerprinting, evercookies and cookie syncing.

Research Methodology To contribute to the understanding of real-world practices that involve browser fingerprinting, we developed FPDetective, a framework for the detection and analysis of fingerprinting with a focus on font-based fingerprinting.
 For the canvas fingerprinting study, we used a similar setup.
 Both crawlers were based on modified browsers.
 For the mobile application study, we used a VPN to intercept the network traffic.
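At its core, detecting canvas fingerprinting means instrumenting canvas API calls and flagging scripts that both draw content and read the pixels back. The toy heuristic below is a simplification of the published criteria (the paper also checks image size and user interaction), and the event format is hypothetical.

```python
def flags_canvas_fingerprinting(events):
    """Flag a script's canvas activity as fingerprinting-like if it both
    writes to a canvas and reads the rendered pixels back.
    `events` is an assumed log format: [{"call": "fillText"}, ...]."""
    write_calls = {"fillText", "strokeText"}
    read_calls = {"toDataURL", "getImageData"}
    wrote = any(e["call"] in write_calls for e in events)
    read = any(e["call"] in read_calls for e in events)
    return wrote and read
```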

Results & Conclusions Our research has shown that advanced tracking mechanisms such as browser fingerprinting, evercookies and cookie syncing are actively used by third-party trackers on thousands of websites. We address the difficulty of detecting and analyzing advanced online tracking and enable similar lines of work that rely on browser instrumentation.

Figure 1: FPDetective Framework FPDetective is designed as a flexible, general purpose framework that can be used to conduct further web privacy studies. FPDetective is freely available and can be downloaded from the following URL: https://www.cosic.esat.kuleuven.be/fpdetective/

Figure 2: Different images printed to canvas by fingerprinting scripts

Major publication Acar, G., Eubank, C., Englehardt, S., Juarez, M., Narayanan, A., and Diaz, C. The Web never forgets: Persistent tracking mechanisms in the wild. In 21st ACM Conference on Computer and Communications Security (CCS) (2014), ACM, pp. 674–689

Saurabh Jain Department

Electrical Engineering (ESAT)

PhD defence

17 May 2017

Supervisor

Prof. dr. ir. Frederik Maes

Co-supervisors

Prof. dr. ir. Sabine Van Huffel & Dr. Dirk Smeets (icometrix)

Funding

Marie Curie - TRANSACT

Combining anatomical and spectral information to enhance MRSI resolution and quantification: Application to Multiple Sclerosis Introduction Automatic segmentation of focal WM lesions on conventional brain MRI is important for monitoring multiple sclerosis (MS) disease. These images have high resolution but are less sensitive to lesion characterisation, which can help in understanding MS pathogenesis. The advanced MRI technique MR spectroscopic imaging (MRSI) offers this complementary information, but has low spatial resolution. Therefore, this thesis aims at using high resolution MRI to increase the resolution of MRSI.

Research Methodology Patch-based super-resolution. Inputs: (a) high resolution MR images (T1-w and FLAIR); (b) low resolution MRSI raw data. Steps:
1. Preprocessing: (a) MRI segmentation into GM, WM, CSF and lesions using MSmetrix; (b) metabolite quantification of the MRSI data using SPID; (c) alignment of the MRI in MRSI space.
2. Reconstruction: estimate values in high resolution MRSI voxels using the MRI tissue segmentation.
3. Mean correction: correct the estimated values using the point spread function and the low resolution MRSI.
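The reconstruction and mean-correction steps can be sketched as a least-squares fit of per-tissue concentrations followed by a block-wise correction that forces each low-resolution voxel to reproduce its acquired value. This is a 1D toy version under a simple box point-spread function; the actual method works with 3D patches and the scanner PSF.

```python
import numpy as np

def super_resolve(lowres, fractions, block=2):
    """lowres: (n,) acquired metabolite values, one per low-res voxel.
    fractions: (n*block, n_tissues) tissue fractions of the high-res voxels.
    Fits one concentration per tissue class, predicts high-res values,
    then rescales each block so its mean matches the acquired value."""
    n = lowres.shape[0]
    # Low-res tissue fractions are block averages of the high-res fractions
    F_low = fractions.reshape(n, block, -1).mean(axis=1)
    conc, *_ = np.linalg.lstsq(F_low, lowres, rcond=None)
    hires = fractions @ conc
    # Mean correction: each block must reproduce the acquired low-res value
    means = hires.reshape(n, block).mean(axis=1)
    return (hires.reshape(n, block) * (lowres / means)[:, None]).ravel()
```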

Results & Conclusions The accuracy of our method was validated on both simulated and real images. We showed that our method preserves tissue contrast and structural information, and matches well with the trend of acquired high-resolution MRSI. (a) Bias-corrected FLAIR image, with (b) overlaid lesion segmentation, (c) low-resolution input MRSI, (d) high-resolution ground truth MRSI map, and (e) our result based on (a-c).

These results suggest that the method has potential for clinically relevant neuroimaging applications.

Major publication Jain, S., et al. (2017). Patch-based super-resolution of MR spectroscopic images: Application to Multiple Sclerosis. Frontiers in Neuroscience 11(13), 12 pages.

Cindy Smet Department

Chemical Engineering

PhD defence

22 May 2017

Supervisor

Prof. dr. ir. Jan Van Impe

Co-supervisor

Prof. dr. Vasilis Valdramidis, University of Malta

Funding

KU Leuven PFV/10/002 OPTEC

E-mail

[email protected]

Cold atmospheric plasma for food decontamination Influence of intrinsic and extrinsic factors Introduction / Objective As a non-thermal decontamination technology, cold atmospheric plasma (CAP) offers great potential for the treatment of heat-sensitive food products. During the short treatment, the plasma, generated by applying a voltage to a gas stream, inactivates microbial cells according to different mechanisms. Although CAP is a very promising technology, more fundamental studies are needed before its application in the food industry. The overall objective of this dissertation is to evaluate the potential of CAP as an emerging non-thermal technology for food decontamination. This is realized by investigating the relation between the CAP microbial inactivation efficacy, the food product properties (intrinsic factors) and the environmental (storage) conditions (extrinsic factors).

Research Methodology
 Investigation of the microbial growth dynamics of Salmonella Typhimurium and Listeria monocytogenes in/on carefully designed model systems.
 Design and optimization of the CAP set-up and model systems.
 Assessment and modeling of the microbial inactivation kinetics of S. Typhimurium and L. monocytogenes during CAP inactivation.
 Study of the microbial storage behavior after CAP treatment.
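Inactivation kinetics with a resistant subpopulation are often modeled as a log-linear decay towards a residual level (one common candidate form; the dissertation's actual model may differ):

```python
import math

def surviving_population(t, n0, n_res, k):
    """Log-linear inactivation with a resistant tail:
    N(t) = N_res + (N0 - N_res) * exp(-k t),
    where N0 is the initial count, N_res the residual (resistant)
    population and k the first-order inactivation rate."""
    return n_res + (n0 - n_res) * math.exp(-k * t)
```

Fitting k to survivor counts at several treatment times is then a standard nonlinear regression problem.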

Figure 1: CAP set-up (Dielectric Barrier Discharge, helium/oxygen) and characterization.

Results & Conclusions
 L. monocytogenes (Gram-positive) is more resistant to CAP treatment than S. Typhimurium (Gram-negative).
 Cell immobilization (i.e., surface colonies) induces a higher resistance towards the treatment.
 In a liquid carrier, CAP species have to diffuse through the medium to the cells, resulting in slow and limited inactivation. On a solid(-like) surface, cells are easily reached and inactivated by the CAP species.
 Composition stress (i.e., pH and % (w/v) NaCl) results in resistance to CAP treatment.

 CAP treatment significantly extends the storage life.


Figure 2: Influence of the food (micro)structure on the inactivation kinetics of S. Typhimurium after exposure to CAP.

Figure 3: CAP treated samples (top) vs. untreated controls (bottom), for different storage times and temperatures.

Major publication C. Smet, E. Noriega, F. Rosier, J.L. Walsh, V.P. Valdramidis, J.F. Van Impe (2017). Impact of food structure on the inactivation efficacy of cold atmospheric plasma. International Journal of Food Microbiology, 240, 47-56.

Roxane Van Mellaert Department

Architecture

PhD defence

22 May 2017

Supervisor

Prof. dr. ir.-arch. Mattias Schevenels

Co-supervisor

Prof. dr. ir. Geert Lombaert

Funding

KU Leuven Internal Funds

E-mail

[email protected]

Optimal design of steel structures according to the Eurocodes using mixed-integer linear programming methods Introduction / Objective Structural optimization can be a powerful tool for practical design problems. It not only leads to material savings, but also to a reduced engineering effort, since the design process is automated by adopting a structural optimization tool. However, it is rarely adopted in practice due to the lack of algorithms that are able to account for all characteristics of a real-world problem, to solve it in a reasonable time, and to provide information about the optimality of the solution. The main objective of this PhD research is to develop a method for solving practical discrete sizing optimization problems. The research focuses on the optimal design of frequently built steel structures, such as industrial halls. For this type of structure, finding the lightest design for a given load is crucial. In order to be practically applicable, the method should be able to adopt real profile characteristics from commercial catalogs, to take into account all relevant Eurocode constraints during the optimization, and to achieve global optimality of the solution.

Research Methodology The optimization problem is reformulated as a Mixed-Integer Linear Programming (MILP) problem. In order to facilitate the reformulation, the simultaneous analysis and design approach is adopted: the state variables, such as the internal forces, are considered as additional design variables and the state equations, such as the equilibrium equations and member stiffness relations, are enforced by means of additional constraints. In addition, a set of binary decision variables is introduced for each member of the structure to select a profile from the catalog given by the designer. The obtained MILP is solved for global optimality with the branch-and-bound method.
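The discrete selection that the binary variables encode can be illustrated on a toy statically determinate truss, where the member forces are fixed and only a catalog cross-section must be chosen per member. Brute-force enumeration stands in for the branch-and-bound search here, and the stress-only constraint and catalog values are illustrative, not the full Eurocode constraint set.

```python
from itertools import product

def lightest_design(forces, lengths, catalog, stress_limit):
    """Pick a catalog cross-section A_i for each member such that
    |N_i| / A_i <= stress_limit, minimizing the weight proxy
    sum(A_i * L_i). Exhaustive search over the catalog stands in
    for the MILP branch-and-bound."""
    best, best_weight = None, float("inf")
    for areas in product(catalog, repeat=len(forces)):
        feasible = all(abs(f) / a <= stress_limit
                       for f, a in zip(forces, areas))
        if feasible:
            w = sum(a * l for a, l in zip(areas, lengths))
            if w < best_weight:
                best, best_weight = areas, w
    return best, best_weight
```

In the MILP itself, the same choice is written as one binary variable per (member, catalog entry) pair with a sum-to-one constraint, which is what enables global optimality certificates.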

[Overview figure: basics of the method for truss and frame problems, and application to a real design problem according to Eurocode 3.]

The original MILP reformulation for discrete sizing optimization of truss structures is adopted and extended to frame structures. The method is applied to real design problems taking into account all relevant Eurocode 3 constraints. As a consequence, a post-processing step to account for other constraints is avoided; optimality is therefore retained and additional engineering time is saved. In addition, the performance of the method is compared with that of a genetic algorithm.

Results & Conclusions The results show that the developed method can solve discrete optimization problems of moderate scale including all relevant Eurocode constraints and reaching global optimality. The impact of Eurocode constraints on the optimal design is quantified, and shows that it is important to take into account all relevant Eurocode constraints during the optimization. In addition, the MILP reformulation method provides information about the optimality of the solution, making it very suitable to benchmark the effectiveness of other methods, such as metaheuristic algorithms.

Major publication R. Van Mellaert, G. Lombaert, M. Schevenels (2015). Global size optimization of statically determinate trusses considering displacement, member, and joint constraints. Journal of Structural Engineering, 142(2), 04015120.


César Javier Lockhart de la Rosa Department

Materials Engineering (MTM)

PhD defence

24 May 2017

Supervisor

Prof. Dr. ir. Marc Heyns

Co-supervisor

Prof. Dr. Stefan De Gendt

Funding

imec Nano-material Engineering (NAME)

2D transition metal dichalcogenides for beyond silicon logic devices: improving the Metal / MoS2 interface through molecular doping Introduction / Objective Now more than ever, with the advent of the "Smart" world, the industry is yearning for higher performance, lower power dissipation and lower cost devices. 2D semiconducting MoS2 has demonstrated great potential for beyond-silicon electronics given its ultra-thin body and wide electronic bandgap. However, better understanding is needed, and challenges such as contact resistance and doping need to be addressed. The focus of this thesis is therefore device understanding, contact resistance reduction and controllable doping of MoS2 devices.

Research Methodology To achieve this goal, the principles of operation and fabrication of MoS2-based field effect transistors (FETs) are first discussed. MoS2 FETs were then fabricated experimentally, and various characterization techniques were evaluated and adapted from those used for conventional semiconductors. With the device parameters extracted from the characterization and the experimental observations, a semi-classical model was implemented to better understand MoS2-based devices. This model was then used to study the root of the high contact resistance and possible solutions. Next, more experimental devices were fabricated and surface-functionalized to demonstrate doping of the MoS2 film. Two approaches were followed: self-assembled molecular networks and polymer functionalization. Finally, the previously developed model was combined with experimental data from FETs with MoS2 films of different thicknesses to establish the relation between thickness and surface doping.

Results & Conclusions The major results of this thesis can be summarized as follows:  The high metal / MoS2 contact resistance (about 3 kΩ·µm) was identified as the major drawback, especially for FETs with channel lengths below 100 nm.  It was demonstrated that the sheet resistance of the 2D MoS2 film in the FET varies greatly in the regions under the contact, due to the space charge region created by a Schottky potential barrier (SB) of about 300 meV. This has a big impact when extracting the contact resistivity of a MoS2 FET with conventional techniques.  The mechanisms for electron injection were demonstrated to be thermionic emission, thermally assisted tunneling and direct tunneling, with two main trajectories for electron injection from the metal to the MoS2 film: vertical (to the MoS2 region under the contact) and lateral injection (directly to the channel region).
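The thermionic-emission component of the injection mechanisms above is conventionally described by the Richardson-Dushman relation; the sketch below uses an assumed effective Richardson constant for illustration, and the thesis model additionally includes the tunneling contributions.

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermionic_current_density(phi_b_ev, temp_k, a_star=1.2e6):
    """Richardson-Dushman thermionic emission over a Schottky barrier:
    J = A* T^2 exp(-phi_B / (k_B T))  [A/m^2].
    a_star is a placeholder effective Richardson constant (A m^-2 K^-2);
    the appropriate value for a metal/MoS2 contact is material-specific."""
    return a_star * temp_k ** 2 * math.exp(-phi_b_ev / (K_B_EV * temp_k))
```

At room temperature, each extra 0.1 eV of barrier height suppresses the thermionic current by roughly a factor of 50, which is why a ~300 meV barrier dominates the contact behavior.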

Transfer characteristics of a MoS2 FET after poly(vinyl-alcohol) (PVA) doping.

 Surface doping (i.e. without substitution of atoms) was demonstrated by functionalizing the MoS2 film surface through self-assembly of oleylamine networks and by spin-coating of the polymer poly(vinyl-alcohol) (PVA). Through surface doping, the carrier concentration was increased by about 3×1012 cm-2 and the contact resistance was reduced by 30%. Finally, it was concluded that even though fast progress has been observed for this type of devices, further work is required before industrial integration, such as addressing the low thermal budget of surface doping (260°C) and the low experimental mobility of the devices (30 cm2V-1s-1).

Major publication C. J. Lockhart de la Rosa, A. Nourbakhsh, M. Heyne, I. Asselberghs, C. Huyghebaert, I. Radu, M. Heyns and S. De Gendt, "Highly efficient and stable MoS2 FETs with reversible n-doping using a dehydrated poly(vinyl-alcohol) coating", Nanoscale, 2017, 9 (1), 258-265

Contact resistance reduction after doping.

Gomotsegang Fred Molelekwa Department

Chemical Engineering

PhD defence

29 May 2017

Supervisor

Prof. dr. ir. Bart Van der Bruggen

Co-supervisors

Prof. dr. ir. Patricia Luis; Prof. dr. Stanley Mukhola

Funding

VLIRUOS, NRF

E-mail

[email protected]

Production of potable water for small scale communities using low-cost filtration membrane Introduction / Objective Globally in 2015, about 663 million people lacked access to safe drinking water, and nearly half of them (319 million) live in sub-Saharan Africa. Additionally, 8 out of 10 people still without improved drinking water sources live in rural areas. Membrane technology could offer an alternative; however, it is relatively expensive due to the cost of membranes. This research aims to develop flat-sheet microfiltration membranes from plastic waste, without any use of solvents, as a technology for the decentralized purification of contaminated water in rural villages.

Research Methodology This research was divided into: (1) implementation of membrane technology and (2) membrane development.  Implementation of Membrane Technology: An ultrafiltration hollow fiber membrane pilot plant was installed in Tshaanda, a rural village in South Africa, for the treatment of groundwater to supply potable water. Water quality was analyzed before and after filtration.  Membrane Development: A novel method was developed for the preparation of low-cost microfiltration membranes by employing a one-step manufacturing process using recycled High-Density Polyethylene (rHDPE), Low-Density Polyethylene (rLDPE) and Ca2+-montmorillonite (MMT), thus producing rHDPE/rLDPE blend and rHDPE/rLDPE/Ca2+MMT composite membranes that can be applied for water treatment. The membranes developed were characterized using various techniques and tested for permeability and selectivity. Figure 1 shows the membrane development procedure.

Results & Conclusions  The water quality analysis showed that E. coli was removed to undetectable levels following ultrafiltration, thus rendering the water safe for human consumption.  Membranes developed with the new method showed good mechanical strength. Additionally, the composite membranes produced a higher flux than the neat blend membranes at low pressure. Furthermore, the 4 wt% membranes produced higher flux and rejection than the 2 wt% membranes. Figure 2 shows the rejection potential of the 4 wt% membrane. These membranes are potential candidates for water treatment at low pressure.
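The permeability and selectivity figures reported here are conventionally computed from the permeate volume and the feed/permeate concentrations. As a sketch of these standard definitions (not specific to this work):

```python
def permeate_flux(volume_l, area_m2, time_h):
    """Membrane flux J = V / (A * t), in L m^-2 h^-1 (LMH)."""
    return volume_l / (area_m2 * time_h)

def rejection(feed_conc, permeate_conc):
    """Observed solute rejection R = 1 - C_permeate / C_feed
    (dimensionless; often reported as a percentage)."""
    return 1.0 - permeate_conc / feed_conc
```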

Major publication Molelekwa GF, Mukhola MS, Van der Bruggen B, Luis P (2014) Preliminary Studies on Membrane Filtration for the Production of Potable Water: A Case of Tshaanda Rural Village in South Africa. PLoS ONE 9 (8): 1-10.


Mary Akurut Department

Civil Engineering

PhD defence

01 June 2017

Supervisor

Prof. dr. ir. Patrick Willems

Co-supervisor

Dr. ir. Charles B. Niwagaba

Funding

VLIR-ICP

E-mail

[email protected]

WATER QUALITY ANALYSIS AND MODELLING OF INNER MURCHISON BAY, LAKE VICTORIA Introduction / Objective Over 30 million inhabitants depend on Lake Victoria (LV) in East Africa. The lake has a complex shoreline structure comprising gulfs and bays. Inner Murchison Bay (IMB) is the source of water supply for Kampala city and also the recipient of its wastes. Consequently, water treatment costs at the National Water & Sewerage Corporation (NWSC) have trebled; hence the need to invest in alternative options. There is a lack of insight into the long-term variation of water quality in the IMB, hence the need to study the IMB water quality and link the influence of management actions and climate change to the IMB. This study helped to understand the past and future changes in IMB water quality and supports decision making for sustainable economic development.

Research Methodology This research analyzed water quality patterns in the IMB. It explained the long-term variations in the IMB water quality over the past decade by applying a combined empirical/model-based approach. It used lake level variations to study the IMB hydrodynamics and consequently provided a plausible water quality model to study and obtain a better understanding of the impacts of pollution and climatic conditions. The improved understanding and model were applied to study the effects of climate change on the IMB water quality and to analyze the effect of selected management practices.
 Delft3D-DELWAQ module for the IMB water quality model
 CMIP5 and RCP 4.5 and 8.5 based climate change scenarios
 Effect of dredging and regulating effluent quality

Results & Conclusions Water quality deteriorated exponentially in 2001-2014 due to increased pollution and the high residence time of water in the IMB. The worst water quality was in 2010, attributed to increased diffuse pollution due to improved drainage in the city and the declining wetland effect.
 Climate change will translate into potentially strong but highly uncertain lake level changes (2040s: -12 to +5 & 2075s: -15 to +4) with annual precipitation increasing by 7-18%.
 Increased lake levels lead to frequent saturation and more organic matter degradation, especially for low BOD concentrations.
 30% reduction in the IMB loadings (at the hotspots)

 Random copolymer decreases the Debye length and increases the dielectric strength.

Major publications
1. A. Bharati, R. Cardinaels, J. W. Seo, M. Wübbenhorst, P. Moldenaers, Polymer, 79, 271-282 (2015).
2. A. Bharati, M. Wübbenhorst, P. Moldenaers, R. Cardinaels, Macromolecules, 49, 1464-1478 (2016).
3. A. Bharati, R. Cardinaels, T. Van der Donck, J. W. Seo, M. Wübbenhorst, P. Moldenaers, Polymer, 108, 483-492 (2017).
4. A. Bharati, M. Wübbenhorst, P. Moldenaers, R. Cardinaels, Macromolecules, 50 (10), 3855-3867 (2017).

Rodolfo Torrea Durán Department

Electrical Engineering (ESAT)

PhD defence

06 November 2017

Supervisor

Prof. dr. ir. Marc Moonen

Co-supervisors

dr. ir. Paschalis Tsiaflakis Prof. dr. ir. Luc Vandendorpe

Funding

BESTCOM

E-mail

[email protected]

Cooperative Strategies for Inter-cell Interference Management in Dense Cellular Networks Introduction / Objective Wireless connectivity demands are steadily increasing. A promising solution is to increase the density of base stations deployed in a given area. However, issues like inter-cell interference, limited bandwidth, and excessive channel state information (CSI) exchange between base stations can limit the potential gains. In this thesis we explore different strategies in which base stations and users cooperate to relay information overheard from other base stations or users. By exploiting the overhearing capabilities of the network, we aim to reduce inter-cell interference, while reusing the available bandwidth and avoiding CSI exchange between base stations. This provides a tradeoff in spectral efficiency, spatial diversity, and energy efficiency for all the users in the network.

Research Methodology We present different strategies to deal with inter-cell interference, aiming to reuse the available bandwidth and reduce the CSI exchange between base stations. Given the technical and economical difficulties of maintaining backhaul links between base stations and the high probability of overhearing signals from different BSs, the proposed strategies exploit the overhearing capabilities of the network.
 In a first strategy, the CSI overheard by BSs and coming from users of neighboring cells serves to derive an autonomous power control that does not need to share CSI between BSs.
 In a second strategy, users relay to the neighboring users the overheard data from other BSs through device-to-device communication.
 Finally, BSs also serve as relays by forwarding the overheard data to users from neighboring cells. In a first scenario, all the BSs in the network overhear the transmissions of other BSs; in a second scenario, the BSs overhear the transmission of the closest BSs.
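The spectral-efficiency metric that these strategies trade off is the standard Shannon rate under inter-cell interference; as a minimal sketch:

```python
import math

def shannon_rate(p_signal, p_interference, noise, bandwidth_hz=1.0):
    """Achievable rate of one user under inter-cell interference,
    treated as noise: R = B * log2(1 + S / (I + N)).
    With bandwidth_hz=1 this is the spectral efficiency in bit/s/Hz."""
    return bandwidth_hz * math.log2(1.0 + p_signal / (p_interference + noise))
```

Lowering a neighboring cell's transmit power reduces `p_interference` for this user while reducing that cell's own `p_signal`, which is exactly the coupling that the autonomous power control must balance.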

Results & Conclusions  Exploiting the overhearing capabilities of the network results in a tradeoff between different performance metrics. For instance, the proposed autonomous power control can increase the data rate compared to widely used approaches like iterative waterfilling.  Exploiting users as relays provides a performance improvement in spectral efficiency and bit error rate without decreasing the degrees of freedom compared to aligned frequency reuse.  Using BSs as relays provides a tradeoff in spectral efficiency, BER, and energy efficiency compared to approaches that do not use overhearing BSs. Furthermore, exploiting the system topology helps to reduce the number of time-slots required.

Major publication R. Torrea-Duran, P. Tsiaflakis, L. Vandendorpe, M. Moonen, "Neighbor-Friendly Autonomous Power Control in Wireless Heterogeneous Networks'', EURASIP Journal on Wireless Communications and Networking, vol. 175, 2014, pp. 1-17. doi:10.1186/1687-1499-2014-175.

Amirahmad Mohammadi Department

Mechanical Engineering

PhD defence

07 November 2017

Supervisor

Prof. Joost R. Duflou

Co-supervisor

Prof. Albert Van Bael

Funding

Fonds Wetenschappelijk OnderzoekVlaanderen (FWO)

E-mail

[email protected]

Laser Assisted Incremental Forming Introduction / Objective Single Point Incremental Forming (SPIF) is a flexible sheet forming process characterized by the continuous stepwise forming of a blank sheet through the movement of a small forming tool. Customized three-dimensional parts can be made by the CNC-controlled motion of a hemispherical-tipped tool over a metal, polymer or composite sheet. Using the incremental forming method, small-series production of sheet components with high customization is possible within short lead times. In this die-less forming technology, the time and cost of designing, manufacturing and storing dies for small-series production of sheet metal parts can be reduced. However, SPIF suffers from process window limitations, determined by the maximum achievable forming angle and the geometrical accuracy. In this thesis, a Laser Assisted variant of SPIF (LASPIF) is used to overcome the above-mentioned drawbacks.
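The CNC contour motion described above can be illustrated by generating a toolpath for a simple truncated cone. This is an illustrative sketch only; the step-down value, point density and wall-angle convention (measured from the sheet plane) are assumptions, not the thesis's toolpath strategy.

```python
import math

def cone_toolpath(wall_angle_deg, depth, step_down, r_top, pts=90):
    """Contour toolpath for a truncated cone formed by SPIF: at each
    depth increment the tool traces a circle whose radius shrinks
    according to the wall angle (measured from the sheet plane)."""
    tan_a = math.tan(math.radians(wall_angle_deg))
    path, z = [], 0.0
    while z < depth + 1e-9:
        r = r_top - z / tan_a  # radius at this depth level
        for i in range(pts):
            t = 2.0 * math.pi * i / pts
            path.append((r * math.cos(t), r * math.sin(t), -z))
        z += step_down
    return path
```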

Research Methodology Enhancing the formability and accuracy achievable with LASPIF requires a strategic choice of the additional process parameters involved. The work reported in this dissertation aims to investigate the relations between the different process parameters and the resulting process output, in order to better understand the process mechanisms under heat-assisted forming conditions. For this purpose, it is necessary to model the process and to explore the influence of the respective process parameters. Apart from finite element modelling, a design of experiments (DOE) approach is used to define optimal laser scanning and laser-tool offset conditions to maximize the formability of the material. Moreover, an in-depth study has been performed on the use of tailored microstructures, through local thermally induced hardening or softening transformations, to influence the LASPIF process performance.

[Figure: isotherm size and position with respect to the contact zone.]

Results & Conclusions

A coupled thermo-mechanical FE model has been developed which helps to investigate the deformation mechanism during the SPIF process. The tool-sheet contact zone can be determined (see figure), and the offset between the laser beam and the forming tool can be chosen so as to improve accuracy and formability. Moreover, the heat-induced microstructural changes in the sheet were used to soften or harden the material, which improves formability, reduces the process forces and lowers geometric errors. Finally, the results of laser-assisted post-heat treatment for stress-relief annealing of SPIF-formed parts showed that residual stresses can be effectively reduced in order to achieve higher accuracy after unclamping or trimming of the SPIF-formed part.

Major publications
1. Mohammadi A., Vanhove H., Van Bael A., Duflou J. (2016). Towards accuracy improvement in single point incremental forming of shallow parts formed under laser assisted conditions. International Journal of Material Forming, 9 (3), 339-351.
2. Mohammadi A., Qin L., Vanhove H., Seefeldt M., Van Bael A., Duflou J. (2016). Single point incremental forming of an aged Al-Cu-Mg alloy: Influence of pre-heat treatment and warm forming. Journal of Materials Engineering and Performance, 25 (6), 2478-2488.
3. Mohammadi A., Vanhove H., Van Bael A., Seefeldt M., Duflou J. (2016). Effect of laser transformation hardening on the accuracy of SPIF formed parts. Journal of Manufacturing Science and Engineering, 139 (1), art.nr. 011007, 1-12.

Vikas Dubey Department

Materials Engineering (MTM)

PhD defence

20 November 2017

Supervisor

Prof. Dr. Ingrid De Wolf

Co-supervisor

Em. Prof. Dr. Jean-Pierre Celis

Funding

imec

E-mail

[email protected]

Fine Pitch 3D Integration Using Self-Aligned Assembly Introduction / Objective One of the key requirements to enable 3D integration is a high interconnect density. Increasing the density of interconnects will decrease the pitch size, which in turn will also decrease the dimensions of the micro-bumps used in 3D stacking. Smaller micro-bumps will further reduce the alignment tolerance required in 3D stacking. Fluidic self-aligned assembly is seen as a solution to assist fine-pitch stacking with a traditional pick-and-place tool, because of its stochastic nature and low-cost process. The objective of this thesis is to enable high alignment accuracy (
