The books cover first-year calculus and linear algebra and set out with the following ambitious goals:

1. To go deeper into the mathematical analysis than is done in most calculus textbooks. This means no shortcuts: dealing properly with convergence and properly defining the real numbers.
2. To integrate algorithms, programming, numerical analysis, and applications into the regular mathematics curriculum.
3. To create a book that is compact and easy to read.

This is quite a challenge, but quite possible. In fact, (3) is made possible by (1) and (2): by spending time on defining and analyzing convergence and the completeness of the real number system (via Cauchy sequences), we can prove convergence of the algorithms presented in the books (bisection, fixed-point iteration, and Newton's method); conversely, by spending time on programming and applying the algorithms, the understanding of the mathematical analysis is greatly improved.

For example, the proof of the Banach fixed-point theorem is carried out by executing the fixed-point algorithm and observing that a Cauchy sequence is formed. And since the students work actively with generating Cauchy sequences (by programming), the concept of a Cauchy sequence becomes natural, practical, and understandable. In this way, proofs and algorithms blend into one.
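To make this concrete, here is a minimal sketch of my own (not taken from the books) showing how executing a fixed-point iteration produces a Cauchy sequence in practice: for a contraction, the successive differences \(|x_{n+1} - x_n|\) shrink geometrically.

```python
import math

def fixed_point(g, x0, n):
    """Return the first n+1 iterates of x_{k+1} = g(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# g(x) = cos(x) is a contraction on [0, 1], so the iterates converge.
xs = fixed_point(math.cos, 1.0, 30)

# The successive differences decrease geometrically -- this is how the
# Cauchy property shows up when one runs the algorithm.
diffs = [abs(b - a) for a, b in zip(xs, xs[1:])]
assert all(d2 < d1 for d1, d2 in zip(diffs, diffs[1:]))
```

Running the iteration and printing `diffs` makes the abstract definition of a Cauchy sequence directly observable.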

It’s a big undertaking to write a book (let alone four books) and it’s been a massive effort by my co-authors and myself. Here’s a glimpse of how the work has progressed since the project was started in 2013.

Now we look forward to completing the remaining three books in the series. The titles of the four books in the series will be:

- Differentialkalkyl och skalära ekvationer (Differential Calculus and Scalar Equations)
- Integralkalkyl och ordinära differentialekvationer (Integral Calculus and Ordinary Differential Equations)
- Linjär algebra och linjära ekvationer (Linear Algebra and Linear Equations)
- Flervariabelanalys och partiella differentialekvationer (Multivariable Calculus and Partial Differential Equations)

We offer a great opportunity to work in a cutting-edge research project, combining state-of-the-art technologies with fundamental computational methods. Depending on external funding, the position may be extended after graduation. The starting date is “as soon as possible”!

Some proposed general topics include:

• VC@Chalmers implementation (Unreal Engine, C++ or any Web-enabled API).

• Computational Fluid Dynamics visualisation in Unreal Engine.

• Virtual Reality capabilities in Unreal Engine.

• High Performance Computing extensions to VirtualCity@Chalmers.

• BigData extensions to VirtualCity@Chalmers.

• Turbulence modeling for urban scale simulations.

• Case Study: Fluid-structure interaction simulation for “Linbana Göteborg”.

• Case Study: Flooding simulation for Frihamnen.

• Traffic simulation based on synthetic population models.

• Simulation of pollution on the urban scale.

• Procedural/algorithmic generation of cities in Unreal Engine.

Visit http://virtualcity.chalmers.se for more information.

For questions and applications, please contact:

Anders Logg, logg@chalmers.se

Vasilis Naserentin, vasilis.naserentin@chalmers.se, +46 (0) 31 772 6381

*In January 2018, Chalmers launched the interdisciplinary project VirtualCity@Chalmers. The project is initiated by the Area of Advance Building Futures and aims to build a virtual twin of the two campus sites modeled as an immersive 3D world. We are currently building our prototype. Some of the main features include real-time coupling to some of Chalmers’ simulation/research software platforms and MR/VR capabilities visualised and powered by Unreal Engine.*

Our application HoloSpace allows a user to walk around in a building and, using the HoloLens sensors, automatically build a geometric representation of the surrounding room. This geometric representation is then used to identify and map the physical surroundings to a BIM model residing on a BIM cloud server. Once the mapping has been computed, bringing virtual objects such as structures, pipes and wires into the physical world is easy. This essentially gives the user X-ray vision to see through walls, floors and ceilings and explore a building in detail.

The list of applications is endless; obvious scenarios include inspections for building maintenance and planning of renovations, but also on-site editing of BIM models, and even creation of BIM models for older buildings.

Programming the HoloLens has been a fun experience. Having programmed C++ on Linux for the past 20 years, the switch to C# took something like 15 minutes, and working in Visual Studio is actually not that bad, even for someone heavily addicted to Emacs.

HoloSpace is also IoT-enabled and integrates a real-time feed from an IoT server with BIM data streaming from a BIM server, to provide holographic visualization of sensor data such as room temperature, humidity, illuminance, and room presence. This makes it possible to see through walls to detect whether the room next door is occupied or available. In a future version, we will also enable interaction with light switches and booking systems.

The following screenshots demonstrate some of the features of our application.

Although these screenshots are all taken from within the HoloLens emulator, the real (mixed reality) experience is much more impressive. Actual live footage/video will be published shortly!

For more information about our project and future updates, visit our web page: Hyperion Computing.

\(
R_{ab} - \frac{1}{2} R g_{ab} = \frac{8\pi G}{c^4} T_{ab},
\)

and modeling the right-hand side stress-energy tensor as a collisionless gas (kinetic model). In this model, a collection of particles is described by a distribution function \(f = f(t, x, p)\) counting the density of particles at time \(t\) at position \(x\) with (four-)momentum \(p\).

We make some simplifying assumptions by considering a stationary (rotating) axisymmetric system of collisionless particles, which could for example be a large collection of gravitating stars (a galaxy). The result is the following set of integro-differential equations for the four components \(\nu, B, \mu, \omega\) of the (simplified) metric \(g_{ab}\),

where the right-hand side integrals are given by

We discretize the left-hand side using the finite element method (FEM) and compute the integrals by numerical quadrature. The nonlinear system is solved by fixed-point iteration in combination with normalization (of the total mass) in each iteration, together with some Anderson acceleration to improve convergence.
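The solver itself is not listed in the post; the following is a schematic sketch of the iteration structure only, with made-up stand-ins (`G` for the actual FEM update map, `total_mass` for the mass functional), and without the Anderson acceleration step:

```python
import numpy as np

def G(u):
    # Toy contraction standing in for the real FEM update map.
    return 0.5 * u + np.array([1.0, 2.0])

def total_mass(u):
    # Stand-in for the total-mass functional of the solution.
    return np.sum(np.abs(u))

def normalized_fixed_point(u0, target_mass, tol=1e-10, maxit=100):
    """Fixed-point iteration with renormalization of the total mass
    in each iteration."""
    u = u0
    for _ in range(maxit):
        v = G(u)
        v *= target_mass / total_mass(v)  # normalize total mass
        if np.linalg.norm(v - u) < tol:
            return v
        u = v
    return u

u = normalized_fixed_point(np.array([0.0, 0.0]), target_mass=1.0)
```

For this toy update map, the normalized iteration converges to \((1/3, 2/3)\); in the actual solver, the same loop structure is wrapped with Anderson acceleration to improve convergence.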

Our model has two central parameters: \(L_0\), the minimum angular momentum of the particles, and \(E_0\), the maximum energy of the particles in the system. By carefully tuning these parameters (path-following), we are able to find different regimes that exhibit some quite interesting features. In particular, we believe we are able to construct paths towards both (Kerr) black holes and cosmic strings. Another interesting feature is the formation of ergoregions around the support of the matter (see the paper for details).

In future work, we will explore whether we can use the Kerr black hole limit to create wormholes and whether we can extract energy from our ergoregions. Stay tuned!

For now, enjoy these animations illustrating the path-following fixed-point iteration converging to the cosmic string limit for decreasing particle energy \(E_0\).


There’s a great discussion on 400m splits over at Jimson Lee’s Speed Endurance blog, suggesting that relatively even split times are optimal. See some good posts on the 400m.

However, these results are for elite sprinters who can run the 400m in 45s or better, and may not apply to a masters/veteran athlete like myself with a capacity to run the 400m in 51s.

So here’s an attempt to build a simple model for the optimal 400m race based on my own races this winter. I’ve done quite a few meets, mostly to win some medals and records but also with a motivation to collect data for my model.

Let’s look at the data. The table shows the splits and final times for my eight races.

Looking at the times, race 6 is a clear outlier, which is expected. The race in question was from the first rounds of the European Masters Championships in Madrid, where I tried to run as slowly as I possibly could to conserve energy for the semis. Sadly, race 8 (the final) is also an outlier in the sense that I got unusually tired on the second lap, possibly as a result of having done two consecutive 400m races the previous day.

To build a model, let \(t_1\) be the split for the first 200m and let \(t_2 = f(t_1)\) be the split for the second 200m. Then the final time is \(T = t_1 + t_2 = t_1 + f(t_1)\). We also let \(t_0\) denote the fastest (not optimal) split which is identical to the 200m PR.

We now set out to determine \(f\) and subsequently find the optimal 400m time \(T^{\star} = \min\,T(t)\) and the optimal split \(t_1^{\star} = \arg\min\,T(t)\).

As per the discussion above, we make the following assumptions:

- (A1) \(f(t_0) = \infty\)
- (A2) \(f(\infty) = t_0 - 0.5\)

Assumption (A1) says that if the first 200m is run flat out, then the second 200m will be infinitely slow.

Assumption (A2) says that if one walks the first 180m, and then accelerates up to the 200m mark, then the split for the second 200m will be 0.5s better than the 200m PR as a result of the flying start. [The value of 0.5s does not have a great influence on the result and could be replaced by some other reasonable number like 1.0s.]

A simple model for \(f\) that satisfies both (A1) and (A2) is

\( f(t) = C\,(t-t_0)^{-\alpha} + t_0 - 0.5 \)

where \(C, \alpha \geq 0\) are constants to be determined.

With \(t_2 = f(t_1)\), it follows that

\( \log(t_2 - t_0 + 0.5) = \log C - \alpha \log (t_1 - t_0) \)

and so \(C\) and \(\alpha\) can be determined by linear regression; that is, by fitting a first degree polynomial \(y = kx + m\) to the data points of the table above with

\(
\begin{array}{rcl}
x &=& \log(t_1 - t_0), \\
y &=& \log(t_2 - t_0 + 0.5).
\end{array}
\)

We may then find the parameters \(C\) and \(\alpha\) by \(C = \exp(m)\) and \(\alpha = -k\).

This is easily done using polyfit in Python (or MATLAB); see the code below. The result is \(C = 5.07\) and \(\alpha = 0.082\). The model, together with the data points, is shown in the figures below.

To determine the optimum, we note that \(T(t) = t + f(t) = t + C(t - t_0)^{-\alpha} + t_0 - 0.5\) and thus

\(T'(t) = 1 - C\alpha\,(t - t_0)^{-\alpha-1}\).

Solving for the critical point \(t_1^{\star} = \mathrm{arg\,min}\, T(t)\), we set \(T'(t_1^{\star}) = 0\) and find that

\(t_1^{\star} = t_0 + (C\alpha)^{\frac{1}{\alpha+1}} \approx 23.44\).

The prediction of the model is that the optimal 400m time is **51.36** with splits **23.44 / 27.92**. This is surprisingly close to my PR = SB = 51.38!
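As a quick sanity check, plugging the fitted values \(t_0 = 23.0\), \(C = 5.07\), \(\alpha = 0.082\) into the formulas above reproduces these numbers:

```python
# Evaluate the optimal split using the fitted parameters from the text.
t0, C, alpha = 23.0, 5.07, 0.082

# Critical point of T(t) = t + C*(t - t0)**(-alpha) + t0 - 0.5
t1_star = t0 + (C * alpha) ** (1.0 / (alpha + 1.0))
t2_star = C * (t1_star - t0) ** (-alpha) + t0 - 0.5
T_star = t1_star + t2_star

print(f"{t1_star:.2f} {t2_star:.2f} {T_star:.2f}")  # prints 23.44 27.92 51.36
```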

This indicates that I have likely done a good job of maxing out my capacity on the 400m this season, and to improve further I need to get faster (decrease \(t_0\)) and improve my endurance (decrease \(C\)).

As a final remark, it is interesting to note that at \(t = 23.8\) we have \(f'(t) \approx -0.5\), which tells us that by running the first 200m 0.2s faster, I lose only 0.1s on the second 200m, so the net result is a gain of 0.1s in total time.

For reference I have included the Python code if anyone wants to try this at home.

```python
# Analysis of 400m indoor 2018
# Anders Logg 2018-03-24
#
# Model: t2 = f(t1) where
#
#   f(t) = C / (t - t0)^p + t0 - 0.5
#   t0   = 200m PR (from blocks)
#
# Note: f(t0)  = inf       (maxing out first 200m ==> failing to finish)
# Note: f(inf) = t0 - 0.5  (extremely slow first 200m ==> max second 200m)

# XKCD style plotting
from pylab import *
xkcd()

# Data
t0 = 23.0
t1 = array([24.8, 24.6, 23.8, 24.3, 23.8, 25.4, 24.3, 24.0])
T = array([52.22, 51.85, 51.55, 51.61, 51.38, 54.41, 51.96, 52.37])
O = [5, 7]  # outliers

# Sort inliers/outliers
I = [i for i in range(len(t1)) if i not in O]
t1_i = t1[I]
t1_o = t1[O]
T_i = T[I]
T_o = T[O]

# Fit model
t2_i = T_i - t1_i
t2_o = T_o - t1_o
p, logC = polyfit(-log(t1_i - t0), log(t2_i - t0 + 0.5), 1)
C = exp(logC)
t = linspace(min(t1) - 0.75, max(t1) + 0.5)
f = C / (t - t0)**p + t0 - 0.5
print("C =", C)
print("p =", p)

# Compute derivative at t = 23.8
print("Derivative at t = 23.8:", -C*p*(23.8 - t0)**(-p - 1))

# Compute optimum
t1_s = t0 + (C*p)**(1.0 / (p + 1))
t2_s = C / (t1_s - t0)**p + t0 - 0.5
T_s = t1_s + t2_s
print("Optimal split: %.2f / %.2f" % (t1_s, t2_s))
print("Optimal 400m: %.2f" % T_s)

# Plot split vs split
figure()
plot(t1_i, t2_i, 'og')
plot(t1_o, t2_o, 'sr')
plot(t, f, '-b')
plot(t1_s, t2_s, 'ob')
xlabel('$t_1$')
ylabel('$t_2$')
grid(True, lw=0.5)
title('Split #2 200m vs #1 200m split')
savefig('optimal400m_1.png')

# Plot total time vs split
figure()
plot(t1_i, T_i, 'og')
plot(t1_o, T_o, 'sr')
plot(t, t + f, '-b')
plot(t1_s, T_s, 'ob')
xlabel('$t_1$')
ylabel('$T$')
grid(True, lw=0.5)
title('400m time vs #1 200m split')
savefig('optimal400m_2.png')

show()
```


We are working closely with the City of Gothenburg, and in the next phase we plan to scale the simulation to the city level. Below is a mockup of our design and a target for the development this first year.

I’m very happy with the project team we have managed to assemble. The team includes experts in mathematical modeling, simulation, architecture, human-computer interaction, communication, and – very importantly – game programming. We’ll be using Unreal Engine to build our frontend, while the backend will be running FEniCS solvers, IPS IBOFlow and data repositories in the cloud.

So far, we have managed to implement initial versions of all components, including a first version of the game engine/user interface, automated mesh generation, a couple of basic FEniCS solvers (Stokes and advection-diffusion), as well as getting the components to talk to each other (via JSON over TCP). Some screenshots are posted below.
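The message format is not described in the post; the following is a minimal sketch of how two components could exchange one JSON message over TCP with newline-delimited framing (the message fields here are made up for illustration, not the project's actual protocol):

```python
import json
import socket
import threading

# "Backend": listen on an ephemeral port on localhost.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

received = []

def serve_once():
    # Accept one connection and read one newline-framed JSON message.
    conn, _ = srv.accept()
    with conn, conn.makefile() as f:
        received.append(json.loads(f.readline()))

t = threading.Thread(target=serve_once)
t.start()

# "Frontend": send a hypothetical mesh request (made-up fields).
with socket.create_connection(("127.0.0.1", port)) as c:
    msg = {"command": "generate_mesh", "resolution": 32}
    c.sendall((json.dumps(msg) + "\n").encode())

t.join()
srv.close()
```

Newline framing keeps message boundaries unambiguous over the stream socket; a length-prefixed frame would work equally well.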

For more information on VirtualCity@Chalmers, **make sure to follow our blog** where we will be posting updates continuously.


This is already better than the number of downloads for the FEniCS book. But I guess it helps that the FEniCS Tutorial is Open Access, thanks to the generous support of Simula Research Laboratory. Be sure to check out the other books in the series!

The manuscript was prepared using Hans Petter’s Doconce system. On the upside, this generates output in LaTeX, HTML, Sphinx, PDF, etc. from a single source and supports a ton of nifty features, like inlining excerpts from Python code examples, cross-referencing, and more. On the downside, I could not get Doconce to install properly on my MacBook, so book writing had to be done through a Docker image. See below for an example of typesetting one of the pages dealing with the Navier-Stokes equations.

The full source of the book, which is itself licensed under a Creative Commons license, can be found here.

Currently a Chinese translation of the book is being prepared so watch out…


It’s always exciting when I get together with my colleagues Håkan Andréasson and Ellery Ames, and good things tend to happen. Our work could be summarized as solving the Einstein-Vlasov equations and trying to find interesting things, such as black holes, geons and cosmic strings. Previously, we found some interesting galaxies, in particular the Hoag-like object depicted below.

Results will be published soon but I can already reveal that we found at least a couple of very exciting features – that I will describe in more detail later when we have verified our findings.


For this year, my big goals are the European Masters Athletics Championships Indoor (EMACI) in Madrid and the World Masters Athletics Championships (WMAC) in Malaga, where I plan to run the 400m, aiming for sub-51 at EMACI and better yet at WMAC…


Thanks to Simula and Springer, the book is released under the Creative Commons Attribution (4.0) license and is freely available from the publisher’s web page, as well as from the FEniCS web pages.

Future plans include the creation of a Chinese edition of the book (this summer) and a volume II with advanced topics (in the future).
