Tutorial 1: Poisson problem with Dirichlet conditions and code validation#

In this tutorial, we will learn

  • How to solve a Poisson problem with homogeneous Dirichlet boundary conditions using NGSolve

  • How to implement the method of manufactured solutions

  • How to perform a convergence test

  • How to define the discretization error

  • How to integrate error norms

Problem statement#

In this tutorial, we consider the Poisson equation in the unit square \(\Omega\doteq (0,1)^2\) with homogeneous Dirichlet boundary conditions as a model problem:

\[\begin{equation} \left\lbrace \begin{aligned} -\Delta u &= f \ \text{in} \ \Omega,\\ u &= 0 \ \text{on}\ \partial\Omega. \end{aligned} \right. \end{equation}\]

This is the “Hello World” problem of numerical methods for (elliptic) PDEs, and we show how easily it can be solved in NGSolve.

Afterwards, we will quickly show how to validate your code using the well-known method of manufactured solutions.

Note: This tutorial is adapted from the NGSolve i-tutorials.

Part 1: Five quick steps to compute the solution#

1. Import NGSolve and Netgen Python modules:#

from ngsolve import *
from ngsolve.webgui import Draw

2. Generate an unstructured mesh#

Here we prescribe a maximal mesh size of 0.2 using the maxh argument. The various parts of the boundary are automatically tagged with names that we can use later to impose boundary conditions.

mesh = Mesh(unit_square.GenerateMesh(maxh=0.2))
print(mesh.GetBoundaries())
Draw(mesh);
('bottom', 'right', 'top', 'left')

3. Declare a finite element space and corresponding test and trial functions#

  • Test and trial functions are symbolic objects - called ProxyFunctions - that help you construct bilinear forms (they have no memory to hold solution values).

  • We also specify where we want to set Dirichlet boundary conditions. These are typically built into the problem by constraining the corresponding dofs (strong imposition).

  • Here, we mark all boundary parts tagged “bottom”, “right”, “top”, or “left”.

V = H1(mesh, order=2, dirichlet="bottom|right|top|left")

u = V.TrialFunction()  # symbolic object
v = V.TestFunction()   # symbolic object

Alternatively, you can get both the trial and test functions at once:

u, v = V.TnT()

4. Define and assemble linear and bilinear forms:#
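These forms come from the weak formulation of the model problem: find \(u \in H^1_0(\Omega)\) such that

\[\begin{equation*} a(u,v) \doteq \int_{\Omega} \nabla u \cdot \nabla v \; dx = \int_{\Omega} f\, v \; dx \doteq l(v) \quad \text{for all } v \in H^1_0(\Omega). \end{equation*}\]

In NGSolve these are written almost verbatim: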

a = BilinearForm(V)
a += grad(u)*grad(v)*dx
a.Assemble()

f = 32 * (y*(1-y)+x*(1-x))
l = LinearForm(V)
l += f*v*dx
l.Assemble();

Alternatively, we can use one-liners:

a = BilinearForm(grad(u)*grad(v)*dx).Assemble()
f = LinearForm( 32 * (y*(1-y)+x*(1-x))  ).Assemble()

5. Solve the system:#

To find the numerical solution, we need to

  1. Solve the (potentially large) linear algebra system \(AU = F\) resulting from the discrete weak formulation.

  2. Translate the solution vector \(U\) back into a discrete finite element function. This is done via GridFunctions, which represent functions in the finite element space and contain memory to hold the coefficient vector \(U\).

uh = GridFunction(V)
Ainv =  a.mat.Inverse(freedofs=V.FreeDofs()) 
uh.vec.data =  Ainv * l.vec
Draw(uh)
Draw(-grad(uh), mesh, "Flux", vectors=True)

The Dirichlet boundary condition constrains some degrees of freedom. The argument V.FreeDofs() indicates that only the remaining “free” degrees of freedom should participate in the linear solve.
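If you want to see which dofs are constrained, you can inspect the BitArray returned by FreeDofs. A small illustrative snippet (not part of the original tutorial):

print("total number of dofs:", V.ndof)
print(V.FreeDofs())   # one bit per dof; dofs fixed by the Dirichlet condition are 0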

You can examine the coefficient vector of the solution if needed:

print(uh.vec)
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
       0
 0.540719
 0.621496
 0.471805
 0.23069
 0.533917
 0.534433
  0.4266
 0.48363
 0.679081
 0.460126
 0.21072
 0.44296
 0.610643
 0.87446
 0.926491
 0.787662
 0.842983
 0.848853
 0.985887
       0
       0
       0
       0
 0.384181
       0
       0
       0
       0
 0.353848
       0
 -1.02873
 -0.189321
       0
 -0.821619
 -0.255291
       0
 -0.271395
 -0.340681
 0.039547
 -0.245775
       0
 -0.250502
 0.0361549
       0
 -0.368389
 -0.200489
       0
 -0.170328
 -0.401524
 -1.02858
 0.0113522
       0
 -0.507053
 0.0474118
       0
 -0.331946
 -0.340653
       0
 -0.343937
 -0.379177
 0.0192259
 -0.259136
       0
 -0.247087
 0.0339563
       0
 -0.340554
 -0.251504
       0
 -0.781649
 -0.247417
 -0.166544
 -0.411468
 -0.421766
 -0.250489
 -0.176919
 -0.29508
 -0.45228
 -0.0611119
 -0.386481
 -0.219666
 -0.0543211
 -0.215916
 -0.302075
 -0.192277
 -0.164557
 -0.319371
 -0.185632
 -0.341206
 -0.122301
 -0.143452
 -0.229737
 -0.19679
 -0.405124
 -0.391905
 -0.453703
 -0.0370054
 -0.345344
 -0.241822
 -0.048346
 -0.169615
 -0.204892
 -0.423384
 -0.333871
 -0.333907
 -0.210406
 -0.328185
 -0.379819
 -0.298437
 -0.19492
 -0.222619
 -0.321429
 -0.23126

You can see the zeros coming from the zero boundary conditions.

Ways to interact with NGSolve#

  • A Jupyter notebook (like this one) gives you one way to interact with NGSolve. When you have a complex sequence of tasks to perform, the notebook may not be adequate.

  • You can write an entire Python script in a text editor and run it with python on the command line. (A script of the above is provided in poisson.py.)

    python3 poisson.py
    
  • If you want the Netgen GUI, then use netgen on the command line:

    netgen poisson.py
    

    You can then ask for a python shell from the GUI’s menu options (Solve -> Python shell).

Part 2: Method of manufactured solutions#

The idea is simple: for a given PDE problem involving

  • partial differential equation(s)

  • prescribed boundary values (e.g. Dirichlet, Neumann)

  • prescribed initial values (for time-dependent problems)

we pick a solution function \(u_{ex}\) ourselves and then compute analytically the corresponding data for the PDE. For instance, the right-hand side \(f\) is simply \(f = -\Delta u_{ex}\).

Let’s pick

\[\begin{equation*} u_{ex}(x,y) = 16x(1-x)y(1-y) \end{equation*}\]

Then

\[\begin{equation*} -\Delta u_{ex}(x,y) = 32(y(1-y)+x(1-x)), \quad u_{ex}|_{\partial \Omega} = 0, \end{equation*}\]

so by some funny coincidence this is exactly the \(f\) we used above ;).
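If you prefer not to compute the Laplacian by hand, a computer algebra system can manufacture the right-hand side for you. A minimal sketch using sympy (an extra dependency, not part of NGSolve):

import sympy as sp

xs, ys = sp.symbols("x y")
u_ex_sym = 16*xs*(1 - xs)*ys*(1 - ys)

# manufactured right-hand side: f = -Laplace(u_ex)
f_sym = -(sp.diff(u_ex_sym, xs, 2) + sp.diff(u_ex_sym, ys, 2))
print(sp.simplify(f_sym))   # an expression equal to 32*(y*(1-y) + x*(1-x))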

From theory we know that the \(L^2\) error can be estimated by

\[\begin{equation*} err(h) = \left(\int_{\Omega}(u - u_h)^2 \;dx \right)^{1/2} \leqslant C h^{p} \end{equation*}\]

where \(p = k+1\) and \(k\) is the polynomial order of finite elements.

So asymptotically for \(h \to 0\) we expect \(err(h) \approx C h^{p}\), and therefore

\[\begin{equation*} \log(err(h)) \approx p\log(h) + \log(C) \end{equation*}\]

For a convergence study, we compute the discrete solution and measure the error against the known analytical solution on a series of refined meshes \(\mathcal{T}_k\) with \(h_k \to 0\).

If we now plot the computed errors against the corresponding mesh sizes \(h_k\) in a \(\log\)-\(\log\) plot, we should roughly see a straight line with a slope of \(\approx p = k+1\).

u_ex = 16*x*(1-x)*y*(1-y)
mesh = Mesh(unit_square.GenerateMesh(maxh=0.2))
num_ref = 3
errs = []

for ref in range(num_ref+1):
    V = H1(mesh, order=1, dirichlet="bottom|right|top|left")
    u = V.TrialFunction()  # symbolic object
    v = V.TestFunction()   # symbolic object
    a = BilinearForm(V)
    a += grad(u)*grad(v)*dx
    a.Assemble()

    f = 32 * (y*(1-y)+x*(1-x))
    l = LinearForm(V)
    l += f*v*dx
    l.Assemble()

    uh = GridFunction(V)
    Ainv =  a.mat.Inverse(freedofs=V.FreeDofs()) 
    uh.vec.data =  Ainv * l.vec
    errs.append(sqrt(Integrate((uh - u_ex)*(uh - u_ex), mesh)))
    print(errs[-1])
    Draw(uh)   # visualize the solution on the current mesh
    mesh.Refine()
0.04256818749401908
0.012638575655332903
0.003156613693074065
0.0007839078526542161
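The slope we read off between two consecutive meshes is the experimental order of convergence (EOC). Since uniform refinement halves the mesh size (\(h_{k-1}/h_k = 2\)), this is what the helper compute_eoc below evaluates:

\[\begin{equation*} \mathrm{EOC}_k = \frac{\log\big(err(h_{k-1})/err(h_k)\big)}{\log\big(h_{k-1}/h_k\big)} \end{equation*}\]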
import numpy as np
import pandas as pd
def compute_eoc(errs, fac=2):
    eocs = np.log(np.array(errs)[:-1]/np.array(errs)[1:])/np.log(fac)
    eocs = np.insert(eocs, 0, np.inf)
    return eocs

eocs = compute_eoc(errs)
print(eocs)
print(errs)
table = pd.DataFrame({'Error': errs, 'EOC' : eocs})
display(table)
[       inf 1.75194178 2.00138426 2.00962173]
[0.04256818749401908, 0.012638575655332903, 0.003156613693074065, 0.0007839078526542161]
      Error       EOC
0  0.042568       inf
1  0.012639  1.751942
2  0.003157  2.001384
3  0.000784  2.009622
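We can also produce the \(\log\)-\(\log\) plot discussed above. A small sketch using matplotlib (an extra dependency); since the initial mesh was generated with maxh=0.2 and each refinement halves the mesh size, we use \(h_k = 0.2 \cdot 2^{-k}\) as a proxy for the mesh size:

import matplotlib.pyplot as plt

hs = [0.2 * 0.5**k for k in range(len(errs))]             # approximate mesh sizes
plt.loglog(hs, errs, "o-", label="computed L2 error")
plt.loglog(hs, [errs[0]*(h/hs[0])**2 for h in hs], "k--", label="slope 2 reference")
plt.xlabel("h")
plt.ylabel("error")
plt.legend()
plt.show()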

Exercise#

Now repeat the convergence study above for \(p=2,3,4\). What do you observe and why?
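As a possible starting point (a sketch, not the only way to organize it), you can wrap the study above in a function that takes the polynomial order as a parameter. The integration order in Integrate is raised here so that the error integral is evaluated accurately for higher-order elements:

def convergence_study(order, num_ref=3, maxh=0.2):
    # start every study from the same coarse mesh
    mesh = Mesh(unit_square.GenerateMesh(maxh=maxh))
    u_ex = 16*x*(1-x)*y*(1-y)
    errs = []
    for ref in range(num_ref+1):
        V = H1(mesh, order=order, dirichlet="bottom|right|top|left")
        u, v = V.TnT()
        a = BilinearForm(grad(u)*grad(v)*dx).Assemble()
        l = LinearForm(32*(y*(1-y)+x*(1-x))*v*dx).Assemble()
        uh = GridFunction(V)
        uh.vec.data = a.mat.Inverse(freedofs=V.FreeDofs()) * l.vec
        # higher integration order for an accurate error measurement
        errs.append(sqrt(Integrate((uh - u_ex)*(uh - u_ex), mesh, order=2*order+4)))
        mesh.Refine()
    return errs

for order in [2, 3, 4]:
    print("order", order, ":", compute_eoc(convergence_study(order)))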