
Exam 2008: Design of offshore structures
Problem 1
a)
CAPEX: Cost during field development phase
OPEX: Cost during production phase
Net Present Value, NPV: NPV = IPV – CPV, where IPV is the present value of total income and CPV is the present value of total cost.
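As a brief illustration (the discount rate r, the project horizon T and the yearly cash flows I_t and C_t are notation assumed here, not given in the exam text):

IPV = \sum_{t=0}^{T} \frac{I_t}{(1+r)^t}, \qquad CPV = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}, \qquad NPV = IPV - CPV

i.e. income earned during the production phase and cost incurred during field development are both discounted back to the decision point before they are compared.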
b)
Ranking of regulations and standards suggested by the team:
1. “The Framework Regulation”
2. “The Facilities Regulation”
3. “NORSOK N-001 Structural Design”
4. “Det Norske Veritas Recommended Practice DNV-RP-C205 Environmental Conditions and Environmental Loads”
5. “Student Oil Standard – Best Practice for Structural Design”
Several standards useful during structural design are not included. The most important are “NORSOK N-003 Actions and Action Effects” and “NORSOK N-004 Design of steel structures”.
c)
* JONSWAP spectrum: Wave frequency spectrum for a pure wind sea. Can also be
used for a pure swell sea with a large peakedness factor.
Torsethaugen spectrum: A wave frequency spectrum describing combined seas, i.e.
sea states consisting of a wind sea system superimposed on an incoming swell system.
* If we look at the drag term of the Morison equation, it is non-linear in the fluid particle speed (k_D u|u|). This means that even for a sinusoidal wave with frequency f, load fluctuations will be experienced at f (the largest term), 2f, 3f, 4f, …. The load components corresponding to multiples of the wave frequency are referred to as super-harmonic loading.
The load component corresponding to f is typically by far the largest. The super-harmonic load terms only become important if they happen to be very close to a natural frequency of a lightly damped structure.
This means that for the selected structural concepts, super-harmonic excitation can be important if the natural frequencies of the structures become so low that they coincide with, say, the 2f or 3f load component.
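A minimal numerical sketch (frequency, amplitude and current values are assumed, and k_D is omitted since it only scales the result) illustrating how the non-linear drag term spreads load energy onto multiples of the wave frequency: for a purely sinusoidal velocity the odd multiples (3f, 5f, …) appear, while a superimposed steady current also brings in the even multiples.

import numpy as np

f = 0.1                                         # wave frequency [Hz] (assumed)
U = 2.0                                         # particle speed amplitude [m/s] (assumed)
uc = 0.5                                        # steady current speed [m/s] (assumed)
t = np.arange(0.0, 1.0 / f, (1.0 / f) / 1024)   # one wave period, 1024 samples
u = U * np.cos(2.0 * np.pi * f * t)

for label, vel in [("wave only", u), ("wave + current", u + uc)]:
    drag = vel * np.abs(vel)                    # drag term; k_D omitted, it only scales the result
    amps = 2.0 * np.abs(np.fft.rfft(drag)) / len(t)
    print(label)
    for k in range(1, 6):                       # load components at f, 2f, ..., 5f
        print(f"  {k}f : {amps[k]:.3f}")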
* The ratio between leg diameter and wave length is very small. This suggests that the Morison equation should be a good load model.
* The ratio of wave height to leg diameter is 10, suggesting that the loading is dominated by the drag term. This means that the horizontal loading is governed by the horizontal particle speed. Accordingly, the maximum load is found as the wave crest is passing the column, and for the load calculation it is important to integrate the load density up to the exact (instantaneous) surface.
When the wave height is merely 2m, the ratio of wave height to leg diameter is 1, suggesting that the inertia term of the Morison equation is governing. This means that it is the horizontal particle acceleration that defines the maximum load, i.e. the maximum load is obtained at the zero-crossing phase of the wave before the wave crest passes the column. The load will be of equal magnitude at the zero crossing after the crest, but with opposite sign.
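A small sketch (all values assumed) comparing the amplitudes of the drag and inertia terms per unit length of the leg at the mean water level, using deep-water linear wave kinematics, to illustrate the switch from drag to inertia dominance as H/D drops from 10 to 1.

import numpy as np

rho, Cd, Cm = 1025.0, 1.0, 2.0            # sea water density, drag and inertia coefficients (assumed)
D = 2.0                                   # leg diameter [m] (assumed)
T = 12.0                                  # wave period [s] (assumed)
omega = 2.0 * np.pi / T

for H in (20.0, 2.0):                     # wave heights giving H/D = 10 and H/D = 1
    u = omega * H / 2.0                   # horizontal particle speed amplitude at the surface
    a = omega**2 * H / 2.0                # horizontal particle acceleration amplitude
    f_drag = 0.5 * rho * Cd * D * u**2                # drag force amplitude per unit length [N/m]
    f_inertia = rho * Cm * np.pi * D**2 / 4.0 * a     # inertia force amplitude per unit length [N/m]
    print(f"H = {H:4.1f} m, H/D = {H/D:4.1f}, drag/inertia amplitude ratio = {f_drag/f_inertia:.2f}")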
Problem 2
a) Since the natural period is so small, dynamics will not be important. The
largest waves will have a period of around 15s, i.e. 10 times the natural period.
A natural period of 1.4s is also too short to cause concern for significant super-harmonic loading.
The wave height corresponding to a 10^-2 annual exceedance probability (Table 4 in the Metocean Report): 29m
The corresponding wave period shall be varied between 12.6s and 16.3s. The 10^-2 load should be taken as the largest load within this interval.
Regarding wind: If we assume that the characteristic length of the topside is less than 50m, we can use the 10^-2 annual probability 3s wind gust speed, i.e. 40·1.47 = 58.8 m/s at 10m above sea level. One will of course have to calculate the speed at a representative height using Table 2 of the Metocean Report.
Since wave loads will be dominating, one could possibly use the 1-minute wind speed instead, i.e. 50.8 m/s at 10m above sea level. A proper wind speed for the topside load must be calculated for the relevant height using Table 2 in the Metocean Report.
It is important to include current since the drag term will include a term proportional to 2·u_wave·u_current. The term u_current^2 can be neglected.
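A short expansion (with the notation u_w for the wave-induced particle velocity and u_c for the current, assumed here) showing where the cross term comes from:

(u_w + u_c)\,|u_w + u_c| = u_w|u_w| + 2\,|u_w|\,u_c + \mathrm{sign}(u_w)\,u_c^{2} \approx u_w|u_w| + 2\,|u_w|\,u_c \quad \text{for } |u_c| \ll |u_w|

so the wave–current cross term dominates the pure current contribution u_c^2.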
b) Let us denote the conditional distribution of C_3h given the significant wave height and spectral peak period (which define the sea state) by F_{C_{3h}|H_s T_p}(c | h, t). This is the short-term variability of the problem. The long-term variability is the variability related to H_s and T_p and is represented by the joint probability density function f_{H_s T_p}(h, t). The marginal distribution, the long-term distribution, of C_3h is then given by:

F_{C_{3h}}(c) = \int_h \int_t F_{C_{3h}|H_s T_p}(c \mid h, t)\, f_{H_s T_p}(h, t)\, dt\, dh
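A numerical sketch of this double integral; the short-term model and the joint density of (Hs, Tp) below are purely illustrative placeholders, not the Polar Bear models.

import numpy as np
from scipy.stats import lognorm, norm

def short_term_cdf(c, hs, tp):
    # Assumed short-term model: a Gumbel-type CDF for the 3-hour maximum crest height,
    # with location and scale depending on hs (placeholder expressions; tp not used here).
    loc = 1.1 * hs
    scale = 0.15 * hs
    return np.exp(-np.exp(-(c - loc) / scale))

def joint_pdf(hs, tp):
    # Assumed joint density of (Hs, Tp): lognormal Hs and a conditional normal Tp
    # (purely illustrative parameters).
    f_hs = lognorm.pdf(hs, s=0.6, scale=2.5)
    f_tp_given_hs = norm.pdf(tp, loc=5.0 + 2.0 * np.sqrt(hs), scale=1.5)
    return f_hs * f_tp_given_hs

hs_grid = np.linspace(0.1, 20.0, 200)
tp_grid = np.linspace(2.0, 25.0, 200)
HS, TP = np.meshgrid(hs_grid, tp_grid, indexing="ij")

def long_term_cdf(c):
    # Double integral over the (Hs, Tp) grid, evaluated with the trapezoidal rule
    integrand = short_term_cdf(c, HS, TP) * joint_pdf(HS, TP)
    return np.trapz(np.trapz(integrand, tp_grid, axis=1), hs_grid)

print(long_term_cdf(15.0))   # long-term CDF of C_3h at c = 15 m under the assumed models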
c) The requirement is 10^-4 per year. The corresponding exceedance probability per 3-hour period is q_3h = 10^-4/2920 (2920 is the number of 3-hour periods in one year).
The 10^-4 annual probability crest height is then found by solving:

F_{C_{3h}}(c_{0.0001}) = \exp\left[-\exp\left(-\frac{c_{0.0001}^{1.235} - 1.7935}{2.6015}\right)\right] = 1 - \frac{10^{-4}}{2920}
Solving this gives: c_{0.0001} = 22.4m.
With a storm surge of 1.1m and 1m to account for uncertainties, the required airgap becomes 24.5m.
⇒ This suggests that the airgap should be increased by about 1.5m.
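A short numerical check of the equation above, using the quoted coefficients (the bracketing interval for the root search is an assumption):

import numpy as np
from scipy.optimize import brentq

q3h = 1.0e-4 / 2920.0                      # target exceedance probability per 3-hour period

def residual(c):
    # CDF of the 3-hour maximum crest minus the target non-exceedance probability
    return np.exp(-np.exp(-(c**1.235 - 1.7935) / 2.6015)) - (1.0 - q3h)

c_0001 = brentq(residual, 5.0, 40.0)       # root search within an assumed bracketing interval
print(f"c_0.0001 = {c_0001:.1f} m")        # about 22.4 m
print(f"required airgap = {c_0001 + 1.1 + 1.0:.1f} m")   # adding storm surge and 1 m for uncertainties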
Problem 3
a) With the largest natural period as large as 5.5s, one cannot use a design wave approach without at least calculating the dynamic amplification using time domain simulations. There is reason for concern when the results of the rig owner's simplified method are merely 5-10% lower than the original characteristic loads. A more accurate assessment of the characteristic responses for design is required.
b) The steps of the environmental contour line method when the 10^-2 annual probability response is to be determined are the following:
1: First one has to determine the 10^-2 environmental contour of Hs and Tp. The curve to be used at Polar Bear is shown in Fig. 1 of the Metocean Report.
2: For the given response quantity one has to find the worst combination along the
contour line. This can be done by performing a limited number of 3-hour simulations
for a number of sea states along the contour.
3: As the worst sea state is identified, a large number (at least 20) of 3-hour simulations are carried out for this sea state.
4: A proper extreme value distribution for the 3-hour maximum of this sea state is obtained by fitting a proper probabilistic model (often Gumbel) to the observed 3-hour maxima.
5: The 10^-2 annual probability value for the response under consideration is estimated by the 0.9 probability value of the 3-hour extreme value distribution.
The environmental contour method is an approximate method; the approximation lies in the fact that we do not know for sure what would be the correct percentile to use.
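A minimal sketch of steps 4 and 5 (the 3-hour maxima below are synthetic stand-ins for time-domain simulation results, generated with arbitrary illustrative Gumbel parameters):

import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
# Stand-in for the 3-hour maxima obtained from the time-domain simulations (step 3)
maxima_3h = gumbel_r.rvs(loc=25.0, scale=2.0, size=30, random_state=rng)

loc, scale = gumbel_r.fit(maxima_3h)                  # step 4: fit a Gumbel model to the observed maxima
x_target = gumbel_r.ppf(0.9, loc=loc, scale=scale)    # step 5: 0.9 fractile as the 10^-2 annual estimate
print(f"estimated 10^-2 annual response: {x_target:.1f}")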
c) One obvious reason is his choice of frequency resolution. Using a frequency resolution of 1/1000 Hz will result in a simulated history that repeats itself after 1000s. This means that instead of considering 3-hour maxima he is considering 1000s maxima.
An unexpected result can also be due to statistical uncertainty when the sample size is as small as 10, but here that is secondary.
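A small sketch (component amplitudes and phases are random stand-ins for a synthesized wave record) showing why a frequency spacing of 1/1000 Hz makes the simulated record repeat itself after 1000 s:

import numpy as np

rng = np.random.default_rng(0)
df = 1.0 / 1000.0                          # frequency resolution [Hz]
freqs = np.arange(df, 0.3, df)             # discrete frequencies used in the synthesis
amps = rng.random(freqs.size)              # stand-in for the component amplitudes sqrt(2*S(f)*df)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

def eta(t):
    # Surface elevation synthesized as a sum of harmonic components
    return np.sum(amps * np.cos(2.0 * np.pi * freqs * t + phases))

# The two values are identical: the record repeats with period 1/df = 1000 s,
# so a "3-hour" simulation really only contains 1000 s of independent history.
print(eta(123.4), eta(123.4 + 1000.0))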
d) The maximum displacement occurs during the impact or immediately after, i.e. damping is not a critical issue for a preliminary assessment. That means we can use the dynamic amplification shown for various impulses in the given information.
Assuming that the impact load increases almost instantaneously to the maximum level and decays linearly to zero after t seconds, one can use the results for impulse type loading. From the figure enclosed in the text, we see that the form indicated above is curve c. What we need now is the ratio of impulse duration to natural period.
Since it is a modest impact, the duration cannot be much longer than 2-3s; most probably it will be shorter. Using 3s as a representative duration, the ratio becomes 3/5.5 ≈ 0.55. For curve c in the figure we see that the dynamic amplification is about 1.3.
⇒ This simple estimate suggests that the maximum dynamic displacement is 3.25m.
The largest uncertainty is the duration of the impulse. If it is much larger than assumed here, the shape of the impulse may become a significant uncertainty.
e) From the set of 10 3-hour extremes produced by the rig owner, a Gumbel model is fitted. This will be assumed to be the “true” distribution function:

F_{X_{3h}}(x) = \exp\left[-\exp\left(-\frac{x - \alpha}{\beta}\right)\right]

where α is the location parameter and β is the scale parameter of the fitted model.
We can estimate the uncertainty using Monte Carlo simulations. This includes the
following steps:
i) Generate 10 random numbers between 0 and 1. Replace the left-hand side of the equation above with each of these and calculate the corresponding values of X_3h.
ii) Fit a Gumbel model to this simulated sample. This will give a distribution which
will differ somewhat from the original distribution.
iii) Repeat i) and ii) m times, i.e. we have m different Gumbel distributions.
iv) Assuming the 0.9 probability level is the target level, we can estimate the target extreme value from each of these Gumbel distributions. From these estimates we can establish, say, a 5% lower bound and a 95% upper bound on the estimated extreme value.
The width of this band is a measure of the uncertainty caused by a limited sample
size.
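A minimal sketch of this Monte Carlo procedure (the “true” Gumbel parameters are assumed values, since the fitted numbers are not quoted in the solution):

import numpy as np
from scipy.stats import gumbel_r

alpha, beta = 25.0, 2.0                    # assumed "true" location and scale parameters
n, m = 10, 2000                            # sample size per experiment and number of repetitions
rng = np.random.default_rng(0)

estimates = np.empty(m)
for i in range(m):
    u = rng.random(n)                                   # step i): 10 uniform random numbers
    x = gumbel_r.ppf(u, loc=alpha, scale=beta)          # invert the CDF to obtain a sample of X_3h
    loc_i, scale_i = gumbel_r.fit(x)                    # step ii): refit a Gumbel model to the sample
    estimates[i] = gumbel_r.ppf(0.9, loc=loc_i, scale=scale_i)   # step iv): 0.9 fractile estimate

lo, hi = np.percentile(estimates, [5.0, 95.0])          # 5% lower and 95% upper bounds on the estimates
print(f"90% band for the 0.9-fractile estimate: [{lo:.2f}, {hi:.2f}]")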