This notebook demonstrates many of the PmagPy calculation functions such as those that rotate directions, return statistical parameters, and simulate data from specified distributions.
The notebook is one of a series of notebooks that demonstrate the functionality of PmagPy. The other notebooks are:
You are currently looking at static html, but you may want to run this notebook interactively. To do so, you'll need to first install Python and PmagPy (see the Cookbook for install instructions). You can then launch the notebook from your command line (see more details here).
To use the functions in this notebook, we have to import the PmagPy modules pmagplotlib, pmag and ipmag and some other handy functions. This is done in the following code.
To access the example data, these examples are meant to be run in the PmagPy-data directory (or the PmagPy directory for developers).
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import matplotlib.pyplot as plt # our plotting buddy
from pmagpy import convert_2_magic as convert
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
from IPython.display import Image
%matplotlib inline
Anisotropy of anhysteretic or other remanence can be converted to a tensor and used to correct natural remanence data for the effects of anisotropy of remanence acquisition. For example, directions may be deflected from the geomagnetic field direction or intensities may be biased by strong anisotropies in the magnetic fabric of the specimen. By imparting an anhysteretic or thermal remanence in many specific orientations, the anisotropy of remanence acquisition can be characterized and used for correction. We do this for anisotropy of anhysteretic remanence (AARM) by imparting an ARM in 9, 12 or 15 positions. Each ARM must be preceded by an AF demagnetization step. The 15 positions are shown in the k15_magic example.
For the 9 position scheme, aarm_magic assumes that the AARMs are imparted in positions 1,2,3, 6,7,8, 11,12,13. Someone (a.k.a. Josh Feinberg) has kindly made the measurements and saved them in an SIO formatted measurement file named arm_magic_example.dat in the datafile directory called aarm_magic. Note the special format of these files: the treatment column (column #2) has the position number (1,2,3,6, etc.) followed by either a "00" for the obligatory zero field baseline step or a "10" for the in-field step. These could also be '0' and '1'.
We need to first import these into the measurements format and then calculate the anisotropy tensors. These can then be plotted or used to correct paleointensity or directional data for anisotropy of remanence.
So, first follow the instructions in sio_magic to import the AARM data into the MagIC format. The DC field was 50 μT, the peak AC field was 180 mT, the location was "Bushveld" and the lab protocol was AF and Anisotropy. The naming convention used Option # 3 (see help menu).
Then we need to calculate the best-fit tensors and write them out to the specimens.txt MagIC table, which can be used to correct remanence data for anisotropy.
The aarm_magic program takes a measurements.txt formatted file with anisotropy of ARM data in it, calculates the tensors, rotates them into the desired coordinate system, and stores the data in a specimens.txt format file. To do this in a notebook, use ipmag.aarm_magic().
convert.sio('arm_magic_example.dat',dir_path='data_files/aarm_magic/',specnum=3,
location='Bushveld',codelist='AF:ANI',samp_con='3',
meas_file='aarm_measurements.txt',peakfield=180,labfield=50, phi=-1, theta=-1)
help(ipmag.aarm_magic)
ipmag.aarm_magic('aarm_measurements.txt',dir_path='data_files/aarm_magic/')
ipmag.aniso_magic_nb(infile='data_files/aarm_magic/specimens.txt')
help(ipmag.aniso_magic_nb)
Anisotropy of thermal remanence (ATRM) is similar to anisotropy of anhysteretic remanence (AARM) and the procedure for obtaining the tensor is also similar; therefore, the atrm_magic program is quite similar to aarm_magic. However, the SIO lab procedures for the two experiments are somewhat different. In the ATRM experiment, there is a single zero field step at the chosen temperature which is used as a baseline. We use only six positions (as opposed to nine for AARM) because of the additional risk of alteration at each temperature step. The positions are also different:
Image('data_files/Figures/atrm_meas.png')
The file atrm_magic_example.dat in the data_files/atrm_magic directory is an SIO formatted data file containing ATRM measurement data acquired at a temperature of 520°C. Note the special format of these files: the treatment column (column 2) has the temperature in centigrade followed by either a "00" for the obligatory zero field baseline step or a "10" for the first position, and so on. These could also be '0' and '1', etc.
Follow the instructions for sio_magic to import the ATRM data into the MagIC format. The DC field was 40 μT. The sample/site naming convention used option # 1 (see help menu) and the specimen and sample name are the same (specnum=0).
We will use ipmag.atrm_magic() to calculate the best-fit tensor and write out the MagIC tables which can be used to correct remanence data for the effects of remanent anisotropy.
convert.sio('atrm_magic_example.dat',dir_path='data_files/atrm_magic/',specnum=0,
location='unknown',codelist='T:ANI',samp_con='1',
meas_file='measurements.txt',labfield=40, phi=-1, theta=-1)
help(ipmag.atrm_magic)
ipmag.atrm_magic('measurements.txt',dir_path='data_files/atrm_magic')
[Essentials Appendix A.3.4] [command line version]
angle calculates the angle $\alpha$ between two declination,inclination pairs. It reads in the directions from the command line or from a file and calls pmag.angle() to do the calculation.
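The angle is simply the arc cosine of the dot product of the two unit vectors: $\alpha = \cos^{-1}(\hat{\mathbf{x}}_1 \cdot \hat{\mathbf{x}}_2)$.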
There are several ways to use this function from the notebook: one is to load the data into a Pandas DataFrame and convert the desired columns to arrays; another is to load the data directly into a NumPy array of the desired shape.
help(pmag.angle)
# Pandas way:
di=pd.read_csv('data_files/angle/angle.dat',delim_whitespace=True,header=None)
#rename column headers
di.columns=['Dec1','Inc1','Dec2','Inc2']
Here's the sort of data in the file:
di.head()
Now we will use pmag.angle() to calculate the angles.
# call pmag.angle
pmag.angle(di[['Dec1','Inc1']].values,di[['Dec2','Inc2']].values)
Here is the other (equally valid) way using np.loadtxt().
# Numpy way:
di=np.loadtxt('data_files/angle/angle.dat').transpose() # read in file
D1=di[0:2].transpose() # assign to first array
D2=di[2:].transpose() # assign to second array
pmag.angle(D1,D2) # call pmag.angle
You can always save your output using np.savetxt().
angles=pmag.angle(D1,D2) # assign the returned array to angles
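For example, a minimal sketch (the output file name here is just an illustration):
np.savetxt('angles.out',angles,fmt='%7.1f') # write one angle per line; 'angles.out' is a hypothetical name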
[Essentials Chapter 16] [command line version]
The program apwp calculates the expected paleolatitude, declination, and inclination at a specified location, age, and plate, based on the synthetic apparent polar wander paths of Besse and Courtillot (2002; see Essentials Chapter 16 for a complete discussion). Here we will calculate the expected direction for 100 million year old rocks at a locality in La Jolla Cove (Latitude: 33$^{\circ}$N, Longitude: 117$^{\circ}$W). Assume that we are on the North American Plate! (Note that there is no option for the Pacific plate in the program apwp, and that La Jolla was on the North American plate until a few million years ago, perhaps 6 Ma.)
Within the notebook we will call pmag.apwp.
help(pmag.apwp)
# here are the desired plate, latitude, longitude and age:
data=['NA',33,-117,100] # North American plate, lat and lon of San Diego at 100 Ma
pmag.apwp(data,print_results=True)
[Essentials Chapter 2] [command line version]
b_vdm converts a geomagnetic field intensity observed at the Earth's surface at a particular (paleo)latitude to the equivalent Virtual [Axial] Dipole Moment (VDM or VADM). We will call pmag.b_vdm() directly from within the notebook. [See also vdm_b.]
Here we use the function pmag.b_vdm() to convert an estimated paleofield value of 33 $\mu$T obtained from a lava flow at 22$^{\circ}$ N latitude to the equivalent Virtual Dipole Moment (VDM) in Am$^2$.
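Recall that the virtual dipole moment follows from the dipole field formula: $\mathrm{VDM} = \frac{4\pi r^3}{\mu_0}\,\frac{B}{\sqrt{1+3\sin^2\lambda}}$, where $r$ is the radius of the Earth and $\lambda$ is the (paleo)latitude.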
help(pmag.b_vdm)
print ('%7.1f'%(pmag.b_vdm(33e-6,22)*1e-21),' ZAm^2')
pmag.b_vdm(33e-6,22)*1e-21
[Essentials Chapter 13] [command line version]
bootams calculates bootstrap statistics for anisotropy tensor data in the form of:
x11 x22 x33 x12 x23 x13
It does this by selecting para-data sets and calculating the Hext average eigenparameters. It has an optional parametric bootstrap whereby the $\sigma$ for the data set as a whole is used to draw new para-data sets. The bootstrapped eigenparameters are assumed to be Kent distributed and the program calculates Kent error ellipses for each set of eigenvectors. It also estimates the standard deviations of the bootstrapped eigenvalues.
bootams reads in a file with data for the six tensor elements (x11 x22 x33 x12 x23 x13) for specimens and calls pmag.s_boot(), using a parametric or non-parametric bootstrap as desired. If all that is desired is the bootstrapped eigenparameters, pmag.s_boot() has all we need, but if the Kent ellipses are required, we can call pmag.sbootpars() to calculate these more derived products and print them out.
Note that every time the bootstrap program gets called, the output will be slightly different because this depends on calls to random number generators. If the answers are different by a lot, then the number of bootstrap calculations is too low. The number of bootstraps can be changed with the nb option below.
We can do all this from within the notebook as follows:
help(pmag.s_boot)
So we will read in the data, call pmag.s_boot() to get the bootstrapped eigenparameters, call pmag.sbootpars() to get the Kent error ellipses, and print out the results.
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
# with parametric bootstrap:
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss,ipar=1,nb=5000) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
[Essentials Chapter 2] [command line version]
cart_dir converts Cartesian coordinates (X,Y,Z) to polar coordinates (declination, inclination, intensity). We will call pmag.cart2dir().
help(pmag.cart2dir)
# read in data file from example file
cart=np.loadtxt('data_files/cart_dir/cart_dir_example.dat')
print ('Input: \n',cart) # print out the cartesian coordinates
# print out the results
dirs = pmag.cart2dir(cart)
print ("Output: ")
for d in dirs:
    print ('%7.1f %7.1f %8.3e'%(d[0],d[1],d[2]))
[Essentials Appendix B] [command line version]
Paleomagnetic data are frequently plotted in equal area projection. PmagPy has several plotting options that do this (e.g., eqarea), but occasionally it is handy to be able to convert the directions to X,Y coordinates directly, without plotting them at all. Here is an example using the datafile di_eq_example.dat.
The program di_eq calls pmag.dimap(), which we can use from within a Jupyter notebook.
help(pmag.dimap)
DIs=np.loadtxt('data_files/di_eq/di_eq_example.dat').transpose() # load in the data
print (pmag.dimap(DIs[0],DIs[1])) # call the function
[Essentials Chapter 9] and Changing coordinate systems [command line version]
Here we will convert D = 8.1, I = 45.2 from specimen coordinates to geographic-adjusted coordinates. The orientation of the laboratory arrow on the specimen was: azimuth = 347; plunge = 27. To do this we will call pmag.dogeo(). There is also pmag.dogeo_V() for arrays of data.
So let's start with pmag.dogeo().
help(pmag.dogeo)
pmag.dogeo(dec=8.1,inc=45.2,az=347,pl=27)
Now let's check out the version that takes many data points at once.
help(pmag.dogeo_V)
indata=np.loadtxt('data_files/di_geo/di_geo_example.dat')
print (indata)
Let's take a look at these data in equal area projection: (see eqarea for details)
ipmag.plot_net(1)
ipmag.plot_di(dec=indata.transpose()[0],inc=indata.transpose()[1],color='red',edge='black')
The data are highly scattered and we hope that the geographic coordinate system looks better! To find out try:
decs,incs=pmag.dogeo_V(indata)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='red',edge='black')
These data are clearly much better grouped.
And here they are printed out.
print(np.column_stack([decs,incs]))
[Essentials Chapter 11] [command line version]
di_rot rotates declination/inclination pairs to a new origin. We can call pmag.dodirot() for a single [dec,inc,Dbar,Ibar] entry or pmag.dodirot_V() for an array of dec, inc pairs. We can use the data from the di_geo example and rotate the geographic coordinate data so that the center of the distribution is the principal direction.
We do it like this:
di_block=np.loadtxt('data_files/di_rot/di_rot_example.txt') # read in some data
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=di_block,title='geographic',color='red',edge='black')
Now we calculate the principal direction using the method described in the goprinc section.
princ=pmag.doprinc(di_block)
And note we use pmag.dodirot_V to do the rotation.
help(pmag.dodirot_V)
rot_block=pmag.dodirot_V(di_block,princ['dec'],princ['inc'])
rot_block
And of course look at what we have done!
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=rot_block,color='red',title='rotated',edge='black')
[Essentials Chapter 9] [Changing coordinate systems] [command line version]
di_tilt can rotate a direction of Declination = 5.3 and Inclination = 71.6 to “stratigraphic” coordinates, assuming the strike was 135 and the dip was 21. The convention in this program is to use the dip direction, which is to the “right” of this strike.
We can perform this calculation by calling pmag.dotilt or pmag.dotilt_V() depending on if we have a single point or an array to rotate.
help(pmag.dotilt)
help(pmag.dotilt_V)
# read in some data
data=np.loadtxt('data_files/di_tilt/di_tilt_example.dat') # load up the data
di_block=data[:,[0,1]] # let's look at the data first!
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
Now we can rotate them
Dt,It=pmag.dotilt_V(data) # rotate them
ipmag.plot_net(1) # and take another look
ipmag.plot_di(dec=Dt,inc=It)
Use the handy function np.column_stack to pair the decs and incs together
np.column_stack((Dt,It)) # if you want to see the output:
[Essentials Chapter 2] [command line version]
di_vgp converts directions (declination,inclination) to Virtual Geomagnetic Pole positions. This is the inverse of vgp_di. To do so, we will call pmag.dia_vgp() from within the notebook.
help(pmag.dia_vgp)
data=np.loadtxt('data_files/di_vgp/di_vgp_example.dat') # read in some data
print (data)
The data are almost in the correct format, but there is no a95 field, so that will have to be inserted (as zeros).
a95=np.zeros(len(data))
a95
DIs=data.transpose()[0:2].transpose() # get the DIs
LatLons=data.transpose()[2:].transpose() # get the Lat Lons
newdata=np.column_stack((DIs,a95,LatLons)) # stitch them back together
print (newdata)
vgps=np.array(pmag.dia_vgp(newdata)) # get a tuple with lat,lon,dp,dm, convert to array
print (vgps.transpose()) # print out the vgps
[Essentials Chapter 2] [command line version]
If we assume a geocentric axial dipole, we can calculate an expected inclination at a given latitude and that is what dipole_pinc does. It calls pmag.pinc() and so will we to find the expected inclination at a paleolatitude of 24$^{\circ}$S!
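For a geocentric axial dipole the inclination follows the dipole formula $\tan I = 2\tan\lambda$, where $\lambda$ is the latitude.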
help(pmag.pinc)
lat=-24
pmag.pinc(lat)
Or as an array
lats=range(-90,100,10)
incs=pmag.pinc(lats)
plt.plot(incs,lats)
plt.ylim(100,-100)
plt.xlabel('Inclination')
plt.ylabel('Latitude')
plt.axhline(0,color='black')
plt.axvline(0,color='black');
[Essentials Chapter 2] [command line version]
dipole_plat is similar to dipole_pinc but calculates the paleolatitude from the inclination. We will call pmag.plat():
help(pmag.plat)
inc=42
pmag.plat(inc)
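This is just the dipole formula solved for latitude: $\lambda = \tan^{-1}\left(\frac{1}{2}\tan I\right)$.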
[Essentials Chapter 2] [command line version]
pmag.dir2cart() converts directions (declination, inclination, intensity) to Cartesian coordinates (X,Y,Z).
help(pmag.dir2cart)
# read in data file from example file
dirs=np.loadtxt('data_files/dir_cart/dir_cart_example.dat')
print ('Input: \n',dirs) # print out the directions
# print out the results
carts = pmag.dir2cart(dirs)
print ("Output: ")
for c in carts:
    print ('%8.4e %8.4e %8.4e'%(c[0],c[1],c[2]))
[Essentials Chapter 13] [command line version]
This program converts eigenparameters to the six tensor elements.
There is a function ipmag.eigs_s() which will do this in a notebook:
help(ipmag.eigs_s)
Ss=ipmag.eigs_s(infile="eigs_s_example.dat", dir_path='data_files/eigs_s')
for s in Ss:
    print (s)
[Essentials Appendix B] [command line version]
Data are frequently published as equal area projections and not listed in data tables. These data can be digitized as x,y data (assuming the outer rim is unity) and converted to approximate directions with the program eq_di. To use this program, install a graph digitizer (GraphClick from http://www.arizona-software.ch/graphclick/ works on Macs).
Digitize the data from the equal area projection saved in the file eqarea.png in the eq_di directory. You should only work on one hemisphere at a time (upper or lower) and save each hemisphere in its own file. Then you can convert the X,Y data to approximate dec and inc data - the quality of the data depends on your care in digitizing and the quality of the figure that you are digitizing.
Here we will try this out on a datafile already prepared, which contains the digitized data from the lower hemisphere of a plot. You can check your work with eqarea.
To do this in a notebook, we can use pmag.doeqdi().
help(pmag.doeqdi)
# read in the data into an array
# x is assumed first column, y, second
xy=np.loadtxt('data_files/eq_di/eq_di_example.dat').transpose()
decs,incs=pmag.doeqdi(xy[0],xy[1])
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='r',edge='black')
pmag.fcalc() returns the values of an F-test from an F table.
help(pmag.fcalc)
[Essentials Chapter 11] [command line version]
fisher draws $N$ directions from a Fisher distribution with specified $\kappa$ and a vertical mean. (For other directions see fishrot). To do this, we can just call the function pmag.fshdev() $N$ times.
help(pmag.fshdev)
# set the number, N, and kappa
N,kappa=100,20
# a basket to put our fish in
fish=[]
# get the Fisherian deviates
for k in range(N):
    d,i=pmag.fshdev(kappa) # draw one direction
    fish.append([d,i])
ipmag.plot_net(1)
ipmag.plot_di(di_block=fish,color='r',edge='black')
[Essentials Chapter 11] [command line version]
This program is similar to fisher, but allows you to specify the mean direction. This has been implemented as ipmag.fishrot().
help(ipmag.fishrot)
rotdi=ipmag.fishrot(k=50,n=5,dec=33,inc=41)
for di in rotdi:
    print ('%7.1f %7.1f'%(di[0],di[1]))
ipmag.plot_net(1)
ipmag.plot_di(di_block=rotdi)
Fisher statistics requires unimodal data (all in one direction with no reversals) but many paleomagnetic data sets are bimodal. To flip bimodal data into a single mode, we can use pmag.flip(). This function calculates the principal direction and flips all the 'reverse' data to the 'normal' direction along the principal axis.
help(pmag.flip)
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # take the dec,inc columns; one [dec,inc] row per direction
# flip the reverse directions to their normal antipodes
normal,flipped=pmag.flip(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red')
ipmag.plot_di(di_block=flipped,color='b')
[Essentials Chapter 11] [command line version]
This program generates sets of data drawn from a normal distribution with a given mean and standard deviation. It is just a wrapper for pmag.gaussdev(), which in turn calls numpy.random.normal(); calling the pmag version, which we have already imported, is easiest.
help(pmag.gaussdev)
N=1000
bins=100
norm=pmag.gaussdev(10,3,N)
plt.hist(norm,bins=bins,color='black',histtype='step',density=True)
plt.xlabel('Gaussian Deviates')
plt.ylabel('Frequency');
# alternatively:
ipmag.histplot(data=norm, xlab='Gaussian Deviates', save_plots=False, norm=-1)
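Because pmag.gaussdev() just wraps numpy.random.normal(), the same deviates could be drawn directly with NumPy (imported above as np):
norm_np=np.random.normal(10,3,N) # equivalent direct NumPy call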
[Essentials Chapter 12] [command line version]
gobing calculates Bingham statistics for sets of directional data (see documentation for eqarea_ell for nice examples). We do this by calling pmag.dobingham().
help(pmag.dobingham)
di_block=np.loadtxt('data_files/gobing/gobing_example.txt')
pmag.dobingham(di_block)
[Essentials Chapter 11] [command line version]
gofish calculates Fisher statistics for sets of directional data (see documentation for eqarea_ell for nice examples). This can be done with ipmag.fisher_mean().
help(ipmag.fisher_mean)
di_block=np.loadtxt('data_files/gofish/fishrot.out')
ipmag.fisher_mean(di_block=di_block)
There is also a function pmag.dir_df_fisher_mean() that calculates Fisher statistics on a Pandas DataFrame with directional data.
help(pmag.dir_df_fisher_mean)
# make the data frame
dir_df=pd.read_csv('data_files/gofish/fishrot.out',delim_whitespace=True, header=None)
dir_df.columns=['dir_dec','dir_inc']
pmag.dir_df_fisher_mean(dir_df)
[Essentials Chapter 12] [command line version]
With gokent we can calculate Kent statistics on sets of directional data (see documentation for eqarea_ell for nice examples).
This calls pmag.dokent() (see also eqarea_ell example)
help(pmag.dokent)
di_block=np.loadtxt('data_files/gokent/gokent_example.txt')
pmag.dokent(di_block,di_block.shape[0])
[Essentials Chapter 12] [command line version]
goprinc calculates the principal directions (and their eigenvalues) for sets of paleomagnetic vectors. It doesn't do any statistics on them, unlike the other programs. We will call pmag.doprinc():
help(pmag.doprinc)
di_block=np.loadtxt('data_files/goprinc/goprinc_example.txt')
pmag.doprinc(di_block)
[Essentials Chapter 2] [command line version]
This program gives geomagnetic field vector data for a specified place at a specified time. It has many built-in models including the IGRFs, GUFM and several archeomagnetic models. It calls the function ipmag.igrf() for this, so that is what we will do.
help(ipmag.igrf)
We will calculate the field for San Diego from 3000 BCE to 1950 CE in 50 year increments using the hfm.OL1.A1 model of Constable et al. (2016, doi: 10.1016/j.epsl.2016.08.015).
# make a list of desired dates
dates=range(-3000,2000,50) # list of dates in +/- Common Era, ending at 1950
mod = 'hfm10k' # choose the desired model
lat,lon,alt=33,-117,0 # desired latitude, longitude and altitude
Vecs=[] # list for Dec,Inc,Int outputs
for date in dates: # step through the dates
    Vecs.append(ipmag.igrf([date,alt,lat,lon],mod=mod)) # append to list
vector_df = pd.DataFrame(Vecs) # make it into a Pandas dataframe
vector_df.columns=['dec','inc','int']
vector_df['vadms']=pmag.b_vdm(vector_df.int.values*1e-9, lat) # calculate the VADMs
vector_df['dec_adj']=vector_df['dec']
vector_df.loc[vector_df.dec>180,['dec_adj']]=vector_df.dec-360 # adjust declinations to be -180 => 180
fig=plt.figure(1,figsize=(7,9)) # set up the figure
fig.add_subplot(411) # make 4 rows of plots, this is the first
plt.plot(dates,vector_df.dec_adj) # plot the adjusted declinations
plt.ylabel('Declination ($^{\circ}$)')
plt.title('Geomagnetic field evaluated at Lat: '+str(lat)+' / Lon: '+str(lon))
fig.add_subplot(412) # this is the second
plt.plot(dates,vector_df.inc) # plot the inclinations
plt.ylabel('Inclination ($^{\circ}$)')
fig.add_subplot(413)
plt.plot(dates,vector_df.int*1e-3) # plot the intensites (in uT instead of nT)
plt.ylabel('Intensity ($\mu$T)')
fig.add_subplot(414) # plot the VADMs
plt.plot(dates,vector_df.vadms*1e-21) # plot as ZAm^2
plt.ylabel('VADM (ZAm$^2$)')
plt.xlabel('Dates (CE)');
[Essentials Chapter 11] [command line version]
You can't get a meaningful average inclination from inclination-only data by simply averaging inclinations, because of the exponential relationship between inclinations and the true mean inclination for Fisher distributions (except exactly at the pole and the equator). So, McFadden and Reid (1982, doi: 10.1111/j.1365-246X.1982.tb04950.x) developed a maximum likelihood estimate of the true mean inclination in the absence of declinations. incfish.py is an implementation of that concept. It calls pmag.doincfish(), so that is what we will do.
help(pmag.doincfish)
incs=np.loadtxt('data_files/incfish/incfish_example_inc.dat')
pmag.doincfish(incs)
[Essentials Chapter 11] [command line version]
pca calculates best-fit lines, planes or Fisher means through selected treatment steps along with Kirschvink (1980, doi: 10.1111/j.1365-246X.1980.tb02601.x) MAD values. The file format is a simple space delimited file with specimen name, treatment step, intensity, declination and inclination. pca calls pmag.domean(), so that is what we will do here.
help(pmag.domean)
# read in data as space delimited file
data=pd.read_csv('data_files/pca/pca_example.txt',\
delim_whitespace=True,header=None)
# we need to add a column for quality
data['quality']='g'
# strip off the specimen name and reorder records
# from: int,dec,inc to: dec,inc,int
data=data[[1,3,4,2,'quality']].values.tolist()
pmag.domean(data,1,10,'DE-BFL')
[Essentials Chapter 16] [Essentials Appendix A.3.5] [command line version]
This program finds rotation poles for a specified location, age and destination plate, then rotates the point into the destination plate coordinates using the rotations and methods described in Essentials Appendix A.3.5.
This can be done for you using the function frp.get_pole() in the finite rotation pole module called pmagpy.frp. You then call pmag.pt_rot() to do the rotation. Let's do this to rotate the Cretaceous poles from Europe (same data as in the polemap_magic example) into South African coordinates.
# need to load this special module
import pmagpy.frp as frp
help(frp.get_pole)
Prot=frp.get_pole('eur',100)
Prot
help(pmag.pt_rot)
data=pd.read_csv('data_files/polemap_magic/locations.txt',sep='\t',header=1)
lats=data['pole_lat'].values
lons=data['pole_lon'].values
RLats,RLons=pmag.pt_rot(Prot,lats,lons)
And now we can plot them using pmagplotlib.plot_map()
Opts={}
Opts['sym']='wo' # sets the symbol
Opts['symsize']=10
Opts['proj']='ortho'
Opts['edge']='black'
Opts['lat_0']=90
Opts['details']={}
Opts['details']['fancy']=True # warning : this option takes a few minutes
if has_cartopy:
    plt.figure(1,(6,6)) # optional - make a map
    pmagplotlib.plot_map(1, RLats, RLons, Opts)
elif has_basemap:
    plt.figure(1,(6,6)) # optional - make a map
    pmagplotlib.plot_map_basemap(1, RLats, RLons, Opts)
[Essentials Chapter 13] [command line version]
This program converts the six tensor elements to eigenparameters - the inverse of eigs_s.
We can call the function pmag.doseigs() from the notebook.
help(pmag.doseigs)
Ss=np.loadtxt('data_files/s_eigs/s_eigs_example.dat')
for s in Ss:
    tau,V=pmag.doseigs(s)
    print ('%f %8.2f %8.2f %f %8.2f %8.2f %f %8.2f %8.2f'%\
           (tau[2],V[2][0],V[2][1],tau[1],V[1][0],V[1][1],tau[0],V[0][0],V[0][1]))
[Essentials Chapter 13] [command line version]
s_geo takes the six tensor elements in specimen coordinates and applies a rotation into geographic coordinates, similar to di_geo. To do this we will call pmag.dosgeo() from within the notebook.
help(pmag.dosgeo)
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
for s in Ss:
    print(pmag.dosgeo(s[0:6],s[6],s[7]))
[Essentials Chapter 13] [command line version]
s_hext calculates Hext (1963, doi: 10.2307/2333905) statistics for anisotropy data in the six tensor element format.
It calls pmag.dohext().
help(pmag.dohext)
We are working with data that have no sigmas attached to them and want to average all the values in the file together. Let's look at the rotated data from the s_geo example.
# read in the data
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
# make a container for the rotated S values
SGeos=[]
for s in Ss:
    SGeos.append(pmag.dosgeo(s[0:6],s[6],s[7]))
nf,sigma,avs=pmag.sbar(SGeos) # get the average over all the data
hpars=pmag.dohext(nf,sigma,avs)
print(hpars)
[Essentials Chapter 13] [command line version]
s_tilt takes the six tensor elements in geographic coordinates and applies a rotation, similar to di_tilt, into stratigraphic coordinates. It calls pmag.dostilt(). But be careful! s_tilt.py (the command line program) assumes that the bedding information is the strike, with the dip to the right of strike, unlike pmag.dostilt(), which assumes that the azimuth is the dip direction.
help(pmag.dostilt)
# note that the data in this example are Ss and strike and dip (not bed_az,bed_pl)
Ss=np.loadtxt('data_files/s_tilt/s_tilt_example.dat')
for s in Ss:
    print(pmag.dostilt(s[0:6],s[6]+90.,s[7])) # make the bedding azimuth the dip direction, not the strike
[Essentials Chapter 14] [command line version]
This program reads in data files with vgp_lon, vgp_lat and optional kappa, N, and site latitude. It allows some filtering based on the requirements of the study, such as a minimum Fisher k or the variable VGP cutoff of Vandamme (1994).
The filtering is just what Pandas was designed for, so we can call pmag.scalc_vgp_df(), which works on a suitably constructed Pandas DataFrame.
help(pmag.scalc_vgp_df)
To just calculate the value of S (without the within site scatter) we read in a data file and attach the correct headers to it depending on what is in it.
vgp_df=pd.read_csv('data_files/scalc/scalc_example.txt',delim_whitespace=True,header=None)
if len(list(vgp_df.columns))==2:
    vgp_df.columns=['vgp_lon','vgp_lat']
    vgp_df['dir_k'],vgp_df['dir_n'],vgp_df['lat']=0,0,0
else:
    vgp_df.columns=['vgp_lon','vgp_lat','dir_k','dir_n_samples','lat']
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
To apply a cutoff for the Fisher k value, we just filter the DataFrame prior to calculating $S_B$. Let's filter for kappa>50:
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,kappa=50)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
To apply the Vandamme (1994) approach, we set v to True:
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,v=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
To flip the "reverse" directions, we set anti to True:
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
And to calculate $S_B$ relative to the spin axis, we set spin to True:
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
[Essentials Chapter 14] [command line version]
This program does the same thing as scalc, but reads in a MagIC formatted file. So, we can do that easy-peasy.
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
Like pmag.flip(), pmag.separate_directions() divides a directional data set into two modes. Unlike pmag.flip(), it returns the two modes (e.g., normal and reverse) separately.
help(pmag.separate_directions)
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # take the dec,inc columns; one [dec,inc] row per direction
# separate the directions into normal and reverse modes
normal,reverse=pmag.separate_directions(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=normal,color='red')
ipmag.plot_di(di_block=reverse,color='b')
[Essentials Chapter 7] [command line version]
This program reads in dec/inc data and "squishes" the inclinations using the formula from King (1955, doi: 10.1111/j.1365-246X.1955.tb06558.x): $\tan(I_o)=\mathrm{flat}\cdot\tan(I_f)$, where flat is the flattening factor. [See also unsquish]. We can call pmag.squish() from within the notebook.
help(pmag.squish)
di_block=np.loadtxt('data_files/squish/squish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=0.4
fincs=pmag.squish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Original',color='blue')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Squished',color='red')
[Essentials Chapter 11] [command line version]
This program just calculates the N, mean, sum, sigma and sigma % for data. There are numerous ways to do that in Numpy, so let's just use those.
data=np.loadtxt('data_files/gaussian/gauss.out')
print (data.shape[0],data.mean(),data.sum(),data.std())
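Note that sigma % is not printed above; assuming it means the standard deviation expressed as a percentage of the mean, it is a one-liner:
print ('%7.2f'%(100.*data.std()/data.mean())) # sigma %, assuming sigma% = 100*std/mean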
[Essentials Chapter 15] [MagIC Database] [command line version]
This program is a dinosaur and can be much more easily done using the wonders of Pandas and matplotlib, as demonstrated here.
# read in the data
data=pd.read_csv('data_files/strip_magic/sites.txt',sep='\t',header=1)
# see what's there
data.columns
# you might have to use **df.dropna()** to clean off unwanted NaN lines or other data massaging
# but not for this example
plt.figure(1,(10,4)) # make the figure
plt.plot(data.age,data.vgp_lat,'b-') # plot as blue line
plt.plot(data.age,data.vgp_lat,'ro',markeredgecolor="black") # plot as red dots with black rims
plt.xlabel('Age (Ma)') # label the time axis
plt.ylabel('VGP Lat.$^{\circ}$')
plt.ylim(-90,90) # set the plot limits
plt.axhline(color='black'); # put on a zero line
[Essentials Chapter 9] [command line version]
Paleomagnetists often use the sun to orient their cores, especially if the sampling site is strongly magnetic and would deflect the magnetic compass. The information required is: where are you (e.g., latitude and longitude), what day is it, what time is it in Greenwich Mean Time (a.k.a. Universal Time), and where is the sun (e.g., the antipode of the angle the shadow of a gnomon makes with the desired direction)?
This calculation is surprisingly accurate and is implemented in the function pmag.dosundec().
help(pmag.dosundec)
Say you (or your elderly colleague) were located at 35$^{\circ}$ N and 33$^{\circ}$ E. The local time was three hours ahead of Universal Time. The shadow angle for the drilling direction was 68$^{\circ}$ measured at 16:09 on May 23, 1994. pmag.dosundec() requires a dictionary with the necessary information:
sundata={'delta_u':3,'lat':35,'lon':33,\
'date':'1994:05:23:16:9','shadow_angle':68}
print ('%7.1f'%(pmag.dosundec(sundata)))
[Essentials Chapter 16] [command line version]
Sometimes it is useful to generate a distribution of synthetic geomagnetic field vectors that you might expect to find from paleosecular variation of the geomagnetic field. The program tk03 generates distributions of field vectors from the PSV model of Tauxe and Kent (2004, doi: 10.1029/145GM08). This program was implemented for notebook use as ipmag.tk03(). [See also find_ei].
help(ipmag.tk03)
di_block=ipmag.tk03(lat=30)
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
It is at times handy to be able to generate a uniformly distributed set of directions (or geographic locations). This is done using a technique described by Fisher et al. (Fisher, N. I., Lewis, T., & Embleton, B. J. J. (1987). Statistical Analysis of Spherical Data. Cambridge: Cambridge University Press). uniform does that by calling pmag.get_unf().
help(pmag.get_unf)
di_block=pmag.get_unf()
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
[Essentials Chapter 7] [Essentials Chapter 16] [command line version]
This program is just the inverse of squish in that it takes "squished" data and "unsquishes" them, assuming a King (1955, doi: 10.1111/j.1365-246X.1955.tb06558.x) relationship: $\tan(I_o)=\mathrm{flat}\cdot\tan(I_f)$, so $\tan(I_f) = \tan(I_o)/\mathrm{flat}$.
It calls pmag.unsquish().
help(pmag.unsquish)
di_block=np.loadtxt('data_files/unsquish/unsquish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=.4
fincs=pmag.unsquish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Squished',color='red')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Unsquished',color='blue')
[Essentials Chapter 2] [command line version]
vdm_b is the inverse of b_vdm in that it converts a virtual [axial] dipole moment (VDM or VADM) to a predicted geomagnetic field intensity observed at the Earth's surface at a particular (paleo)latitude. This program calls pmag.vdm_b().
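Rearranging the dipole formula used by b_vdm gives: $B = \frac{\mu_0\,\mathrm{VDM}}{4\pi r^3}\sqrt{1+3\sin^2\lambda}$.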
help(pmag.vdm_b)
print ('%7.1f microtesla'%(pmag.vdm_b(7.159e22,22)*1e6))
[Essentials Chapter 2] [command line version]
vector_mean calculates the vector mean for a set of vectors in polar coordinates (e.g., declination, inclination, intensity). This is similar to the Fisher mean (gofish) but uses the vector length instead of unit vectors. It calls pmag.vector_mean().
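The vector mean direction is simply that of the resultant vector $\mathbf{R} = \sum_i \mathbf{x}_i$ of the full (non-unit) vectors, and $R = |\mathbf{R}|$ is its length.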
help(pmag.vector_mean)
data=np.loadtxt('data_files/vector_mean/vector_mean_example.dat')
Dir,R=pmag.vector_mean(data)
print (('%i %7.1f %7.1f %f')%(data.shape[0],Dir[0],Dir[1],R))
[Essentials Chapter 2] [command line version]
We use vgp_di to convert virtual geomagnetic pole positions to predicted directions at a given location. [See also di_vgp].
This program uses the function pmag.vgp_di().
help(pmag.vgp_di)
d,i=pmag.vgp_di(68,191,33,243)
print ('%7.1f %7.1f'%(d,i))
[Essentials Chapter 11] [command line version]
There are several different ways of testing whether two sets of directional data share a common mean. One popular (although perhaps not the best) way is to use Watson's F test (Watson, 1956, doi: 10.1111/j.1365-246X.1956.tb05560.x). [See also watsons_v or Lisa Tauxe's bootstrap way: common_mean].
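Watson's statistic is $F = (N-2)\,\frac{R_1+R_2-R}{N-R_1-R_2}$, where $R_1$ and $R_2$ are the resultant vector lengths of the two data sets, $R$ is the resultant length of the combined data, and $N$ is the total number of directions; it is compared against the critical value for 2 and $2(N-2)$ degrees of freedom.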
If you still want to use Watson's F, then try pmag.watsons_f() for this.
help(pmag.watsons_f)
DI1=np.loadtxt('data_files/watsons_f/watsons_f_example_file1.dat')
DI2=np.loadtxt('data_files/watsons_f/watsons_f_example_file2.dat')
F,Fcrit=pmag.watsons_f(DI1,DI2)
print ('%7.2f %7.2f'%(F,Fcrit))