# Visualizations and more analysis with PmagPy¶

This notebook demonstrates PmagPy functions that can be used to visualize data as well as those that conduct statistical tests that have associated visualizations.

## Guide to PmagPy¶

The notebook is one of a series of notebooks that demonstrate the functionality of PmagPy. The other notebooks are:

• PmagPy_introduction.ipynb This notebook introduces PmagPy and lists the functions that are demonstrated in the other notebooks.
• PmagPy_calculations.ipynb This notebook demonstrates many of the PmagPy calculation functions such as those that rotate directions, return statistical parameters, and simulate data from specified distributions.
• PmagPy_MagIC.ipynb This notebook demonstrates how PmagPy can be used to read and write data to and from the MagIC database format including conversion from many individual lab measurement file formats.

## Running this notebook¶

You are currently looking at static html, but you may want to run this notebook interactively. To do so, you'll need to first install Python and PmagPy (see the Cookbook for install instructions). You can then launch the notebook from your command line (see more details here).

## Get started¶

To use the functions in this notebook, we have to import the PmagPy modules pmagplotlib, pmag and ipmag and some other handy functions. This is done in the following code.

In order to access the example data, these examples are meant to be run in the PmagPy-data directory (PmagPy directory for developers).

In [1]:
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import pmagpy.contribution_builder as cb
from pmagpy import convert_2_magic as convert
import matplotlib.pyplot as plt # our plotting buddy
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
# test if Basemap and/or cartopy is installed
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
# test if xlwt is installed (allows you to export to excel)
try:
    import xlwt
    has_xlwt = True
except ImportError:
    has_xlwt = False
# This allows you to make matplotlib plots inside the notebook.
%matplotlib inline
from IPython.display import Image
import os

print('All modules imported!')

All modules imported!


### Functions demonstrated within PmagPy_plots_analysis.ipynb:¶

• Functions in PmagPy_plots_analysis.ipynb

• ani_depthplot : plots anisotropy data against depth in stratigraphic section (Xmas tree plots)
• aniso_magic : makes plots of anisotropy data and bootstrapped confidences
• biplot_magic : plots different columns against each other in MagIC formatted data files
• chi_magic : plots magnetic susceptibility data in MagIC format as a function of field, frequency or temperature
• common_mean : graphical approach to testing two sets of directions for common mean using bootstrap
• cont_rot : makes plots of continents after rotation to specified coordinate system
• core_depthplot : plots MagIC formatted measurement data versus sample depth
• curie : makes plots of Curie Temperature data and provides estimates for Tc
• dayplot_magic : makes Day et al. (1977) and other plots with hysteresis statistics
• dmag_magic : plots remanence against demagnetization step for MagIC formatted files
• eqarea and eqarea_magic : makes equal area projections for directions
• eqarea_ell : makes equal area projections for directions with specified confidence ellipses
• find_ei : finds the inclination unflattening factor that unsquishes directions to match TK03 distribution
• fishqq: makes a Quantile-Quantile plot for directions against uniform and exponential distributions
• foldtest & foldtest_magic : finds tilt correction that maximizes concentration of directions, with bootstrap confidence bounds.
• forc_diagram: plots FORC diagrams for both conventional and irregular FORCs
• hysteresis_magic : makes plots of hysteresis data (not FORCs).
• irm_unmix : analyzes IRM acquisition data in terms of coercivity distributions
• irmaq_magic : plots IRM acquisition data
• lnp_magic : plots lines and planes for site level data and calculates best fit mean and alpha_95
• lowes : makes a plot of the Lowes spectrum for a geomagnetic field model
• lowrie and lowrie_magic : makes plots of Lowrie's (1990) 3D-IRM demagnetization experiments
• plot_cdf and plot_2cdfs : makes a cumulative distribution plot of data
• plotdi_a : makes equal area plots of directions and their $\alpha_{95}$s
• plot_geomagia : makes plots from files downloaded from the geomagia website
• plot_mag_map : makes a color contour plot of geomagnetic field models
• plot_magic_keys : plots data from MagIC formatted data files
• plot_map_pts : plots points on maps
• plot_ts : makes a plot of the desired Geomagnetic Reversal time scale
• polemap_magic : reads in MagIC formatted file with paleomagnetic poles and plots them
• qqplot : makes a Quantile-Quantile plot for data against a normal distribution
• qqunf : makes a Quantile-Quantile plot for data against a uniform distribution
• quick_hyst : makes hysteresis plots
• revtest & revtest_magic : performs a bootstrap reversals test
• thellier_magic : makes plots of Thellier-Thellier data.
• vgpmap_magic : reads in MagIC formatted file with virtual geomagnetic poles and plots them
• watsons_v : makes a graph for Watson's V test for common mean
• zeq and zeq_magic : makes quick zijderveld plots for measurement data
• Maps:

• cont_rot : makes plots of continents after rotation to specified coordinate system
• plot_mag_map : makes a color contour plot of geomagnetic field models
• plot_map_pts : plots points on maps
• polemap_magic : reads in MagIC formatted file with paleomagnetic poles and plots them
• vgpmap_magic : reads in MagIC formatted file with virtual geomagnetic poles and plots them

## Figures¶

• The plotting functions make plots to the screen (using the %matplotlib inline magic command), but all matplotlib plots can be saved with the command:

plt.savefig('PATH_TO_FILE_NAME.FMT')

and then viewed in the notebook with:

Image('PATH_TO_FILE_NAME.FMT')
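As a standalone sketch of that save-then-view pattern (using the non-interactive Agg backend and a hypothetical file name, so it also runs outside the notebook):

```python
import os
import tempfile
import matplotlib
matplotlib.use('Agg')  # headless backend so this sketch runs outside a notebook
import matplotlib.pyplot as plt

# hypothetical path; in practice choose your own file name and format
fname = os.path.join(tempfile.gettempdir(), 'example_plot.png')
plt.plot([0, 1, 2], [0, 1, 4], 'ro')
plt.savefig(fname)  # format is inferred from the file extension
plt.close()
print(os.path.exists(fname))
```

In the notebook itself, `%matplotlib inline` renders the figure directly, so `savefig` is only needed when you want a file on disk.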

## ani_depthplot¶

Anisotropy data can be plotted versus depth. The program ani_depthplot uses MagIC formatted data tables. Bulk susceptibility measurements can also be plotted if they are available in a measurements.txt formatted file.

In this example, we will use the data from Tauxe et al. (2015, doi:10.1016/j.epsl.2014.12.034) measured on samples obtained during Expedition 318 of the Integrated Ocean Drilling Program. To get the entire dataset, go to the MagIC database at: https://www2.earthref.org/MagIC/doi/10.1016/j.epsl.2014.12.034. Download the data set and unpack it with ipmag.download_magic.

We will use the ipmag.ani_depthplot() version of this program.

In [2]:
help(ipmag.ani_depthplot)

Help on function ani_depthplot in module pmagpy.ipmag:

ani_depthplot(spec_file='specimens.txt', samp_file='samples.txt', meas_file='measurements.txt', site_file='sites.txt', age_file='', sum_file='', fmt='svg', dmin=-1, dmax=-1, depth_scale='core_depth', dir_path='.', contribution=None)
returns matplotlib figure with anisotropy data plotted against depth
available depth scales: 'composite_depth', 'core_depth' or 'age' (you must provide an age file to use this option).
You must provide valid specimens and sites files, and either a samples or an ages file.
You may additionally provide measurements and a summary file (csv).

Parameters
----------
spec_file : str, default "specimens.txt"
samp_file : str, default "samples.txt"
meas_file : str, default "measurements.txt"
site_file : str, default "sites.txt"
age_file : str, default ""
sum_file : str, default ""
fmt : str, default "svg"
format for figures, ["svg", "jpg", "pdf", "png"]
dmin : number, default -1
minimum depth to plot (if -1, default to plotting all)
dmax : number, default -1
maximum depth to plot (if -1, default to plotting all)
depth_scale : str, default "core_depth"
scale to plot, ['composite_depth', 'core_depth', 'age'].
if 'age' is selected, you must provide an ages file.
dir_path : str, default "."
directory for input files
contribution : cb.Contribution, default None
data from files

Returns
---------
plot : matplotlib plot, or False if no plot could be created
name : figure name, or error message if no plot could be created



And here we go:

In [3]:
ipmag.ani_depthplot(dir_path='data_files/ani_depthplot');

-I- Using online data model
-I- Getting method codes from earthref.org
-I- Importing controlled vocabularies from https://earthref.org


## aniso_magic¶

Samples were collected from the eastern margin of a dike oriented with a bedding pole declination of 110∘ and dip of 2∘. The data have been imported into a MagIC (data model 3) formatted file named dike_specimens.txt.

We will make a plot of the data using ipmag.aniso_magic_nb(), using the site parametric bootstrap option and plot out the bootstrapped eigenvectors. We will also draw on the trace of the dike.

In [4]:
help(ipmag.aniso_magic)

Help on function aniso_magic in module pmagpy.ipmag:

aniso_magic(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', ipar=1, ihext=1, ivec=1, iplot=0, isite=1, iboot=1, vec=0, Dir=[], PDir=[], comp=0, user='', fmt='png', crd='s', verbose=True, plots=0, num_bootstraps=1000, dir_path='.', input_dir_path='')


In [5]:
help(ipmag.aniso_magic_nb)

Help on function aniso_magic_nb in module pmagpy.ipmag:

aniso_magic_nb(infile='specimens.txt', samp_file='samples.txt', site_file='sites.txt', verbose=True, ipar=False, ihext=True, ivec=False, isite=False, iloc=False, iboot=False, vec=0, Dir=[], PDir=[], crd='s', num_bootstraps=1000, dir_path='.', fignum=1, save_plots=True, interactive=False, fmt='png')
Makes plots of anisotropy eigenvectors, eigenvalues and confidence bounds
All directions are on the lower hemisphere.

Parameters
__________
infile : specimens formatted file with aniso_s data
samp_file : samples formatted file with sample => site relationship
site_file : sites formatted file with site => location relationship
verbose : if True, print messages to output
confidence bounds options:
ipar : if True - perform parametric bootstrap - requires non-blank aniso_s_sigma
ihext : if True - Hext ellipses
ivec : if True - plot bootstrapped eigenvectors instead of ellipses
isite : if True plot by site, requires non-blank samp_file
#iloc : if True plot by location, requires non-blank samp_file, and site_file  NOT IMPLEMENTED
iboot : if True - bootstrap ellipses
vec : eigenvector for comparison with Dir
Dir : [Dec,Inc] list for comparison direction
PDir : [Pole_dec, Pole_Inc] for pole to plane for comparison
green dots are on the lower hemisphere, cyan are on the upper hemisphere
crd : ['s','g','t'], coordinate system for plotting whereby:
s : specimen coordinates, aniso_tile_correction = -1, or unspecified
g : geographic coordinates, aniso_tile_correction = 0
t : tilt corrected coordinates, aniso_tile_correction = 100
num_bootstraps : how many bootstraps to do, default 1000
dir_path : directory path
fignum : matplotlib figure number, default 1
save_plots : bool, default True
if True, create and save all requested plots
interactive : bool, default False
interactively plot and display for each specimen
(this is best used on the command line only)
fmt : str, default "svg"
format for figures, [svg, jpg, pdf, png]


In [6]:
ipmag.aniso_magic_nb(infile='dike_specimens.txt',dir_path='data_files/aniso_magic',
iboot=1,ihext=0,ivec=1,PDir=[120,10],ipar=1, save_plots=False) # compare dike directions with plane of dike with pole of 120,10

-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
desired coordinate system not available, using available:  g

Out[6]:
(True, [])

The specimen eigenvectors are plotted in the top diagram with the usual convention that squares are the V$_1$ directions, triangles are the V$_2$ directions and circles are the V$_3$ directions. All directions are plotted on the lower hemisphere. The bootstrapped eigenvectors are shown in the middle diagram. Cumulative distributions of the bootstrapped eigenvalues are shown in the bottom plot with the 95% confidence bounds plotted as vertical lines. It appears that the magma was moving toward the north and slightly upward along the dike.

There are more options to ipmag.aniso_magic_nb() that come in handy. In particular, one often wishes to test if a particular fabric is isotropic (the three eigenvalues cannot be distinguished), or if a particular eigenvector is parallel to some direction. For example, undisturbed sedimentary fabrics are oblate (the maximum and intermediate directions cannot be distinguished from one another, but are distinct from the minimum) and the eigenvector associated with the minimum eigenvalue is vertical. These criteria can be tested using the distributions of bootstrapped eigenvalues and eigenvectors.
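The eigenvalue half of such a test can be sketched with plain NumPy: bootstrap the mean of normalized eigenvalues and check whether the 95% bounds separate. This is only an illustration of the idea with made-up eigenvalues mimicking an oblate fabric, not PmagPy's implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical oblate fabric: tau1 and tau2 indistinguishable, tau3 distinct
true_tau = np.array([0.355, 0.355, 0.290])
taus = true_tau + rng.normal(0, 0.004, (50, 3))   # 50 "specimens" with noise
taus /= taus.sum(axis=1, keepdims=True)           # normalize eigenvalues to sum to 1

def boot_bounds(data, nb=1000):
    """95% bootstrap bounds on the mean of each eigenvalue."""
    idx = rng.integers(0, len(data), (nb, len(data)))
    means = data[idx].mean(axis=1)                # nb pseudo-sample means
    return np.percentile(means, [2.5, 97.5], axis=0)

lo, hi = boot_bounds(taus)
tau3_distinct = hi[2] < min(lo[0], lo[1])  # is tau3 separated from tau1 and tau2?
print(tau3_distinct)
```

If the bounds for two eigenvalues overlap, those eigenvalues cannot be distinguished at the 95% level, which is the criterion described above.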

The following session illustrates how this is done, using the data in the test file sed_specimens.txt in the aniso_magic directory.

In [7]:
ipmag.aniso_magic_nb(infile='sed_specimens.txt',dir_path='data_files/aniso_magic',
iboot=1,ihext=0,ivec=1,Dir=[0,90],vec=3,ipar=1, save_plots=False) # parametric bootstrap and compare V3 with vertical

-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
-W- Couldn't read in samples data
-I- Make sure you've provided the correct file name
desired coordinate system not available, using available:  g

Out[7]:
(True, [])

The top three plots are as in the dike example before, showing a clear triaxial fabric (all three eigenvalues and associated eigenvectors are distinct from one another). In the lower three plots we have the distributions of the three components of the chosen axis, V$_3$, their 95% confidence bounds (dashed lines) and the components of the designated direction (solid line). This direction is also shown in the equal area projection above as a red pentagon. The minimum eigenvector is not vertical in this case.

## biplot_magic¶

It is often useful to plot measurements from one experiment against another. For example, rock magnetic studies of sediments often plot the IRM against the ARM or magnetic susceptibility. All of these types of measurements can be imported into a single measurements formatted file; the MagIC method codes and other clues (lab fields, etc.) can then be used to differentiate one measurement type from another.

Data were obtained by Hartl and Tauxe (1997, doi: 10.1111/j.1365-246X.1997.tb04082.x) from a Paleogene core from 28$^{\circ}$ S (DSDP Site 522) and used for a relative paleointensity study. IRM, ARM, magnetic susceptibility and remanence data were uploaded to the MagIC database. The MagIC measurements formatted file for this study (which you can get from https://earthref.org/MagIC/doi/10.1111/j.1365-246X.1997.tb04082.x and unpack with download_magic) is saved in data_files/biplot_magic/measurements.txt.

We can create these plots using Pandas. The key to what the measurements mean is in the MagIC method codes, so we can first get a unique list of all the available method_codes, then plot the ones we are interested in against each other. Let's read the data file into a Pandas DataFrame and extract the method codes to see what we have:

In [8]:
# read in the data
meas_df=pd.read_csv('data_files/biplot_magic/measurements.txt',sep='\t',header=1)
# get the method_codes and print
print(meas_df.method_codes.unique())
# take a look at the top part of the measurements data frame
meas_df.head()

['LT-AF-Z' 'LT-AF-I' 'LT-IRM' 'LP-X']

Out[8]:
citations dir_dec dir_inc experiment magn_mass meas_temp measurement method_codes quality specimen standard susc_chi_mass treat_ac_field treat_dc_field treat_step_num treat_temp
0 This study 268.5 -41.2 15-1-013:LP-AF-DIR 0.000003 300 15-1-013:LP-AF-DIR-1 LT-AF-Z g 15-1-013 u NaN 0.015 0.00000 1.0 300
1 This study NaN NaN 15-1-013:LP-ARM 0.000179 300 15-1-013:LP-ARM-2 LT-AF-I g 15-1-013 u NaN 0.080 0.00005 2.0 300
2 This study NaN NaN 15-1-013:LP-IRM 0.003600 300 15-1-013:LP-IRM-3 LT-IRM g 15-1-013 u NaN 0.000 1.00000 3.0 300
3 This study NaN NaN 15-1-013:LP-X NaN 300 15-1-013:LP-X-4 LP-X NaN 15-1-013 NaN 2.380000e-07 0.010 0.00000 4.0 300
4 This study 181.0 68.6 15-1-022:LP-AF-DIR 0.000011 300 15-1-022:LP-AF-DIR-5 LT-AF-Z g 15-1-022 u NaN 0.015 0.00000 5.0 300

These are: an AF demag step (LT-AF-Z), an ARM (LT-AF-I), an IRM (LT-IRM) and a susceptibility (LP-X). Now we can fish out data for each method, merge them by specimen, dropping any missing measurements and finally plot one against the other.

In [9]:
# get the IRM data
IRM=meas_df[meas_df.method_codes.str.contains('LT-IRM')]
IRM=IRM[['specimen','magn_mass']] #trim the data frame
IRM.columns=['specimen','IRM'] # rename the column
# do the same for the ARM data
ARM=meas_df[meas_df.method_codes.str.contains('LT-AF-I')]
ARM=ARM[['specimen','magn_mass']]
ARM.columns=['specimen','ARM']
# and the magnetic susceptibility
CHI=meas_df[meas_df.method_codes.str.contains('LP-X')]
CHI=CHI[['specimen','susc_chi_mass']]
CHI.columns=['specimen','CHI']
# merge IRM ARM data by specimen
RMRMs=pd.merge(IRM,ARM,on='specimen')
# add on the susceptility data
RMRMs=pd.merge(RMRMs,CHI,on='specimen')


Now we are ready to make the plots.

In [10]:
fig=plt.figure(1, (12,4)) # make a figure
fig.add_subplot(131) # make the first in a row of three subplots
plt.plot(RMRMs.IRM,RMRMs.ARM,'ro',markeredgecolor='black')
plt.xlabel('IRM (Am$^2$/kg)') # label the X axis
plt.ylabel('ARM (Am$^2$/kg)') # and the Y axis
fig.add_subplot(132)# make the second in a row of three subplots
plt.plot(RMRMs.IRM,RMRMs.CHI,'ro',markeredgecolor='black')
plt.xlabel('IRM (Am$^2$/kg)')
plt.ylabel('$\chi$ (m$^3$/kg)')
fig.add_subplot(133)# and the third in a row of three subplots
plt.plot(RMRMs.ARM,RMRMs.CHI,'ro',markeredgecolor='black')
plt.xlabel('ARM (Am$^2$/kg)')
plt.ylabel('$\chi$ (m$^3$/kg)');


## chi_magic¶

It is sometimes useful to measure susceptibility as a function of temperature, applied field and frequency. Here we use a data set that came from the Tiva Canyon Tuff sequence (see Jackson et al., 2006, doi: 10.1029/2006JB004514).

The program chi_magic reads in a MagIC formatted file and makes various plots. We can also do this ourselves using Pandas.

In [11]:
# with ipmag
ipmag.chi_magic('data_files/chi_magic/measurements.txt', save_plots=False)

Not enough data to plot IRM-Kappa-2352

Out[11]:
(True, [])
In [12]:
# read in data from data model 3 example file using pandas
chi_data=pd.read_csv('data_files/chi_magic/measurements.txt',sep='\t',header=1)
print (chi_data.columns)
# get arrays of available temps, frequencies and fields
Ts=np.sort(chi_data.meas_temp.unique())
Fs=np.sort(chi_data.meas_freq.unique())
Bs=np.sort(chi_data.meas_field_ac.unique())

Index(['experiment', 'specimen', 'measurement', 'treat_step_num', 'citations',
'instrument_codes', 'method_codes', 'meas_field_ac', 'meas_freq',
'meas_temp', 'timestamp', 'susc_chi_qdr_volume', 'susc_chi_volume'],
dtype='object')

In [13]:
# plot chi versus temperature at constant field
b=Bs.max()
for f in Fs:
    this_f=chi_data[chi_data.meas_freq==f]
    this_f=this_f[this_f.meas_field_ac==b]
    plt.plot(this_f.meas_temp,1e6*this_f.susc_chi_volume,label='%i'%(f)+' Hz')
plt.legend()
plt.xlabel('Temperature (K)')
plt.ylabel('$\chi$ ($\mu$SI)')
plt.title('B = '+'%7.2e'%(b)+ ' T')

Out[13]:
Text(0.5, 1.0, 'B = 3.00e-04 T')
In [14]:
# plot chi versus frequency at constant B
b=Bs.max()
t=Ts.min()
this_t=chi_data[chi_data.meas_temp==t]
this_t=this_t[this_t.meas_field_ac==b]
plt.semilogx(this_t.meas_freq,1e6*this_t.susc_chi_volume,label='%i'%(t)+' K')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('$\chi$ ($\mu$SI)')
plt.title('B = '+'%7.2e'%(b)+ ' T')

Out[14]:
Text(0.5, 1.0, 'B = 3.00e-04 T')

You can see the dependence on temperature, frequency and applied field. These data support the suggestion that there is a strong superparamagnetic component in these specimens.

## common_mean¶

Most paleomagnetists use some form of Fisher statistics to decide if two directions are statistically distinct or not (see Essentials Chapter 11 for a discussion of those techniques). But often directional data are not Fisher distributed, and the parametric approach will give misleading answers. In these cases, one can use a bootstrap approach, described in detail in Essentials Chapter 12. The program common_mean can be used for a bootstrap test for common mean to check whether two declination, inclination data sets have a common mean at the 95% level of confidence.
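The idea behind the bootstrap test can be sketched with plain NumPy: convert each direction set to Cartesian coordinates, resample with replacement many times, take the mean direction of each pseudo-sample, and compare the 95% bounds of the components. This is a minimal sketch with synthetic (non-Fisher) directions, not PmagPy's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dir2cart(di):
    """Convert an array of [dec, inc] pairs (degrees) to unit Cartesian vectors."""
    dec, inc = np.radians(di[:, 0]), np.radians(di[:, 1])
    return np.column_stack([np.cos(inc) * np.cos(dec),
                            np.cos(inc) * np.sin(dec),
                            np.sin(inc)])

def boot_mean_bounds(di, nb=500):
    """95% bounds on each Cartesian component of bootstrapped mean directions."""
    carts = dir2cart(di)
    means = np.empty((nb, 3))
    for i in range(nb):
        sample = carts[rng.integers(0, len(carts), len(carts))]
        m = sample.mean(axis=0)
        means[i] = m / np.linalg.norm(m)   # unit mean direction of pseudo-sample
    return np.percentile(means, [2.5, 97.5], axis=0)

# synthetic population scattered around dec=40, inc=60 (illustration only)
A = np.column_stack([rng.normal(40, 5, 50), rng.normal(60, 5, 50)])
B = A + np.array([0.5, 0.0])   # second set nudged by half a degree in declination
loA, hiA = boot_mean_bounds(A)
loB, hiB = boot_mean_bounds(B)
# the sets share a common mean if the bounds overlap for every component
overlap = bool(np.all((loA <= hiB) & (loB <= hiA)))
print(overlap)
```

Overlapping bounds for all three components mean the two sets cannot be distinguished at the 95% level, which is exactly what the plots from ipmag.common_mean_bootstrap() show graphically.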

We want to compare the two data sets: common_mean_ex_file1.dat and common_mean_ex_file2.dat. But first, let’s look at the data in equal area projection using the methods outlined in the section on eqarea.

In [15]:
directions_A=np.loadtxt('data_files/common_mean/common_mean_ex_file1.dat')
directions_B=np.loadtxt('data_files/common_mean/common_mean_ex_file2.dat')
ipmag.plot_net(1)
ipmag.plot_di(di_block=directions_A,color='red')
ipmag.plot_di(di_block=directions_B,color='blue')


Now let’s look at the common mean problem using ipmag.common_mean_bootstrap().

In [16]:
help(ipmag.common_mean_bootstrap)

Help on function common_mean_bootstrap in module pmagpy.ipmag:

common_mean_bootstrap(Data1, Data2, NumSims=1000, save=False, save_folder='.', fmt='svg', figsize=(7, 2.3), x_tick_bins=4)
Conduct a bootstrap test (Tauxe, 2010) for a common mean on two declination,
inclination data sets. Plots are generated of the cumulative distributions
of the Cartesian coordinates of the means of the pseudo-samples (one for x,
one for y and one for z). If the 95 percent confidence bounds for each
component overlap, the two directions are not significantly different.

Parameters
----------
Data1 : a nested list of directional data [dec,inc] (a di_block)
Data2 : a nested list of directional data [dec,inc] (a di_block)
if Data2 is length of 1, treat as single direction
NumSims : number of bootstrap samples (default is 1000)
save : optional save of plots (default is False)
save_folder : path to directory where plots should be saved
fmt : format of figures to be saved (default is 'svg')
figsize : optionally adjust figure size (default is (7, 2.3))
x_tick_bins : because they occasionally overlap depending on the data, this
argument allows you adjust number of tick marks on the x axis of graphs
(default is 4)

Returns
-------
three plots : cumulative distributions of the X, Y, Z of bootstrapped means

Examples
--------
Develop two populations of directions using ipmag.fishrot. Use the
function to determine if they share a common mean (through visual inspection
of resulting plots).

>>> directions_A = ipmag.fishrot(k=20, n=30, dec=40, inc=60)
>>> directions_B = ipmag.fishrot(k=35, n=25, dec=42, inc=57)
>>> ipmag.common_mean_bootstrap(directions_A, directions_B)


In [17]:
ipmag.common_mean_bootstrap(directions_A,directions_B,figsize=(9,3))


These plots suggest that the two data sets share a common mean.

Now compare the data in common_mean_ex_file1.dat with the direction expected at the 5$^{\circ}$ N latitude at which these data were collected (Dec=0, Inc=9.9).
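That expected inclination comes from the geocentric axial dipole (GAD) formula, tan(I) = 2 tan(latitude); a quick check for latitude 5$^{\circ}$:

```python
import numpy as np

def gad_inc(lat):
    """Inclination of a geocentric axial dipole field at latitude lat (degrees)."""
    return np.degrees(np.arctan(2.0 * np.tan(np.radians(lat))))

print(round(gad_inc(5), 1))  # → 9.9
```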

To do this, we set the second data set to be the desired direction for comparison.

In [18]:
comp_dir=[0,9.9]
ipmag.common_mean_bootstrap(directions_A,comp_dir,figsize=(9,3))


Apparently the data (cumulative distribution functions) are entirely consistent with the expected direction (the dashed lines are the Cartesian coordinates of that direction).

## cont_rot¶

We can make an orthographic projection with latitude = -20$^{\circ}$ and longitude = 0$^{\circ}$ at the center of the African and South American continents reconstructed to 180 Ma using the Torsvik et al. (2008, doi: 10.1029/2007RG000227) poles of finite rotation. We would do this by first holding Africa fixed.

We need to read in the outlines of continents from continents.get_continent(), rotate them around the rotation pole and angle specified by the age and continent in question (from frp.get_pole()) using pmag.pt_rot(). Then we can plot them using pmagplotlib.plot_map(). If the Basemap version is preferred, use pmagplotlib.plot_map_basemap(). Here we demonstrate this from within the notebook by just calling the PmagPy functions.

In [19]:
# load in the continents module
import pmagpy.continents as continents
import pmagpy.frp as frp
help(continents.get_continent)

Help on function get_continent in module pmagpy.continents:

get_continent(continent)
get_continent(continent)
returns the outlines of specified continent.

Parameters:
____________________
continent:
af : Africa
congo : Congo
kala : Kalahari
aus : Australia
balt : Baltica
eur : Eurasia
ind : India
sam : South America
ant : Antarctica
grn : Greenland
lau : Laurentia
nam : North America
gond : Gondawanaland
Returns :
array of [lat/long] points defining continent


In [20]:
help(pmagplotlib.plot_map)

Help on function plot_map in module pmagpy.pmagplotlib:

plot_map(fignum, lats, lons, Opts)
makes a cartopy map  with lats/lons
Requires installation of cartopy

Parameters:
_______________
fignum : matplotlib figure number
lats : array or list of latitudes
lons : array or list of longitudes
Opts : dictionary of plotting options:
Opts.keys=
proj : projection [supported cartopy projections:
pc = Plate Carree
aea = Albers Equal Area
aeqd = Azimuthal Equidistant
lcc = Lambert Conformal
lcyl = Lambert Cylindrical
merc = Mercator
mill = Miller Cylindrical
moll = Mollweide [default]
ortho = Orthographic
robin = Robinson
sinu = Sinusoidal
stere = Stereographic
tmerc = Transverse Mercator
utm = UTM [set zone and south keys in Opts]
laea = Lambert Azimuthal Equal Area
geos = Geostationary
npstere = North-Polar Stereographic
spstere = South-Polar Stereographic
latmin : minimum latitude for plot
latmax : maximum latitude for plot
lonmin : minimum longitude for plot
lonmax : maximum longitude
lat_0 : central latitude
lon_0 : central longitude
sym : matplotlib symbol
symsize : symbol size in pts
edge : markeredgecolor
cmap : matplotlib color map
res :  resolution [c,l,i,h] for low/crude, intermediate, high
boundinglat : bounding latitude
sym : matplotlib symbol for plotting
symsize : matplotlib symbol size for plotting
names : list of names for lats/lons (if empty, none will be plotted)
pltgrd : if True, put on grid lines
gridspace : grid line spacing
global : global projection [default is True]
oceancolor : 'azure'
landcolor : 'bisque' [choose any of the valid color names for matplotlib
see https://matplotlib.org/examples/color/named_colors.html
details : dictionary with keys:
coasts : if True, plot coastlines
rivers : if True, plot rivers
states : if True, plot states
countries : if True, plot countries
ocean : if True, plot ocean
fancy : if True, plot etopo 20 grid
NB:  etopo must be installed
if Opts keys not set :these are the defaults:


In [21]:
# retrieve continental outline
# This is the version that uses cartopy and requires installation of cartopy
af=continents.get_continent('af').transpose()
sam=continents.get_continent('sam').transpose()

#define options for pmagplotlib.plot_map
plt.figure(1,(5,5))
Opts = {'latmin': -90, 'latmax': 90, 'lonmin': 0., 'lonmax': 360., 'lat_0': -20, \
'lon_0': 345,'proj': 'ortho', 'sym': 'r-', 'symsize': 3,\
'pltgrid': 0, 'res': 'c', 'boundinglat': 0.}
if has_cartopy:
    pmagplotlib.plot_map(1,af[0],af[1],Opts)
    Opts['sym']='b-'
    pmagplotlib.plot_map(1,sam[0],sam[1],Opts)
elif has_basemap:
    pmagplotlib.plot_map_basemap(1,af[0],af[1],Opts)
    Opts['sym']='b-'
    pmagplotlib.plot_map_basemap(1,sam[0],sam[1],Opts)



Now for the rotation part. The finite rotation poles are returned by the function frp.get_pole().

In [22]:
help(frp.get_pole)

Help on function get_pole in module pmagpy.frp:

get_pole(continent, age)
returns rotation poles and angles for specified continents and ages
assumes fixed Africa.
Parameters
__________
continent :
aus : Australia
eur : Eurasia
[nwaf,congo] : NW Africa  [choose one]
col :  Colombia
grn : Greenland
nam : North America
par : Paraguay
eant :  East Antarctica
ind : India
[neaf,kala] : NE Africa [choose one]
[sac,sam] :  South America [choose one]
ib : Iberia
saf : South Africa
Returns
_______
[pole longitude, pole latitude, rotation angle] : for the continent at specified age


In [23]:
# get the rotation pole for South America relative to fixed Africa at 180 Ma
sam_pole=frp.get_pole('sam',180)
# NB: for african rotations, first rotate other continents to fixed Africa, then
# rotate with South African pole (saf)


The rotation is done by pmag.pt_rot().

In [24]:
help(pmag.pt_rot)

Help on function pt_rot in module pmagpy.pmag:

pt_rot(EP, Lats, Lons)
Rotates points on a globe by an Euler pole rotation using method of
Cox and Hart 1986, box 7-3.

Parameters
----------
EP : Euler pole list [lat,lon,angle]
Lats : list of latitudes of points to be rotated
Lons : list of longitudes of points to be rotated

Returns
_________
RLats : rotated latitudes
RLons : rotated longitudes
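Under the hood, rotating a point about an Euler pole is an application of Rodrigues' rotation formula about the pole axis. Here is a minimal standalone sketch of that geometry; it is not pmag.pt_rot itself, whose argument order and sign conventions may differ:

```python
import numpy as np

def rotate_pt(pole_lat, pole_lon, angle, lat, lon):
    """Rotate one (lat, lon) point about an Euler pole by angle (all in degrees)."""
    def to_cart(la, lo):
        la, lo = np.radians(la), np.radians(lo)
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])
    k = to_cart(pole_lat, pole_lon)   # unit vector along the Euler pole
    v = to_cart(lat, lon)             # point to be rotated
    a = np.radians(angle)
    # Rodrigues' rotation formula: rotate v by angle a about axis k
    vr = v * np.cos(a) + np.cross(k, v) * np.sin(a) + k * np.dot(k, v) * (1 - np.cos(a))
    return np.degrees(np.arcsin(vr[2])), np.degrees(np.arctan2(vr[1], vr[0]))

# rotating the point (0N, 0E) by 90 degrees about the north pole moves it to 90E
print(rotate_pt(90, 0, 90, 0, 0))
```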



so here we go...

In [25]:
plt.figure(1,(5,5))
sam_rot=pmag.pt_rot(sam_pole,sam[0],sam[1]) # rotate South America to its 180 Ma position
# and plot 'em
Opts['sym']='r-'
if has_cartopy:
    pmagplotlib.plot_map(1,af[0],af[1],Opts)
    Opts['sym']='b-'
    pmagplotlib.plot_map(1,sam_rot[0],sam_rot[1],Opts)
elif has_basemap:
    pmagplotlib.plot_map_basemap(1,af[0],af[1],Opts)
    Opts['sym']='b-'
    pmagplotlib.plot_map_basemap(1,sam_rot[0],sam_rot[1],Opts)


## core_depthplot¶

The program core_depthplot can be used to plot various measurement data versus sample depth. The data must be in the MagIC data format. The program will plot whole core data, discrete samples at a single demagnetization step, data from vector demagnetization experiments, and so on.

We can try this out on some data from DSDP Hole 522, (drilled at 26S/5W) and measured by Hartl and Tauxe (1997, doi: 10.1111/j.1365-246X.1997.tb04082.x). These were downloaded and unpacked in the biplot_magic example. More of the data are in the directory ../data_files/core_depthplot.

In this example, we will plot the alternating field (AF) data after the 15 mT step. The magnetizations will be plotted on a log scale and, as this is a record of the Oligocene, we will plot the Oligocene time scale, using the calibration of Gradstein et al. (2012), commonly referred to as “GTS12”. We are only interested in the data between 50 and 150 meters, and we are not interested in the declinations here.

All this can be done using the wonders of Pandas data frames using the data in the data_files/core_depthplot directory.

Let's do things this way:

• read in the data from the sites and specimens files.
• Drop the records with NaN for dir_inc, keeping one of the three lines available for each specimen.
• Make a new column named site in the specimens table that is the same as the specimen column.
• (this makes sense because these are core data, so specimen = sample = site.)
• Merge the two DataFrames on the site column.
• filter the data for depths between 50 and 150.
• Plot dir_inc versus core_depth.
• Put on GAD field inclination
• plot the time scale
In [26]:
specimens=pd.read_csv('data_files/core_depthplot/specimens.txt',sep='\t',header=1)
sites=pd.read_csv('data_files/core_depthplot/sites.txt',sep='\t',header=1)
specimens=specimens.dropna(subset=['dir_inc']) # kill unwanted lines with duplicate or irrelevent info
specimens['site']=specimens['specimen'] # make a column with site name
data=pd.merge(specimens,sites,on='site') # merge the two data frames on site
data=data[data.core_depth>50] # all levels > 50
data=data[data.core_depth<150] # and < 150
lat=26 # we need this for the GAD INC


Plot versus core_depth

In [27]:
fig=plt.figure(1,(6,12)) # make the figure
ax=fig.add_subplot(121) # make the first of 2 subplots
plt.ylabel('Depth (m)') # label the Y axis
plt.plot(data.dir_inc,data.core_depth,'k-') # draw on a black line through the data
# draw the data points as cyan dots with black edges
plt.plot(data.dir_inc,data.core_depth,'co',markeredgecolor='black')
plt.title('Inclinations') # put on a title
plt.axvline(0,color='black')# make a central line at inc=0
plt.ylim(150,50) # set the plot Y limits to the desired depths
fig.add_subplot(122) # make the second of two subplots
# plot intensity data on semi-log plot
plt.semilogx(data.int_rel/data.int_rel.mean(),data.core_depth,'k-')
plt.semilogx(data.int_rel/data.int_rel.mean(),\
data.core_depth,'co',markeredgecolor='black')
plt.ylim(150,50)
plt.title('Relative Intensity');


And now versus age:

In [28]:
fig=plt.figure(1,(9,12)) # make the figure
ax=fig.add_subplot(131) # make the first of three subplots
pmagplotlib.plot_ts(ax,23,34,timescale='gts12') # plot on the time scale
fig.add_subplot(132) # make the second of three subplots
plt.plot(data.dir_inc,data.core_depth,'k-')
plt.plot(data.dir_inc,data.core_depth,'co',markeredgecolor='black')
plt.ylim(35,23)
# calculate the geocentric axial dipole field for the site latitude