Harnessing the power of data and knowledge for sustainable development
The Pacific Data Hub (PDH) aims to deliver the most comprehensive collection of data and information about the Pacific and for the Pacific, including key areas such as population statistics, fisheries science, climate change adaptation, disaster risk reduction and resilience, public health surveillance, conservation of plant genetic resources for food security, and human rights.
We have built the Pacific Data Hub platform on our intimate understanding of the Pacific region and the enduring relationships we have forged with our members. We understand the policy and development challenges that matter to our members, and we work in coordination with our development partners to provide access to timely and reliable data to inform better policy development and decision-making to help deliver better development outcomes for the Pacific region.
The Pacific Data Hub platform is made up of four key components:
The Pacific Data Hub is an innovative programme of work, led by the Pacific Community (SPC) and supported by its development partners. It is a regional public good that provides a single authoritative point of entry for data about the Pacific and serves as a vehicle for investment in a sustainable regional data infrastructure.
The Pacific Data Hub is part of an emerging Pacific Data Ecosystem, a partnership between Pacific Island Countries and Territories and regional development partners to promote greater coordination in data management, dissemination and uptake initiatives.
- Data Catalogue: an open data repository which manages and publishes all data in the Pacific Data Hub. It is the central component, linking to PDH.stat and the Microdata Library. Most spatial datasets in the Data Catalogue can be visualised with PacificMap.
- PacificMap: a geospatial data exploration tool providing easy-to-use, map-based visualisation of spatial data.
- PDH.stat: an indicator database explorer containing the 132 Pacific Sustainable Development Indicators as well as a range of economic, health, demographic and environmental datasets (replacing the National Minimum Development Indicator Database, NMDI).
- Microdata Library: an online census and survey documentation and archiving application which also provides access to microdata for some collections.
Data from the Pacific, for the Pacific, all in a single place.
The Data Catalogue is at the centre of the Pacific Data Hub. It is an open data repository which manages and publishes all data in the Pacific Data Hub. Data comes from a range of sources, though is always related to the Pacific Region, and is accessible via a powerful search engine (above).
The Data Catalogue publishes data in units called "datasets". A dataset is a parcel of data - for example, it could be the education statistics for a country, the Exclusive Economic Zones (EEZs) for the Pacific, or temperature readings from various weather stations. When users search for data, the search results they see will be individual datasets:
Each dataset contains two main elements:
Information or "metadata" about the data. For example, the title and publisher, date, what formats it is available in, what license it is released under, etc.
A number of "resources", which hold the data itself. A resource can be a CSV or Excel spreadsheet, XML file, PDF document, image file, KML, GeoJSON, etc. A dataset can contain any number of resources. For example, different resources might contain the data for different years, or they might contain the same data in different formats.
If you would like to add or edit datasets, view private datasets and download restricted resources from the PDH Catalogue, you will need to sign in with a PDH user account. Follow the steps below to register for a new account.
For SPC staff, please sign in using the .
From the Home page, click on the link
Click on the tab
Please enter a username, email address and password.
Then click on the 'I'm not a robot' check box. You may be asked to complete the verification process.
Follow the verification steps then click on the 'Create new account' button.
Once you have successfully registered, you will be redirected to the Home page where you will see the following notification message at the center of the page:
'Registration successful. You are now logged in.'
You may now browse the catalogue as a registered user. Click on the link from the top navigation menu.
If you wish to upload or modify datasets within the Data Catalogue, you will need to be granted permissions to an organisation within the catalogue. Please get in touch with the owner of the organisation you wish to contribute to.
You can also contact the PDH administrators on live chat or on the email .
For an introduction on how to use the Data Catalogue see this video:
You will be redirected to the Data Catalogue when you search for data, either using the Search bar or the Shortcuts below.
In this example, we have typed "cyclone" in the search bar and been redirected to the Data Catalogue.
We can order the results by relevance, popularity, name or last modification using the Order by button.
To narrow down the data search we can use the filtering tools located on the left. The filters available are Topic, Member countries, Organisations, Tags, Formats, Dataset type and Licenses.
Once you have found the dataset you are looking for, click on it to open the Dataset screen.
On top of the screen you will find the dataset's title and description.
You can export the metadata by clicking on the Export Metadata button located below the description.
Metadata can be accessed at the bottom of the page.
A number of different data resources with various formats can be linked to the dataset.
Click on a resource to access the resource page, where you will find the description, a preview pane and a download link.
Different preview screens are available depending on the format of the file.
At the bottom of the page you will find shortcuts to the other resources linked to the dataset.
Go to the Pacific Data Hub home page
Searching for data using Solr search syntax
By enabling the advanced search feature, you can take advantage of Solr's flexible search syntax to analyze your data. For example, you can search for data that contains the terms 'Climate' and 'Financing' in close proximity to each other, or search for data or dates that fall within a specific range.
The following are examples of searches for a specific keyword or search phrase.
- Search for a keyword in a specific field. Example: the keyword AccessLog in the title field. Syntax: title:AccessLog
- Search for a phrase in a specific field. Example: the phrase Code 1918 in the title field. Syntax: title:"Code 1918"
- Search for a phrase in one field and a second phrase in another field. Example: Error 401 in the title field and Authorization is denied in the body field. Syntax: title:"Error 401" AND body:"Authorization is denied"
- Combine searches for multiple phrases or keywords using operators such as AND or OR. Example: Error 401 in the title field and Authorization is denied in the body field, or Password in the title field. Syntax: (title:"Error 401" AND body:"Authorization is denied") OR title:Password
- Search for a keyword in a specific field, excluding results with another keyword in the same field. Example: 401 but not 404 in the title field. Syntax: title:401 -title:404
- Search for data in which a field does not contain a specific value. Example: data where the inStock field is not false. Syntax: -inStock:false
- Search for values in a specified range. Example: values from 20020101 to 20030101 in the mod_date field. Syntax: mod_date:[20020101 TO 20030101]
You can use the wildcard character (*) to search for results that are not exact matches. Solr search syntax does not support using a wildcard symbol as the first character of a search.
- Search for words starting with a string of characters. Example: any word starting with En in the title field. Syntax: title:En*
- Search for words starting and ending with specific strings of characters. Example: any word starting with En and ending with ed in the title field. Syntax: title:En*ed
- Search for values in a field that are less than or equal to a specified numeric value. Example: values in the code field less than or equal to 100. Syntax: code:[* TO 100]
- Search for values in a field that are greater than or equal to a specified numeric value. Example: values in the code field greater than or equal to 100. Syntax: code:[100 TO *]
- Search for data that contains a specific field. Example: data that includes the message field. Syntax: message:[* TO *]
- Search for data that does not contain a specific field. Example: data without a message field. Syntax: -message:[* TO *]
You can search for terms that are a given number of words away from each other (called a proximity search).
- Search for keywords that are a specific number of words away from each other. Example: log analysis within 4 words of each other. Syntax: "log analysis"~4
- Search for transposed words. Example: log analysis or analysis log. Syntax: "log analysis"~1
You can approximate a search for multiple keywords (for example, a search for business AND analysis) using a search with a large proximity value, such as "business analysis"~10000000. For practical purposes, this returns the same group of results as searching for business AND analysis. Unlike a plain AND search, however, results in which business and analysis are closer together are regarded as having a higher search relevance. The proximity search also requires more time and system resources to perform.
You can determine which parts of a search query are treated as more important by providing a numeric boost factor. For example, the following query assigns higher importance to matches in the title field than matches in the body field: (title:MicroStrategy OR title:Analytics)^1.5 (body:Intelligence OR body:Server).
For a detailed overview of Solr query syntax, including information about creating queries that take advantage of functions, nested queries, boost factors, and more, see the Apache Solr reference guide. In most cases, Solr uses the standard Lucene query syntax to perform searches; for a list of exceptions, see the Solr documentation.
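To illustrate how these operators compose, here is a small Python sketch that assembles two of the example queries above as strings. The title and body fields are just the examples used in the tables, not actual catalogue field names:

```python
def solr_field(field, value):
    """Build a field:value clause, quoting multi-word phrases."""
    if " " in value:
        value = f'"{value}"'
    return f"{field}:{value}"

def solr_and(*clauses):
    """Join clauses with the AND operator, parenthesised for safe nesting."""
    return "(" + " AND ".join(clauses) + ")"

# A boolean search: Error 401 in the title AND a phrase in the body,
# OR the keyword Password in the title.
query = (solr_and(solr_field("title", "Error 401"),
                  solr_field("body", "Authorization is denied"))
         + " OR " + solr_field("title", "Password"))

print(query)
# (title:"Error 401" AND body:"Authorization is denied") OR title:Password

# A proximity search: log analysis within 4 words of each other.
proximity = '"log analysis"~4'
```

The resulting strings can be pasted directly into the advanced search box.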
The user-friendly interface to Pacific data
For an introduction on how to use the Data Explorer, see this video:
PDH.stat Data Explorer is directly accessible via the following URL:
A link is available from the "Tools" menu:
PDH Data Catalogue entries have a link to visualize the data in the Data Explorer as a source:
From the welcome page of the Data Explorer data can be searched using predefined topics or using textual search.
Textual search will not only return datasets mentioning the search text in their name or description, but also datasets containing occurrences of the text in code labels or comments used in the data.
The search results page implements a "faceted search", allowing search criteria to be refined in a second step using a filtering panel on the left side of the screen.
Datasets are first displayed according to a predefined default view.
Filters applied to the default view are visible in the filtering panel on the left side of the screen.
A maximum number of observations will be displayed in the Data Explorer to preserve the user experience. If this limit is exceeded, the dataset will be truncated and a warning message will be displayed at the top of the table.
Filters can be managed from the filtering panel on the left of the data table.
The upper part of the panel shows applied filters and allows them to be cleared easily (remove one category, remove all categories selected for a given dimension, or clear all filters).
The lower part of the panel shows available filters and allows filters to be defined (add or remove individual categories for each dimension).
To remove a category, click on the cross next to the category.
To remove all selected categories for a given dimension, click on the cross next to the dimension label.
To clear all filters applied to the dataset, use the clear-all option at the top of the applied filters list.
To define filters, use the lower part of the filtering panel to add or remove categories from the data selection for each dimension.
Selected categories are highlighted.
Clicking on an unselected category adds it to the selection.
Clicking on a selected category removes it from the selection.
When no category is selected for a dimension, all categories are part of the selection.
A chevron to the right of a category label indicates that subcategories are available; clicking on the chevron displays them.
The numbers in the green rectangle next to each dimension label indicate how many categories are selected out of the total available for that dimension.
In the Data Explorer, dimensions may be displayed as row sections, rows or columns.
The table layout can be customised using the "customize" functionality, available by clicking the appropriate item on the ribbon above the data table.
Dimensions can be dragged between columns, row sections and rows.
The new table view is applied once the "apply layout" button is clicked.
At least one dimension must be set for rows.
When multiple dimensions are set as columns, row sections or rows, the order in which they are listed is significant and corresponds to the order in which they will be nested in the table view. Dimensions can be re-ordered using drag and drop.
A share functionality allows sending a link to the data currently displayed in the Data Explorer by e-mail.
To access this functionality, click the "Share" item in the top menu.
Data can be shared in two ways:
As a snapshot of the data, meaning that the data will not change even if it is updated on the site
As a view of the latest data available, meaning that the latest available data will always be shown when the shared link is clicked
The email address to which the link will be sent must be provided. The first time an email is sent, the address must be validated by clicking the validation link in the message received from the Data Explorer.
Data can be downloaded as Excel or as CSV. For CSV files, it is possible to export only the data currently displayed (with the applied filters) or to export the entire dataset without any filtering.
When a metadata document is available it will be accessible from the "Download" menu by clicking the "Metadata" link next to a blue "i" icon.
Metadata documents comprise a first section with reference metadata (data description, data source, processing, coverage, ...) and a second section with information on the data structure (columns and codelists).
API queries corresponding to the data view currently displayed in the Data Explorer can be accessed using the "More" menu.
Separate API queries are provided for data and for structural metadata.
Search, visualise and share Pacific data
Different applications and use cases are targeted to different types of users:
Data in PDH.stat is structured and standardised and uses open standards, making it easy to use.
A spatial data exploration tool for everyone
The platform brings together geospatial datasets from the Pacific Data Hub catalogue sourced from SPC's technical divisions and its member countries. Data is also sourced from a number of development partners and other publicly available sources which provide spatial data from the Pacific.
Use the interactive features of the PacificMap platform to produce maps and create compelling impact stories. From remote-sensed geospatial data to statistical time series, PacificMap enables the analysis of public and private geospatial data at global, regional, national and subnational levels.
The application is also accessible in several ways from the :
PDH.stat data is registered in the Data Catalogue and can also be found using its various functionalities. For more information see .
For more information see .
The PDH Data Catalogue is built on , the world's leading open source data management system. CKAN’s Action API is a powerful, RPC-style API that exposes all of CKAN’s core features to API clients.
This API provides live access to the Pacific Data Hub Data Catalogue. Further documentation on the API is available from .
Confirm the version of the API available from the catalogue by requesting .
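As a sketch of what a client call might look like, the following Python snippet queries CKAN's package_search action using only the standard library. The base URL shown is an assumption for illustration; substitute the catalogue's published API endpoint:

```python
import json
import urllib.parse
import urllib.request

# Base URL is an assumption for this sketch; substitute the Data
# Catalogue's published API endpoint.
BASE = "https://pacificdata.org/data/api/3/action"

def package_search_url(query, rows=5):
    """Compose a CKAN Action API package_search URL for a Solr query."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return f"{BASE}/package_search?{params}"

def search(query, rows=5):
    """Run the search and return the titles of matching datasets."""
    with urllib.request.urlopen(package_search_url(query, rows)) as resp:
        result = json.load(resp)["result"]
    return [pkg["title"] for pkg in result["results"]]

# Example (requires network):
#   titles = search("cyclone")
```

The q parameter accepts the same Solr syntax described earlier, so the queries built for the search bar work unchanged through the API.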
(commonly known as PDH.stat) is a powerful interactive online tool that presents a wide range of statistical indicators in a flexible and dynamic manner. PDH.stat empowers users in their quest for Pacific data by providing state-of-the-art search functionalities and offering a user-friendly interface to extract information from voluminous data tables.
PDH.stat contains more than 350 indicators presented in tables, including population figures, main economic indicators, social statistics and SDGs, and is continually expanding.
is a platform developed in collaboration with regional and international partners as part of the Asia-Pacific Data for Development Initiative (D4D). The PacificMap is a platform for map-based access to spatial data from 22 Pacific Island Countries and Territories. It aims to lower the barrier and enhance access to timely, relevant and useful data for government and non-government organisations, research and academic institutions, businesses and communities throughout the Pacific.
New topics (thematic areas) for datasets
Unique dataset metadata schema
Data migration
New dashboards added
Linking to PCCOS catalogue
Updated NADA to version 5.2.1
Added more dashboards (Power BI-based)
Improved Maritime Boundaries dashboard
Improved stories and news section
Better topic pages
Sitemaps
CKAN version 2.9.7
Better support for geospatial data files
New "video" dataset type
Added JSON-LD support
Fixed organization and group lists
Miscellaneous fixes
New interactive country map
Improved page layouts (headers and breadcrumbs)
Improved mobile version
Page footer fixes
CKAN version 2.9.3
Improved search results and suggestions
Merged datasets and publications schemas definitions (PDH-DCAT)
Metadata field Theme now contains Library of Congress Subject headings
Metadata field Countries separated in Countries and Regions fields
Fixed packages without topics
Fixed broken harvesters
Added Citations (DataCite) and Plumx altmetrics support
Improved user tour (in-page tutorial)
Access Pacific data with your favorite data analysis software
Create a live connection between your Excel Workbooks and PDH.stat
2: Right-click the zipped folder, choose "Extract All...", choose where you would like to extract the files (optional), then click "Extract".
3: In the extracted folder, navigate to the “Stat-DLM” folder.
4: Double-click the “resign” Windows Batch File to start the installation process.
5: If a warning appears saying "Windows protected your PC", click "More info" and then "Run anyway".
6: Wait while the file opens a "command prompt" window and writes a few lines.
7: When prompted press any key on the keyboard to finish the setup.
8: Now, in the “Stat-DLM” folder, run the “setup” Application File.
9: Another warning will most likely appear. If it is like the previous warning, click "More info", then "Run anyway".
10: Once a prompt like below appears, the installation has started successfully. Click "Install" and wait for the process to complete. If a prompt like below hasn't appeared, you may need to run "setup" again.
11: Open Excel and choose an existing file or make a new one.
12: Find the installed add-in under the "PDH.stat" heading on the Excel Ribbon.
1: Navigate to the “PDH.stat” add-in on the Excel Ribbon.
2: Click “New Table” and then “PDH” to connect to the Pacific Data Hub.
3: Wait briefly while the application loads.
4: A window will appear, allowing you to select data from PDH.stat.
5: In “Step 1 – Select data”, under “Datasets and queries”, click the drop-down menu to select a data set.
6: Depending on the selected data set, the “Data filters” will change.
7: Click “EDIT FILTERS” to step through each filter and choose from the available options.
8: Click “APPLY FILTERS” to finish editing filters.
9: Click “NEXT STEP”.
10: In “Step 2 – Specify output”, settings can be adjusted to choose where to put the data loaded from PDH.stat.
11: Adjust “Start cell” to choose which Excel Sheet and cell to put the data.
12: From the “Table type” menu, choose how the data should be formatted: “Flat” (all data), “Time Series Down” (rows for each time period), or "Time Series Across” (columns for each time period).
13: Adjust “Return” to choose whether to have “Labels” or not, and in which language. Selecting “Labels” will provide the real names for all data, otherwise the data will make use of codes. Select “Exclude Codes” to remove the codes altogether.
14: Click “GET DATA” and wait while the data is retrieved from PDH.stat.
15: The data will be loaded into the selected Excel Sheet.
16: To load more data, add a new Excel Sheet and repeat the above steps.
With a data set already loaded, the add-in provides extra features.
Edit the data connection: Select “Change Selection” to adjust the settings of the current data connection and reload the data in the same location. NOTE: this will remove the existing data in that location.
Refresh the data connection: Select “Current Sheet” to refresh the data in the current Sheet. Select “All Sheets” to refresh the data in all Sheets with data connections.
Metadata on the current connection: Information about the data connection on the current Sheet is displayed in the Ribbon under PDH.stat, including the data set code, the source, and the last extraction date.
See the API query: When adding a connection or changing an existing one, in “Step 2 – Specify output” there is a button showing “SHOW QUERY SYNTAX”. Select this to see how your customised filters are represented in a request to the actual PDH.stat API. The API call is shown in several different formats.
“This size of data exceeds MS Excel limits”: If no filters are applied, there may be too much data to represent it in Excel (especially if the “Table type” is set to “Time Series Down”). To fix this, set “Table type” to “Flat” or introduce some filters so that the data set is smaller.
“No data available for this selection”: If very specific filters are applied, there may be an error and no data will appear. This is likely because there aren’t any data for the specified filters. To fix this, broaden the search terms and remove some filters.
Search “Settings” in the Windows Start Menu.
Go to “Settings” and then navigate to “Apps”.
Go to “Apps & features”.
Search for “.Stat DLM”.
Click on “.Stat DLM” and choose “Uninstall”.
Follow the prompts to uninstall the add-in.
1: Download the add-in .
Access Pacific data with the most popular data science solution using the pandasdmx library
These steps have been tested with Python 3.7.4 in an Anaconda environment on Windows 10.
To install with pip from the command prompt (note: include the '1' at the end of sdmx): pip install sdmx1
In a Python project, import the package: import sdmx
To see the available sources and find PDH.stat:
The source abbreviation for PDH.stat is "SPC" (Pacific Community), as shown below.
To connect to PDH.stat and then view its available data flows:
To connect to a data flow and convert it into a pandas Multi-index series:
And then to turn the series into a dataframe, reset the index:
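Putting the steps above together, a minimal sketch might look like the following. It assumes the sdmx1 package is installed and that DF_CPI ("Inflation Rates", discussed later in this guide) is a valid dataflow ID; network access is required when the function runs:

```python
def pdh_inflation_dataframe():
    """Sketch of the steps above: list sources, browse dataflows, and
    load one dataflow into a pandas DataFrame.

    Requires the sdmx1 package and network access; DF_CPI ("Inflation
    Rates") is used as an example dataflow ID.
    """
    import sdmx  # third-party: pip install sdmx1

    # PDH.stat is registered under the source abbreviation "SPC".
    assert "SPC" in sdmx.list_sources()
    client = sdmx.Client("SPC")

    # Available dataflows as a pandas Series (id -> name).
    flows = sdmx.to_pandas(client.dataflow().dataflow)
    print(flows.head())

    # Fetch a dataflow, convert it to a pandas MultiIndex Series,
    # then reset the index to get a flat DataFrame.
    series = sdmx.to_pandas(client.data("DF_CPI"))
    return series.reset_index()

# Example (requires network):
#   df = pdh_inflation_dataframe()
```

Fetching a whole dataflow can be slow for large datasets; the client's key and params arguments can narrow the request in the same way as the API filters described elsewhere in this guide.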
For an introduction on how to use PacificMap see this video:
In the left-hand panel click the Explore map data button to launch the Data Catalogue.
Browse through the Data Catalogue to find a data set of interest. Click on the title of your preferred data set to get a preview of that data, along with a description and other relevant metadata. To view your selected data set on the main map, click the Add to the Map button. The spatial data will be immediately displayed in the map view, and a visual legend for that data will appear in the Data Workbench, located on the left-hand side of the page.
To locate the loaded data on the map, go to the Data Workbench (positioned on the left-hand side of the page), and click the Ideal Zoom link for your desired data set. From here you can also click About data to get more information about your selected data set.
To add additional datasets to the map, simply click Explore map data again in the left-hand panel to relaunch the Data Catalogue.
Zoom manually by moving your mouse pointer over the map and using your mouse wheel to zoom in or out further.
Click and drag the map to further show the region in which you are interested.
Click on a feature (that is, directly on a point or line, or within a region) to show data about the individual feature.
Click on the feature which is displayed on the map. You can click on points, lines or within regions to see a display of the information available from the spatial data provider for that particular feature.
For points and lines, you need to click quite accurately to identify the feature. For regions, clicking on the boundary will give ambiguous results; click within the region.
Note: You cannot find out further information about the features which are part of the base maps.
In the top-right corner click on Map Settings.
In the Map View section you can select whether to display data using 3D Terrain, 3D Smooth or 2D.
In the Base Map section you can select the most appropriate option for the type of dataset you are working with.
The Image Optimisation slide control helps you prioritise between quality and performance. In cases where your computer is not powerful enough or bandwidth is limited, move the control to the right to make the app faster and more responsive.
PacificMap can display two kinds of spreadsheets:
Spreadsheets with a point location (latitude and longitude) for each row, expressed as two columns: lat and lon. These will be displayed as points (circles).
Spreadsheets must be saved as CSV (comma-separated values).
Standard spatial formats such as GeoJSON and KML are also supported.
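A minimal point spreadsheet of the first kind can be produced with Python's csv module. The station and reading columns and their values below are illustrative only; the lat and lon columns are the ones PacificMap requires:

```python
import csv

# Write a minimal point-location spreadsheet that PacificMap can display:
# one row per point, with "lat" and "lon" columns (the other columns and
# all values here are illustrative only).
rows = [
    {"lat": -18.14, "lon": 178.44,  "station": "Suva",   "reading": 27.5},
    {"lat": -21.21, "lon": -159.78, "station": "Avarua", "reading": 26.1},
]

with open("points.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["lat", "lon", "station", "reading"])
    writer.writeheader()
    writer.writerows(rows)
```

Dragging the resulting points.csv onto the map view (or uploading it via the My Data tab) displays each row as a circle, and clicking a circle shows the values from all columns.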
There are two ways to load your data:
Drag your data file onto the PacificMap map view. The format of the data file will be auto-detected.
Click on the Upload button in the left-hand panel. This will launch the Data Catalogue. Select the My Data tab at the top of the modal window and follow the provided instructions.
As with datasets which are already part of the Pacific Data Hub, you can click on regions or points to see the available data for a given feature. If the file is a CSV, data from all columns will be shown in the feature information dialogue when you click.
To share a view of your data with others, you must first upload the dataset to the Pacific Data Hub Catalogue.
In the top right corner click on Share/Print.
There are three ways of sharing your map view:
Share URL. Copy the given URL (shown in the first text box) to the clipboard and paste it into an email which you send to the recipient. They can click on it in the email or paste it into their browser to see the same view as you.
Print Map. Generates a static map that can be shared using different formats. Note that this option is the only one that will show data loaded from a local file or URL.
Advanced Options. Create a code to embed the map into a web page.
PacificMap encourages data providers to publish their spatial data using this platform. There are two routes you can take to publishing.
Datasets in the Pacific Data Hub containing geographic data will be tagged with the Geographic data label
The resources within the dataset containing geographic data come with the PacificMap label
When opening one of the files containing geographic data, three map preview options are made available.
Select the PacificMap option and the geospatial dataset will be displayed in an embedded frame.
Click on Open in PacificMap if you want to open the dataset in a PacificMap instance independent of the Pacific Data Hub Catalogue.
PDH.stat is included as a data source in the sdmx Python package developed by Paul Natsuo Kishimoto. More information about the package can be found .
Start by installing sdmx. To learn more about the package, see the code .
For an example of how to use the plugin in combination with the API key and parameter settings, see the .
You can access PacificMap either from the Pacific Data Hub main page or by using the URL .
Spreadsheets where each row refers to a region, such as a local government area or division. Columns must be named according to the standard. These will be displayed as regions, highlighting the actual shape of each area.
You can also use all of the features of the on your dataset.
Any spatial data added to the Pacific Data Hub using a protocol or format supported by PacificMap (such as WMS) will automatically appear in the Pacific Data Hub section of the Data Catalogue tab for PacificMap.
If you require your data set to appear under a separate category of the PacificMap Data Catalogue, you will need to contact for more information.
Integrate your IT applications to the Pacific data ecosystem
PDH.stat provides machine-to-machine accessibility to Pacific data through its RESTful API. A freely available SDMX-based web service is exposed on the Web at the following endpoint:
For more information on SDMX web services see the following references:
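As an illustration of machine-to-machine access, an SDMX REST data query can be composed from a dataflow reference, a key and a time range. The endpoint constant and Accept header below are assumptions for this sketch; substitute the endpoint published above. The example key and dataflow come from the filtering examples later in this guide:

```python
import urllib.parse
import urllib.request

# Endpoint shown here is an assumption for this sketch; substitute the
# PDH.stat endpoint published above.
ENDPOINT = "https://stats-nsi-stable.pacificdata.org/rest"

def sdmx_data_url(flow, key="all", start=None, end=None):
    """Compose an SDMX REST data query: /data/{flowRef}/{key}?startPeriod=..."""
    params = {}
    if start is not None:
        params["startPeriod"] = start
    if end is not None:
        params["endPeriod"] = end
    url = f"{ENDPOINT}/data/{flow}/{key}"
    if params:
        url += "?" + urllib.parse.urlencode(params)
    return url

def fetch_csv(url):
    """Request the query result in SDMX-CSV format (requires network)."""
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.sdmx.data+csv"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Annual inflation rates (DF_CPI) for Cook Islands and Fiji, 2010-2015:
url = sdmx_data_url("SPC,DF_CPI,1.0", "A.CK+FJ..", start=2010, end=2015)
```

The same query can be returned as SDMX-ML or SDMX-JSON by changing the Accept header, per the SDMX RESTful specification.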
Produce reports and dashboards with Power BI using a PowerQuery template
The OECD has developed a plugin to connect an SDMX data source directly to Power BI. It can be loaded when adding "an alternate data source" in PowerBI.
When the data is imported you can transform it with Power Query (the "Transform Data" button) to remove unneeded columns, clean fields, or make other changes.
The next screenshot shows a Power Query with the previously imported data, transformed in a few steps.
Finally, with your data imported and transformed, you can use it to build visualisations.
Run advanced statistical analyses on Pacific data using the rsdmx package
These steps have been tested with R 4.0.2 on Windows 10.
Remove rsdmx if already installed: remove.packages("rsdmx")
Install devtools: install.packages("devtools")
Install rsdmx from the latest development version on Github: devtools::install_github("opensdmx/rsdmx")
Load package: library(rsdmx)
See all service providers
Aside from PDH.stat, the original package offers connectivity with OECD, Eurostat and others. See all available service providers with the getSDMXServiceProviders() function.
See available dataflows from PDH.stat
To see the available PDH.stat dataflows (data sets), use the readSDMX() function, setting the providerId parameter to "PDH" and the resource parameter to "dataflow":
To return the available data set IDs and their English names, filter the dataframe:
Get all data for a dataflow
To retrieve a dataflow, provide the dataflow ID to the readSDMX() function in the flowref parameter, also setting the resource as "data".
For example, to connect to "Inflation Rates" dataflow, the ID is "DF_CPI" (as shown when retrieving all the dataflows for PDH.stat):
Get more specific data for a dataflow
Extra parameters can be supplied to the readSDMX() function to retrieve a filtered view of the dataflow:
start is the desired start year (supplied as an integer)
end is the desired end year (supplied as an integer)
key controls a variety of filters, and by default it is set to "all" (retrieves all data). A further explanation is provided below.
The key parameter controls a different number of variables depending on the dataflow, including time period, country, currency and others. Each variable is selected with a code and separated by a dot (.). Two dots (..) indicate a "wildcard" (selects all available values). A plus (+) allows multiple values to be selected. Generally the time period comes first, A for "annual" or M for "monthly" (if the data is available at that level). Some examples:
For the DF_CPI "Inflation Rates" dataflow, to get annual data from 2010-2015 for Cook Islands and Fiji:
- The key is "A.CK+FJ.."
- start is 2010 and end is 2015
The R code:
Sample API calls corresponding to particular data views can be produced from the Data Explorer; for more information on this see .
To import a dataset, you will need the API query corresponding to the data to be imported, produced using the Data Explorer as .
Then you can create a new data source in Power BI: Data > Get Data > SDMX. For the import mode, you can choose between labels only, codes only, or both.
NOTE: Because this plugin is free and open source, its code is publicly available in this .
Support for PDH.stat is part of the rsdmx package, developed by Emmanuel Blondel, with contributors Matthieu Stigler and Eric Persson. Learn more about the original package . The package has been configured to include the Pacific Data Hub's .Stat API as a default service provider.
This is a quick-start guide. Go for the official documentation.
Given that the key variables can change depending on the dataflow, it can be easier to retrieve all data and then filter manually in R. Alternatively, use the to filter a dataset and then view the relevant API call and key as explained .
Here are some worked examples of the PDH.stat API being used for various purposes. Full details of each implementation are available in the provided Github links.
This Python script returns a list of all existing dataflowIDs. This can be useful for applications which need to check for new/updated dataflows.
This Python script returns a dictionary with the title, agencyId, version for a given dataflowId. This can be useful for applications which harvest from PDH.stat or simply need to display information about a dataset/dataflow. The function can be used iteratively for information on more than one dataflow.
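The two scripts themselves are linked rather than reproduced here, but their core logic can be sketched as follows. This is an illustrative sketch only: it assumes the dataflow endpoint was queried for SDMX-JSON, and the field names (`data`, `dataflows`, `name`, `agencyID`, `version`) are assumptions based on the SDMX-JSON structure-message format, not taken from the linked scripts.

```python
# Sketch: extract dataflow IDs and per-dataflow details from an
# SDMX-JSON structure message (field names are assumptions).

def list_dataflow_ids(message):
    """Return all dataflow IDs found in a structure message."""
    return [df["id"] for df in message.get("data", {}).get("dataflows", [])]

def dataflow_info(message, dataflow_id):
    """Return title/agency/version details for one dataflow, or None."""
    for df in message.get("data", {}).get("dataflows", []):
        if df["id"] == dataflow_id:
            return {
                "title": df.get("name"),
                "agencyId": df.get("agencyID"),
                "version": df.get("version"),
            }
    return None

# Illustrative payload (not real PDH.stat output):
sample = {"data": {"dataflows": [
    {"id": "DF_CPI", "name": "Inflation Rates",
     "agencyID": "SPC", "version": "1.0"},
    {"id": "DF_POP_PROJ", "name": "Population Projections",
     "agencyID": "SPC", "version": "1.0"},
]}}

print(list_dataflow_ids(sample))        # ['DF_CPI', 'DF_POP_PROJ']
print(dataflow_info(sample, "DF_CPI"))
```

`dataflow_info` can be called iteratively over `list_dataflow_ids` to gather details on every dataflow.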
Many of these tasks can be simplified through the PDH.stat API's suite of .
This Python script demonstrates how the API can be accessed with the . It makes a request for a filtered dataset of population projections for a specified number of countries. It then plots the results as a time series chart. It could be adapted to handle different countries, different time frames and other time series data too.
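A standard-library sketch of the request-building part of such a script. The key layout `A.<GEO>...` follows the DF_POP_PROJ example used later on this page; the column names in the commented plotting step (`TIME_PERIOD`, `GEO_PICT`, `OBS_VALUE`) are assumptions about the CSV layout, not confirmed by the source.

```python
from urllib.parse import urlencode

BASE = "https://stats-nsi-stable.pacificdata.org/rest/data"

def population_projection_url(countries, start, end):
    """Build a CSV data request for the DF_POP_PROJ dataflow.

    'A' selects annual frequency, '+' joins country codes, and the
    trailing dots are wildcards for the remaining dimensions.
    """
    key = "A." + "+".join(countries) + "..."
    query = urlencode({"startPeriod": start, "endPeriod": end,
                       "format": "csv"})
    return f"{BASE}/DF_POP_PROJ/{key}/all?{query}"

url = population_projection_url(["FJ", "NC"], 1970, 2018)
print(url)

# With pandas/matplotlib installed, the series could then be charted:
#   import pandas as pd
#   df = pd.read_csv(url)  # column names below are assumptions
#   df.pivot_table(index="TIME_PERIOD", columns="GEO_PICT",
#                  values="OBS_VALUE").plot()
```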
A gateway to the Pacific region’s survey, census, and administrative-based microdata and documentation
The Microdata Library allows researchers to browse, search, compare, apply for access and download relevant survey and census information from the Pacific Islands region. It allows data producers to disseminate survey information in a secured environment, in compliance with policies and conditions of use.
Note: access to microdata is closely controlled, and data can be accessed only under strict rules and conditions. Users of microdata need to be bona fide researchers linked to clearly defined public-good (i.e. non-commercial) research projects. However, all associated documentation is openly accessible to all users.
Users can search inside datasets, down to the names of indicators and variables. This is a valuable feature of the library because it makes it possible to explore a dataset's variables (if documented) in detail and show response frequencies. This kind of "deep search" allows users to discover data they may not even know existed.
The only thing more important than data is metadata. Who made this dataset? How was it produced or acquired? When was it last updated? Who's allowed to use it? What have people already done with it? Are algorithms shared that allow reproducing the construction of a dataset or indicator? While the catalog tags each dataset with some basic metadata consistent across all datasets, the amount and nature of the metadata will vary depending on the type of dataset.
The Microdata Library also features a list of citations, including those from articles and books that refer to a study (dataset). Data producers often don't get enough credit for their data work; part of the reason is that it's hard to track where data have been cited, used and re-used. Understanding where and how the data are used helps us understand the impact of a dataset and direct investment to priority data areas in the Pacific. It also incentivises data providers to release their data more proactively, much like research papers, giving them more accountability.
Connect STATA to PDH.stat with sdmxpdh
Full credit goes to Sebastien Fontenay, Robert Picard, Nicholas Cox.
This module is a simple adaptation of the SDMXUSE module (Fontenay), which itself uses the MOSS module (Picard & Cox).
Sebastien Fontenay, 2016. "SDMXUSE: Stata module to import data from statistical agencies using the SDMX standard," Statistical Software Components S458231, Boston College Department of Economics, revised 30 Sep 2018.
Robert Picard & Nicholas J. Cox, 2011. "MOSS: Stata module to find multiple occurrences of substrings," Statistical Software Components S457261, Boston College Department of Economics, revised 29 Apr 2016.
These instructions work for Stata SE 15.1 on Windows.
In Stata, find your "PERSONAL" directory path with the command: sysdir
In Windows, go to the "PERSONAL" directory location. It is probably C:\ado\personal\. If the path doesn't seem to exist, create the necessary folders yourself.
Put the .ado and .sthlp files inside your "PERSONAL" directory.
Restart Stata.
Check the install worked by bringing up the SDMXPDH help document: help sdmxpdh
PDH.stat resources are maintained by SPC (Pacific Community), so SPC is the provider for resources.
General command structure is: sdmxpdh <resource> <provider>, <filters>
For example, use dataflow DF_CPI (Consumer Price Index).
This time, we want DF_CPI time series data from 2005 to 2018, for the countries Fiji and Guam.
Again, let's use DF_CPI.
SDMXPDH is a version of the SDMXUSE Stata module, with changes that allow users to connect to the Pacific Data Hub .Stat API (PDH.stat). See the code .
Download the .ado and .sthlp files from the Github repository .
Using the dimensions() option is tricky; see the API for a guide.
Entry point for documented census and survey datasets, and access to microdata
For an introduction on how to use the Microdata Library, see this video:
The Microdata Library is accessible using the following URL:
A link is available from the "Tools" menu:
PDH Data Catalogue entries have a link to visualize the data at its source (Microdata Library):
The Microdata Library Central Data Catalog allows users to search for published studies (censuses and surveys).
A keyword search returns datasets containing the given keyword(s) in titles and descriptions. The variable description can also be used to search in more detail.
Study Description - details all components of a study (census or survey) from identification to metadata production.
Documentation - provides access to questionnaires, reports, technical documents etc.
Data Description - provides a full listing of all datasets and variables created as part of the study.
Clicking on a data file provides a listing of all variables. Clicking on a variable then displays a profile of that variable's responses.
To access microdata (where available), create an account and then follow the data request instructions.
The application is also accessible in several ways from the :
The Microdata Library is registered in the Data Catalogue and can also be found using its various search functionalities. For more information see .
The PDH.stat API is an efficient way of accessing Pacific data for a variety of use cases.
Here we outline a few different types of use cases, so that you can learn how to best leverage the API's functionality for your needs.
If you want to access specific data for use in a web application, to produce a "live" chart, or for some other purpose, you will need to construct an API request.
In general, you will use the HTTP GET method with the data endpoint to request a dataflow (i.e. dataset). You'll use the API's path and query parameters to define the exact data you want, the time period, the response format, and so on.
The basic template for the API data request URL is defined below.
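The template, as given in the data-method entry of the API reference further down this page:

```
https://stats-nsi-stable.pacificdata.org/rest/data/{flow}/{key}/{provider}[?startPeriod][&endPeriod][&dimensionAtObservation][&detail][&format]
```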
The key parameter
The key parameter is the primary way of filtering the exact data you're looking for in a dataflow (dataset). The keyword all can be used to indicate that all data should be returned. The allowable values for key will change depending on the selected dataflow. In general it is a series of parameters separated by the . symbol. Where there are two points in a row, it indicates a "wildcard" for that parameter. To select several values for a parameter, separate them with a + sign.
Examples of the key parameter for different dataflows:
- FJ+KI.A : (Population dataflow) Annual population figures for Fiji and Kiribati
- AG_LND_TOTL..GU.KM2 : (Pocket Summary dataflow) Land area in square kilometres in Guam
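Because keys are easy to mistype, a tiny helper (purely illustrative, not part of the API) can assemble them from per-dimension code lists:

```python
def build_key(*dimensions):
    """Join per-dimension code lists into an SDMX key.

    Each argument is a list of codes for one dimension; an empty list
    becomes a wildcard (nothing between the dots), and multiple codes
    for a dimension are joined with '+'.
    """
    return ".".join("+".join(codes) for codes in dimensions)

# Annual population figures for Fiji and Kiribati (Population dataflow):
print(build_key(["FJ", "KI"], ["A"]))                    # FJ+KI.A

# Land area in km2 for Guam (Pocket Summary dataflow);
# the empty list produces the wildcard between the two dots:
print(build_key(["AG_LND_TOTL"], [], ["GU"], ["KM2"]))   # AG_LND_TOTL..GU.KM2
```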
Example: Retrieving population data for two countries
In this example, we want data on population estimates of Fiji and New Caledonia over time (1970 to 2018). The following API request will return the SDMX data we're after:
Path Parameters
data : indicates that we want to access a given "dataflow"
DF_POP_PROJ : the flowID parameter; uniquely identifies the SDMX "dataflow" ("Population Projections" in this case)
A.NC+FJ... : the key parameter; filters results to get Annual data about New Caledonia (NC) and Fiji (FJ). The points represent "wildcards" for other optional filters (such as sex, age, and type of population indicator), meaning that we will get data on all of those options.
Query Parameters
startPeriod=1970 : the startPeriod parameter; gets data starting from 1970
endPeriod=2018 : the endPeriod parameter; gets data up until 2018
Data requests can specify what the response data format should be: json, csv, SDMX etc. In this example, we will get the same population data as in the above example, but we want the response in a JSON format. The following API request will return the JSON data we're after:
Path Parameters
data : indicates that we want to access a given "dataflow"
DF_POP_PROJ : the flowID parameter; uniquely identifies the SDMX "dataflow" ("Population Projections" in this case)
A.NC+FJ... : the key parameter; filters results to get Annual data about New Caledonia (NC) and Fiji (FJ). The points represent "wildcards" for other optional filters (such as sex, age, and type of population indicator), meaning that we will get data on all of those options.
Query Parameters
startPeriod=1970 : the startPeriod parameter; gets data starting from 1970
endPeriod=2018 : the endPeriod parameter; gets data up until 2018
format=jsondata : the format parameter; defines the desired response format (jsondata, csv, genericdata etc.)
Sample Code
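A sketch of this request using only the Python standard library; the request object is built but not sent (uncomment the last lines to fetch). The Accept header value comes from the API reference below and is shown alongside format=jsondata purely for illustration; either alone would select SDMX-JSON.

```python
from urllib.request import Request

FLOW = "DF_POP_PROJ"
KEY = "A.NC+FJ..."  # annual, New Caledonia + Fiji, wildcards for the rest
URL = (f"https://stats-nsi-stable.pacificdata.org/rest/data/{FLOW}/{KEY}/all"
       "?startPeriod=1970&endPeriod=2018&format=jsondata")

# Build the request with an SDMX-JSON Accept header:
req = Request(URL, headers={"Accept": "application/vnd.sdmx.data+json;version=2.1"})
print(req.full_url)

# To actually send the request and decode the response:
#   import json
#   from urllib.request import urlopen
#   with urlopen(req) as resp:
#       payload = json.load(resp)
```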
Depending on your application's needs, you may want to retrieve information on all existing dataflows. The HTTP GET method is used with the dataflow endpoint to get this type of information.
The basic template for the API dataflow request URL is defined below.
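The template, as given in the dataflow-method entry of the API reference further down this page:

```
https://stats-nsi-stable.pacificdata.org/rest/dataflow/{agencyID}/{resourceID}/{version}[?references][&detail]
```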
Example: Retrieve all dataflows for a specific agency
In this example, we want to know all existing dataflows maintained by SPC, and details about those dataflows. The following API request will return the SDMX data we're after:
Path Parameters
dataflow : indicates that we want information on dataflows
SPC : the agencyID parameter; uniquely identifies the agency whose dataflows we want to request
all : the resourceID parameter; retrieves all resources
latest : the version parameter; gets the latest version of the resource
Query Parameters
detail=full : the detail parameter; determines what sort of information about resources is returned
Sample Code
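A minimal sketch that assembles this request URL in Python (illustrative only; the defaults mirror the example above):

```python
# Request metadata on all dataflows maintained by SPC, latest versions,
# with full detail (path and query parameters as described above).
BASE = "https://stats-nsi-stable.pacificdata.org/rest"

def dataflow_request_url(agency="SPC", resource="all",
                         version="latest", detail="full"):
    return f"{BASE}/dataflow/{agency}/{resource}/{version}?detail={detail}"

print(dataflow_request_url())
```

Passing a specific ID (e.g. resource="DF_CPI") narrows the request to a single dataflow.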
If you're making a "one-off" data request about a certain topic, the may be the easiest way to go about it. The intuitive interface allows you to choose a dataset, adjust filters and view the results in a table or chart. You can then produce the API request URL for the actual SDMX data you've accessed. This makes it useful for API query building and is explained in a .
Note: building the key parameter can be difficult, so the Data Explorer can be used to visually select/filter data, and then see the matching API request URL. This can make key-building much easier. See it .
For detailed implementations of API use, see .
For example code which retrieves all dataflowIds, see .
Pacific Data Hub .Stat API. Access macrodata datasets about the Pacific region. Data is available in XML, JSON and CSV formats.
GET
https://stats-nsi-stable.pacificdata.org/rest/data/{flow}/{key}/{provider}[?startPeriod][&endPeriod][&dimensionAtObservation][&detail][&format]
This method retrieves the data observations for a dataflow, based on various filters.
flow
string
The statistical domain (dataflow) of the data to be returned.
Examples:
DF_SDG
: The ID for Sustainable Development Goals dataflow
DF_CPI
: The ID for Consumer Price Index dataflow
DF_POCKET
: The ID for Pocket Summary dataflow
DF_POP_SUM
: The ID for Population dataflow
DF_IMTS
: The ID for International Merchandise Trade Statistics dataflow
key
string
The (possibly partial) key identifying the data to be returned.
The keyword all
can be used to indicate that all data belonging to the specified dataflow and provided by the specified provider must be returned. The allowable values for key will change depending on the selected dataflow. In general, it is a series of parameters separated by the .
sign. Where there are 2 points in a row, it indicates a "wildcard" for that parameter. To select several values as a parameter, separate them with a +
sign.
provider
string
The agency maintaining the artefact to be returned (i.e. SPC
).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
startPeriod
string
The start of the period for which results should be supplied (inclusive). Can be expressed using ISO 8601 dates or SDMX reporting periods.
Examples:
2000
: Year (ISO 8601)
2000-01
: Month (ISO 8601)
2000-01-01
: Date (ISO 8601)
2000-Q1
: Quarter (SDMX)
2000-W01
: Week (SDMX)
endPeriod
string
The end of the period for which results should be supplied (inclusive). Can be expressed using ISO 8601 dates or SDMX reporting periods.
Examples:
2000
: Year (ISO 8601)
2000-01
: Month (ISO 8601)
2000-01-01
: Date (ISO 8601)
2000-Q1
: Quarter (SDMX)
2000-W01
: Week (SDMX)
dimensionAtObservation
string
Indicates how the data should be packaged.
The options are:
TIME_PERIOD
: A timeseries view
The ID of any other dimension: A cross-sectional view of the data
AllDimensions
: A flat view of the data
detail
string
The amount of information to be returned.
Possible options are:
full
: All data and documentation
dataonly
: Everything except attributes
serieskeysonly
: The series keys. This is useful to return the series that match a certain query, without returning the actual data (e.g. overview page)
nodata
: The series, including attributes and annotations, without observations
format
string
The data format to be returned.
Possible options are:
jsondata
csv
genericdata
structure
structurespecificdata
Accept-Language
string
Specifies the client's preferred language.
If-Modified-Since
string
Takes a date-time (RFC3339 format) as input and returns the content matching the query only if it has changed since the supplied timestamp.
Accept
string
Specifies the format of the API response.
Possible options are:
application/vnd.sdmx.genericdata+xml;version=2.1
: returns SDMX-XML format
application/vnd.sdmx.data+json;version=2.1
: returns SDMX-JSON format
application/vnd.sdmx.data+csv;version=2.1
: returns SDMX-CSV format
Accept-Encoding
string
Specifies whether the response should be compressed and how.
identity (the default) indicates that no compression will be performed.
GET
https://stats-nsi-stable.pacificdata.org/rest/dataflow/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves a dataflow (or many dataflows), and the associated metadata, including the name, description, and metadata dictionary.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned.
Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail
string
The amount of information to be returned. referencepartial
is a common value.
Possible values are:
allstubs
: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs
: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial
: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs
: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs
: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' name, description, annotations and isFinal information
full
: All available information for all artefacts should be returned
GET
https://stats-nsi-stable.pacificdata.org/rest/agencyscheme/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves information about agencies associated with the .Stat instance.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail
string
The amount of information to be returned. referencepartial
is a common value.
Possible values are:
allstubs
: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs
: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial
: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs
: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs
: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' name, description, annotations and isFinal information
full
: All available information for all artefacts should be returned
GET
https://stats-nsi-stable.pacificdata.org/rest/categorisation/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves information about categories used by dataflows.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail
string
The amount of information to be returned. referencepartial
is a common value.
Possible values are:
allstubs
: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs
: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial
: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs
: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs
: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' name, description, annotations and isFinal information
full
: All available information for all artefacts should be returned
GET
https://stats-nsi-stable.pacificdata.org/rest/categoryscheme/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves information about category schemes used by dataflows.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail
string
The amount of information to be returned. referencepartial
is a common value.
Possible values are:
allstubs
: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs
: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial
: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs
: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs
: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' name, description, annotations and isFinal information
full
: All available information for all artefacts should be returned
GET
https://stats-nsi-stable.pacificdata.org/rest/codelist/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves the codelists associated with a dataflow.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC
).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail
string
The amount of information to be returned. referencepartial
is a common value.
Possible values are:
allstubs
: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs
: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial
: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs
: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs
: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' name, description, annotations and isFinal information
full
: All available information for all artefacts should be returned
GET
https://stats-nsi-stable.pacificdata.org/rest/conceptscheme/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves information about concept schemes used by dataflows.
agencyID
string
The agency maintaining the artefact to be returned (i.e. SPC
).
It is possible to set more than one agency, using +
as separator (e.g. SPC+ECB
).
The keyword all
can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID
string
The ID of the artefact to be returned.
It is possible to set more than one ID, using +
as separator (e.g. CL_FREQ+CL_CONF_STATUS
).
The keyword all
can be used to indicate that any artefact of the specified resource type should be returned.
version
string
The version of the artefact to be returned.
It is possible to set more than one version, using +
as separator (e.g. 1.0+2.1
).
The keyword all
can be used to return all versions of the matching resource.
The keyword latest
can be used to return the latest production version of the matching resource.
references
string
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned.
Possible values are:
none
: No references will be returned
parents
: Returns the artefacts that use the artefact matching the query
parentsandsiblings
: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children
: Returns the artefacts referenced by the artefact to be returned
descendants
: References of references, up to any level, will be returned
all
: The combination of parentsandsiblings and descendants.
In addition, a concrete type of resource may also be used (e.g. codelist).
detail (string)
The amount of information to be returned; referencepartial is a common value. Possible values are:
allstubs: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
full: All available information for all artefacts should be returned
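The path segments and query parameters above combine mechanically: multiple agencies or IDs are joined with +, the version slot accepts the all and latest keywords, and references/detail ride in the query string. The sketch below (Python; the helper name, the codelist resource type and the IDs are illustrative assumptions, not part of the service's API) builds such a query URL.

```python
# Sketch: assemble an SDMX structure query URL from the parameters above.
# "codelist" and the CL_* IDs are illustrative assumptions.
BASE = "https://stats-nsi-stable.pacificdata.org/rest"

def structure_url(resource_type, agency_ids, resource_ids, version="latest",
                  references=None, detail=None):
    """Join multiple agencies/IDs with '+' and append optional query parameters."""
    path = "/".join([BASE, resource_type,
                     "+".join(agency_ids), "+".join(resource_ids), version])
    query = [f"references={references}"] if references else []
    if detail:
        query.append(f"detail={detail}")
    return path + ("?" + "&".join(query) if query else "")

url = structure_url("codelist", ["SPC"], ["CL_FREQ", "CL_CONF_STATUS"],
                    version="all", references="none", detail="allstubs")
# .../rest/codelist/SPC/CL_FREQ+CL_CONF_STATUS/all?references=none&detail=allstubs
```

Omitting references and detail yields a bare path query, matching the optional [?references][&detail] notation in the endpoint templates.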
GET https://stats-nsi-stable.pacificdata.org/rest/contentconstraint/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves content constraints for a dataflow.
agencyID (string)
The agency maintaining the artefact to be returned (e.g. SPC). It is possible to set more than one agency, using + as separator (e.g. SPC+ECB). The keyword all can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID (string)
The ID of the artefact to be returned. It is possible to set more than one ID, using + as separator (e.g. CL_FREQ+CL_CONF_STATUS). The keyword all can be used to indicate that any artefact of the specified resource type should be returned.
version (string)
The version of the artefact to be returned. It is possible to set more than one version, using + as separator (e.g. 1.0+2.1). The keyword all can be used to return all versions of the matching resource. The keyword latest can be used to return the latest production version of the matching resource.
references (string)
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none: No references will be returned
parents: Returns the artefacts that use the artefact matching the query
parentsandsiblings: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children: Returns the artefacts referenced by the artefact to be returned
descendants: References of references, up to any level, will be returned
all: The combination of parentsandsiblings and descendants
In addition, a concrete type of resource may also be used (e.g. codelist).
detail (string)
The amount of information to be returned; referencepartial is a common value. Possible values are:
allstubs: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
full: All available information for all artefacts should be returned
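A content-constraint query follows the same pattern: agency, resource ID and version as path segments, references and detail in the query string. A minimal sketch (the helper name and its defaults are assumptions for illustration; substitute a real dataflow's constraint ID for the all keyword to target one dataflow):

```python
# Sketch: build a content-constraint query URL for the endpoint above.
from urllib.parse import urlencode

BASE = "https://stats-nsi-stable.pacificdata.org/rest/contentconstraint"

def content_constraint_url(agency="SPC", resource="all", version="latest",
                           references="none", detail="full"):
    # agency/resource/version are path segments; references/detail are
    # query parameters, URL-encoded for safety.
    return f"{BASE}/{agency}/{resource}/{version}?" + urlencode(
        {"references": references, "detail": detail})

url = content_constraint_url()
# .../rest/contentconstraint/SPC/all/latest?references=none&detail=full
```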
GET https://stats-nsi-stable.pacificdata.org/rest/datastructure/{agencyID}/{resourceID}/{version}[?references][&detail]
This method retrieves a data structure definition.
agencyID (string)
The agency maintaining the artefact to be returned (e.g. SPC). It is possible to set more than one agency, using + as separator (e.g. SPC+ECB). The keyword all can be used to indicate that artefacts maintained by any maintenance agency should be returned.
resourceID (string)
The ID of the artefact to be returned. It is possible to set more than one ID, using + as separator (e.g. CL_FREQ+CL_CONF_STATUS). The keyword all can be used to indicate that any artefact of the specified resource type should be returned.
version (string)
The version of the artefact to be returned. It is possible to set more than one version, using + as separator (e.g. 1.0+2.1). The keyword all can be used to return all versions of the matching resource. The keyword latest can be used to return the latest production version of the matching resource.
references (string)
Instructs the web service to return (or not return) the artefacts referenced by the artefact to be returned. Possible values are:
none: No references will be returned
parents: Returns the artefacts that use the artefact matching the query
parentsandsiblings: Returns the artefacts that use the artefact matching the query, as well as the artefacts referenced by these artefacts
children: Returns the artefacts referenced by the artefact to be returned
descendants: References of references, up to any level, will be returned
all: The combination of parentsandsiblings and descendants
In addition, a concrete type of resource may also be used (e.g. codelist).
detail (string)
The amount of information to be returned; referencepartial is a common value. Possible values are:
allstubs: All artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencestubs: Referenced artefacts should be returned as stubs, containing only identification information, as well as the artefacts' name
referencepartial: Referenced item schemes should only include items used by the artefact to be returned. For example, a concept scheme would only contain the concepts used in a DSD, and its isPartial flag would be set to true
allcompletestubs: All artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
referencecompletestubs: Referenced artefacts should be returned as complete stubs, containing identification information, the artefacts' names, descriptions, annotations and isFinal information
full: All available information for all artefacts should be returned
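A common use of this endpoint is fetching a DSD together with everything it references in one call, with references=children and detail=referencepartial so each referenced item scheme is trimmed to just the items the DSD uses. A sketch under those assumptions (the helper name and the DSD ID are illustrative, not real PDH.stat identifiers):

```python
# Sketch: build a DSD query that also pulls in the artefacts it references.
# Generic keywords for the references parameter; note the service also accepts
# a concrete resource type (e.g. "codelist"), which this check does not cover.
VALID_REFERENCES = {"none", "parents", "parentsandsiblings",
                    "children", "descendants", "all"}

def datastructure_url(agency, dsd_id, version="latest",
                      references="children", detail="referencepartial"):
    if references not in VALID_REFERENCES:
        raise ValueError(f"unknown references value: {references!r}")
    return ("https://stats-nsi-stable.pacificdata.org/rest/datastructure/"
            f"{agency}/{dsd_id}/{version}?references={references}&detail={detail}")

# "DSD_EXAMPLE" is an illustrative placeholder, not a real PDH.stat DSD ID.
url = datastructure_url("SPC", "DSD_EXAMPLE")
# .../rest/datastructure/SPC/DSD_EXAMPLE/latest?references=children&detail=referencepartial
```

With detail=referencepartial, each returned concept scheme or codelist carries isPartial set to true and contains only the items the DSD actually uses, which keeps the response small.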