Data Integrations - Importing Data

Many partner institutions transfer data bidirectionally between Slate and their institutional SIS, and between Slate and other databases as needed. These integrations use file transfer over SFTP or RESTful web services.

Data feeds are generally limited to applicant and financial aid data. Most commonly, the data exchange is implemented as a scheduled transfer of flat files through an SFTP server. The institution dictates the specifications for the exchange, and value and code translations (for country codes, major codes, term codes, etc.) happen within Slate.

The data points sent from Slate to the campus system typically include student biographical and demographic data, as well as key application components that are needed outside of admissions (for example, entry term, admission plan, or admission decision). A Slate ID is also sent, along with a placeholder for the Campus or Institutional ID.

A return feed is then provided in which the Slate ID is paired with the Institutional ID of the matched or newly created record. Subsequent data feeds from Slate into the external system then include that identifier for direct matching.
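
To make the round trip concrete, here is a minimal Python sketch using hypothetical column names (the actual layout is whatever your feed specification defines): it writes an outbound feed with the Slate ID populated and the Institutional ID left blank, then reads the return feed to map each Slate ID to the Institutional ID assigned by the campus system.

    import csv

    # Hypothetical column names: the actual layout is defined by your feed specification.
    OUTBOUND_COLUMNS = ["slate_id", "institutional_id", "last", "first",
                        "entry_term", "admission_plan", "decision"]

    def write_outbound_feed(records, path="outbound_feed.csv"):
        """Write the Slate-to-SIS feed with the Institutional ID left blank."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=OUTBOUND_COLUMNS)
            writer.writeheader()
            for rec in records:
                rec.setdefault("institutional_id", "")  # placeholder for the campus ID
                writer.writerow(rec)

    def read_return_feed(path="return_feed.csv"):
        """Map each Slate ID to the Institutional ID assigned by the campus system."""
        with open(path, newline="") as f:
            return {row["slate_id"]: row["institutional_id"]
                    for row in csv.DictReader(f)}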

This article describes options and best practices for this type of data exchange and provides links to additional, related documentation for each method.

Slate has integrated successfully with a wide variety of software applications, including SISs such as Banner, PeopleSoft, and Colleague, as well as business intelligence solutions and enterprise content management systems.

 Tip

For information specific to your SIS or external system, your peer institutions may be your best source of information, and we encourage reaching out through the Slate Community Forum.

Batched Imports

Upload Dataset is Slate's import tool. Most frequently, institutions deliver import files to an /incoming/ directory on Slate's SFTP servers, which are polled frequently (at least once every 15 minutes). Files matching a specified filename mask are routed into the Upload Dataset interface, where the predefined import format handles all value and code translations. This helps ensure that year-over-year changes made to accommodate new fields or values are straightforward and can be handled within admissions.
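
As an illustration, a scheduled job on the institution's side might deliver a file as in the Python sketch below, which uses the paramiko library. The host, credentials, and filename are placeholders; substitute the SFTP account details and filename mask configured for your Slate environment.

    import paramiko

    # Placeholder host, credentials, and filename: substitute the SFTP account
    # and filename mask configured for your Slate environment.
    HOST = "ft.example.edu"
    USERNAME = "sis_export"
    PASSWORD = "********"

    def deliver_to_slate(local_path, remote_name):
        """Upload an import file into the /incoming/ directory that Slate polls."""
        transport = paramiko.Transport((HOST, 22))
        try:
            transport.connect(username=USERNAME, password=PASSWORD)
            sftp = paramiko.SFTPClient.from_transport(transport)
            sftp.put(local_path, "/incoming/" + remote_name)
            sftp.close()
        finally:
            transport.close()

    deliver_to_slate("applicants_20250101.csv", "applicants_20250101.csv")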

It is also possible for Slate to poll a remote SFTP server, but that process is only as reliable as the remote server's availability. Because we can ensure that our servers remain highly available, the process is usually most reliable when using our infrastructure.

  What file layouts can Slate consume?

Slate can consume Excel spreadsheets, delimited text files, fixed-width files, XML, and JSON. We typically recommend delimited files with column headers, since columns can be added or removed at any time without negatively impacting the import process within Slate. This allows for asynchronous changes to the data feed specifications.

Web Services

Pulling from a Remote Endpoint Into Slate's Upload Dataset

This option allows Slate to poll external web services for new data and then process that data through the Upload Dataset interface, just as if the files were transferred via SFTP. The retrieved data can be XML but can also be delimited text.
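
On the institution's side, such an endpoint could be as simple as a small web service that returns delimited data. The Python sketch below uses Flask with a hypothetical route and hard-coded rows to show the general shape; the actual URL, authentication, and data source are whatever you configure and expose to Slate.

    from flask import Flask, Response

    app = Flask(__name__)

    # Hypothetical route: the actual URL is whatever endpoint you point Slate at.
    @app.route("/exports/applicants")
    def applicant_export():
        # In practice, generate these rows from your SIS or data warehouse.
        rows = [
            "slate_id,institutional_id,entry_term,decision",
            "123456789,EMP0001,Fall 2025,Admit",
        ]
        return Response("\n".join(rows) + "\n", mimetype="text/csv")

    if __name__ == "__main__":
        app.run(port=8000)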

Since the data updates are processed through our Upload Dataset mechanism, changes to records can be queued, batched, and run in the most efficient manner possible, which minimizes or eliminates any potential for observable record locking.

Pushing Data into Upload Dataset through a Web Service Endpoint

This option uses web services to post files into Slate that are then processed by the Upload Dataset mechanism, just as if the files were transferred via SFTP. The result is the same as pulling from a remote endpoint; only the direction of the transfer differs.
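
A minimal sketch of the push, using Python and the requests library, might look like the following. The endpoint URL and credentials are placeholders for the values provided in your Slate environment's web services configuration.

    import requests

    # Placeholder URL and credentials: use the endpoint and service account
    # provided in your Slate environment's web services configuration.
    ENDPOINT = "https://apply.example.edu/manage/service/import"
    AUTH = ("service_account", "********")

    def push_file(path):
        """POST a delimited file so it can be queued for Upload Dataset processing."""
        with open(path, "rb") as f:
            response = requests.post(
                ENDPOINT,
                auth=AUTH,
                data=f,
                headers={"Content-Type": "text/csv"},
                timeout=60,
            )
        response.raise_for_status()

    push_file("applicants_20250101.csv")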

Document Imports

We recommend that files be sent using the industry-standard Document Import Processor (DIP) approach, where a zip archive is generated containing PDFs or TIFFs of the documents to be imported, along with an index file containing the filename of each document as well as any associated metadata parameters (such as EMPLID and document type). Slate can then extract the documents and index file to import the documents into the appropriate student records. 
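
A rough Python sketch of assembling such an archive follows, assuming a CSV index with hypothetical column names; use whatever index layout your import format expects.

    import csv
    import io
    import zipfile
    from pathlib import Path

    def build_dip_archive(documents, archive_path="documents.zip"):
        """Bundle PDFs and an index file into a single zip archive.

        documents is a list of (pdf_path, emplid, document_type) tuples; the
        index columns here are illustrative.
        """
        index = io.StringIO()
        writer = csv.writer(index)
        writer.writerow(["filename", "emplid", "document_type"])
        with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for pdf_path, emplid, doc_type in documents:
                name = Path(pdf_path).name
                zf.write(pdf_path, arcname=name)        # add the PDF itself
                writer.writerow([name, emplid, doc_type])
            zf.writestr("index.csv", index.getvalue())   # add the index last

    build_dip_archive([("transcripts/123456.pdf", "123456", "Transcript")])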

  Best Practice

We recommend delivering import documents in a zip file using SFTP, since SFTP is much more efficient at transmitting a single file (such as a zip archive) than thousands of individual files. While documents could be imported using web services, we advise that imports be handled using SFTP, since a zip archive containing numerous PDFs could be quite large.

We also prefer PDFs to TIFFs, since a digital PDF of non-scanned data is a fraction of the size of a TIFF file. A TIFF file is a rasterized/bitmapped image without digital text content, and thus cannot be enlarged beyond its original resolution without a loss of fidelity.

Integration Goals

Ultimately, the goal is to design a process that is reliable, stable, supportable, and sustainable for your institution.

  • Reliability: The integration process can be more reliable when more of it lives within the Slate infrastructure, so an export to our SFTP servers is typically preferred.
  • Stability: The integration process should run without constant human intervention. In other words, you should be able to “set it and forget it.” It should not need to change with any frequency. This means that code and value translations should not live within the query itself.
  • Supportability and sustainability: The integration process should minimize the use of custom SQL and keep value translations inside Slate but outside of the query itself, enabling the people closest to the process (such as admissions staff) to manage the periodic changes that may be necessary because of new terms, programs, majors, etc.