Tag Archives: dfp_pql

LineItemService, or Line_Item PQL Table?

One of the most used services in the DFP API is the LineItemService. Many of you are already utilizing the Line_Item table in the PublisherQueryLanguageService to create match tables on fields like Status or ExternalId, but with newer API versions, more and more fields are available as columns. Did you know that as of v201411 the Line_Item table includes a column for Targeting? With so many line item fields now accessible through PQL, the Line_Item table might be a viable replacement for your read operations.
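To give you an idea of what that looks like in practice, here's a minimal sketch using the googleads Python client library. It assumes credentials in a googleads.yaml file, and the column list is just an example of what's available.

from googleads import dfp

# Load credentials from a googleads.yaml file (assumed to be configured).
client = dfp.DfpClient.LoadFromStorage()
downloader = client.GetDataDownloader(version='v201411')
# Read line items, Targeting included, straight from the Line_Item PQL
# table; the downloader pages through the result set in LIMIT/OFFSET batches.
results = downloader.DownloadPqlResultToList(
    'SELECT Id, Name, Status, Targeting FROM Line_Item ORDER BY Id ASC')
# The first row holds the column headers; the rest are line item rows.
print('Downloaded %d line items.' % (len(results) - 1))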

What's the advantage? Faster response times. As an example, I pulled 5,000 line items from a network using both the LineItemService and the Line_Item PQL Table, printing page offsets as the results arrived. Take a look at the results:

[Chart: response times for LineItemService vs. the Line_Item PQL table]

* Actual response times may vary. Line item fields only available in participating PQL Tables.

Using the PublisherQueryLanguageService shaved off 17 seconds for a respectable speed increase of 15%.

However, if your application doesn't need some of the heavier fields, you'll see a much bigger gain. Check out what happens when we leave out Targeting:

[Chart: response times with the Targeting column omitted]

The sparse selection offered by the PublisherQueryLanguageService means our data size is smaller, cutting the total time by a whopping 45%!
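A sparse statement along these lines (the exact column list is just an illustration) skips the heavyweight Targeting column entirely:
SELECT Id, Name, Status FROM Line_Item LIMIT 500 OFFSET 0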

If you're looking for a performance boost in your LineItem read operations, give the Line_Item table a try. We've got example code in each of our client libraries to get you started. If you have any questions, don't hesitate to reach out to us on our API forums.

New PQL tables in the DFP API

In the recent DFP API releases, we announced the addition of more tables to the PublisherQueryLanguageService, starting with Line_Item and Ad_Unit. These tables are an alternative to retrieving entities from their respective services’ get***ByStatement methods: they allow you to retrieve sparse entities containing only the fields you’re interested in. For example, the following select statement retrieves the first page of line items that are missing creatives, returning only their ID and name.
SELECT Id, Name FROM Line_Item WHERE IsMissingCreatives = true LIMIT 500 OFFSET 0
In this blog post, we’ll go over some situations where this feature can be utilized to speed up entity retrieval times from hours to minutes.

Entity synchronization


The first major use case that benefits from these new tables is entity synchronization. For example, if you’re synchronizing line items on your network into a local database, you’re most likely using LineItemService.getLineItemsByStatement and, hopefully, taking advantage of the LineItem.lastModifiedDateTime field to filter for only the line items that have changed since your last synchronization. But even with lastModifiedDateTime, this synchronization can still take a while, depending on how many line items you have on your network and how complex their targeting is. If you don’t need to synchronize all the fields on your line item objects, you may be able to use the Line_Item PQL table to perform this synchronization instead.
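Here’s a rough sketch of that approach with the googleads Python client library; the column list is an example, and upsert_line_item is a hypothetical local-database helper.

from googleads import dfp

client = dfp.DfpClient.LoadFromStorage()
downloader = client.GetDataDownloader(version='v201411')
# Pull only the fields your local database actually stores; if your API
# version exposes a LastModifiedDateTime column, you can also add a WHERE
# clause on it to fetch just the recently changed rows.
rows = downloader.DownloadPqlResultToList(
    'SELECT Id, Name, Status, IsMissingCreatives FROM Line_Item')
for row in rows[1:]:  # The first row is the column headers.
    upsert_line_item(row)  # Hypothetical persistence helper.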

If you do need to synchronize fields not yet available in the Line_Item table, such as targeting, you can still take advantage of this table for computed fields that don’t affect lastModifiedDateTime, such as LineItem.status. What you can do is synchronize your line items as usual with getLineItemsByStatement filtering on lastModifiedDateTime. Then update your local statuses with selected line item statuses from the Line_Item table (a very quick process):
SELECT Id, Status FROM Line_Item LIMIT 500 OFFSET 0
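Reusing the downloader from the sketch above, that status refresh boils down to one paged select plus a local update (update_local_status is again a hypothetical stand-in):

rows = downloader.DownloadPqlResultToList('SELECT Id, Status FROM Line_Item')
for row in rows[1:]:  # The first row is the column headers.
    update_local_status(row[0], row[1])  # Hypothetical local-database helper.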

Match tables for reports


Local copies of line item information can also be used as match tables to construct more detailed reports. Sometimes, you may want more information in your reports than what is currently available as a dimensionAttribute. For example, if you run a report by line item ID, you may also want other line item information like isMissingCreatives to show in the report. Because LineItem.isMissingCreatives is unavailable as a DimensionAttribute, you can create a local match table containing line item IDs and additional columns to be included in the report. Then you can merge this match table with the report by the line item ID to obtain a report with those additional columns.

For example, let’s say you run a report with the following configuration:
Dimension.LINE_ITEM_ID
DimensionAttribute.LINE_ITEM_COST_TYPE
Column.AD_SERVER_IMPRESSIONS
The report in CSV_DUMP format looks something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE, Column.AD_SERVER_IMPRESSIONS
1234567, CPM, 206
1234568, CPD, 45
1234569, CPD, 4
To also include LineItem.isMissingCreatives in the report, you would fetch a match table and save it (as a CSV file for example) by retrieving ID and isMissingCreatives from the Line_Item table.
SELECT Id, IsMissingCreatives FROM Line_Item LIMIT 500 OFFSET 0
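As a sketch of how you might save it, the googleads Python library’s DataDownloader can page through the whole table and write the CSV for you; the file name here is arbitrary.

from googleads import dfp

client = dfp.DfpClient.LoadFromStorage()
downloader = client.GetDataDownloader(version='v201411')
# Stream the full Line_Item match table into a local CSV file.
with open('line_item_match_table.csv', 'w') as csv_file:
    downloader.DownloadPqlResultToCsv(
        'SELECT Id, IsMissingCreatives FROM Line_Item', csv_file)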
Full examples of how to fetch match tables are available in all our client libraries. Then, using a script or a spreadsheet program, merge the match table with the report to produce something like this:
Dimension.LINE_ITEM_ID, DimensionAttribute.LINE_ITEM_COST_TYPE, Column.AD_SERVER_IMPRESSIONS, LineItem.isMissingCreatives
1234567, CPM, 206, true
1234568, CPD, 45, false
1234569, CPD, 4, false
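The merge itself needs nothing DFP-specific. A short script along these lines, with the file names assumed from the steps above, joins the two CSVs on line item ID:

import csv

# Load the match table into an {id: is_missing_creatives} lookup.
with open('line_item_match_table.csv') as f:
    reader = csv.reader(f)
    next(reader)  # Skip the header row.
    match_table = dict((row[0], row[1]) for row in reader)
# Append the looked-up value to every row of the report.
with open('report.csv') as report, open('merged_report.csv', 'w') as out:
    reader = csv.reader(report)
    writer = csv.writer(out)
    writer.writerow(next(reader) + ['LineItem.isMissingCreatives'])
    for row in reader:
        writer.writerow(row + [match_table.get(row[0].strip(), '')])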
If you have any questions on these new PQL tables, or suggestions on what PQL tables you want in the next release, please let us know on the API forum, or on our Google+ Developers page.

Smarter Querying using Pagination with the DFP API

As your networks grow, so does their data on the DFP servers. Where you previously made requests for tens of line items, you now find yourself requesting tens of thousands. Of course, with more data comes more responsibility: your requests take longer, and the response sizes have grown accordingly. You notice that some of your requests are now returning with 'ServerError.SERVER_ERROR.' Things might seem hopeless, but don't panic...

Many of these problems can be solved with pagination! What does this mean from a developer's perspective? In a large number of implementations, we've noticed that applications call methods like these with empty filter statements:
getCreativesByStatement(" ")
getLineItemsByStatement(" ")
getOrdersByStatement(" ")
getCustomTargetingValuesByStatement(" ")
These requests do not limit the size of the returned result set, so the application is effectively asking for the data of every single object belonging to that service. When you're talking about thousands of line items, each with its own distinct custom targeting, the amount of data will often cause the request to fail.

The fix? When creating PQL statements to query for DFP objects, you'll find our client libraries all use a recommended page size (500) to break your queries into smaller batches with the 'LIMIT' keyword, which should feel familiar to anyone who's used SQL. After the first page returns successfully, use the 'OFFSET' keyword to retrieve each subsequent page until a request comes back empty. If the calls still take a long time to return a page, or still fail, try a smaller page size.
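In the googleads Python library, for example, the loop looks roughly like this; FilterStatement and SUGGESTED_PAGE_LIMIT are that library's paging helpers, and process is a hypothetical per-item handler.

from googleads import dfp

client = dfp.DfpClient.LoadFromStorage()
line_item_service = client.GetService('LineItemService', version='v201411')
# FilterStatement defaults to the suggested page size of 500.
statement = dfp.FilterStatement()
while True:
    # Fetch one page of line items at the current offset.
    response = line_item_service.getLineItemsByStatement(
        statement.ToStatement())
    if 'results' in response:
        for line_item in response['results']:
            process(line_item)  # Hypothetical per-item handler.
        # Advance the offset; after a failure you can resume from here.
        statement.offset += dfp.SUGGESTED_PAGE_LIMIT
    else:
        break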

If you use pagination to retrieve data, you not only gain reliability, but also protect yourself should something go wrong: instead of retrying the entire request from the beginning, you can simply pick up where you left off.

To see how to implement pagination logic, you can find examples in each of our client libraries:
Ruby
Java
PHP
Python
.NET
If you have any questions on using pagination with your queries, post them on the API forum or Google+ Developers page.

- DFP API Team