Improving Developer Experience for Writing Structured Data

Though we’re still waiting on the full materialization of the promise of the Semantic Web, search engines—including Google—are heavy consumers of structured data on the web through Schema.org. In 2015, pages with Schema.org markup accounted for 31.3% of the web. Among SEO communities, interest in Schema.org and structured data has been on the rise in recent years.

Yet, as the use of structured data continues to grow, the developer experience in authoring pieces of structured data remains spotty. I ran into this as I was trying to write my own snippets of JSON-LD. It turns out, the state-of-the-art way of writing JSON-LD is to: read the Schema.org reference; try writing a JSON literal on your own; when you think you’re done, paste the JSON into a validator (like Google’s structured data testing tool); see what’s wrong, fix; and repeat, as needed.

If it’s your first time writing JSON-LD, you might spend a few minutes figuring out how to represent an enum or boolean, looking for examples as needed.

Enter schema-dts

My experience left me feeling that things could be improved; writing JSON-LD should be no harder than writing any other JSON that is constrained by a schema. This led me to create schema-dts (npm, github), a TypeScript-based library (and an optional codegen tool) with type definitions of the latest Schema.org JSON-LD spec.

The thinking was this: Just as IDEs (and, later, language server protocols for lightweight code editors) supercharge our developer experience with as-you-type error highlighting and code completions, we can supercharge the experience of writing those JSON-LD literals.

With IDEs and language server protocols, the write-test-debug loop was made much tighter. Developers get immediate feedback on the basic correctness of the code they write, rather than having to save sporadically and feed their code to a compiler for that feedback. With schema-dts, we try to take validators like the structured data testing tool out of the critical path of write-test-debug. Instead, you can use a library to type-check your JSON, reporting errors as you type, and offering completions for `@type`s, property names, and their values.


Thanks to TypeScript’s structural typing and discriminated unions, the general shape of Schema.org’s JSON-LD can be well-represented in TypeScript typings. I have previously described the type theory behind creating a TypeScript structure that expresses the Schema.org class structure, enumerations, `DataType`s, and properties.
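To make the idea concrete, here is a minimal sketch of how a discriminated union on `@type` can model Schema.org classes. These hand-written types are illustrative stand-ins, not the actual schema-dts definitions:

```typescript
// A minimal, hand-written sketch (NOT the real schema-dts typings) of how a
// discriminated union on "@type" models Schema.org classes in TypeScript.
type Person = {
  "@type": "Person";
  name?: string;
  birthDate?: string; // Schema.org Date, written as an ISO 8601 string
};

type Organization = {
  "@type": "Organization";
  name?: string;
  legalName?: string;
};

// "Thing" as the union of its subclasses: the "@type" literal is the
// discriminant that lets the compiler narrow which properties are allowed.
type Thing = Person | Organization;

const author: Thing = {
  "@type": "Person",
  name: "Eyas Sharaiha",
  // legalName: "...",  // compile-time error: not a property of Person
};

console.log(JSON.stringify(author));
```

Because the discriminant is an ordinary JSON property, the resulting literal is itself valid JSON-LD; the types only exist at compile time.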

Schema-dts includes two related pieces: the 'default' schema-dts NPM package, which includes the latest Schema.org definitions, and the schema-dts-gen CLI, which lets you create your own typing definitions from Schema.org-like N-Triples (.nt) files. The CLI also has flags to control whether deprecated classes, properties, and enums should be included, what `@context` should be assumed by objects you write, etc.

Goals and Non-Goals

The goal of schema-dts isn’t to make type definitions that accept all legal Schema.org JSON literals. Rather, it is to make sure we provide typings that always (or almost always) result in legal Schema.org JSON-LD literals that search engines would accept. In the process, we’d like to make sure it’s as general as possible, without sacrificing type checking and useful completions.

For instance, RDF's perspective is that structured data is property-centric: the domains and ranges that the Schema.org reference lists for each property are only suggestions for how values should be inferred. RDF actually permits a value of any type to be assigned to any property. schema-dts, by contrast, will constrain you to the values Schema.org declares.

***

If you’re passionate about structured data, try schema-dts and join the conversation on GitHub!

By: Eyas Sharaiha, Geo Engineering & Open Source schema-dts Project

Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints

Posted by Dan O’Meara, Program Manager, Google Cloud Platform team

We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes, we must address features that are no longer compatible with the latest architecture and business requirements.

The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch (JavaScript example) are two such features. Our support for these features was based on an architecture that used a single shared proxy to receive requests for all APIs. As we move towards a more distributed, high-performance architecture where requests go directly to the appropriate API server, we can no longer support these global endpoints.

As a result, we will discontinue support for both of these features on January 25, 2019.

We know that these changes have customer impact and have worked to make the transition steps as clear as possible. Please see the guidance below, which will help ease the transition.

What do you need to do?

Google API Client Libraries have been regenerated to no longer make requests to the global HTTP batch endpoint. Clients using these libraries must upgrade to the latest version. Clients not using the Google API Client Libraries and/or making custom calls to the JSON-RPC endpoint or HTTP batch endpoint will need to make the changes outlined below.

JSON-RPC

To identify whether you use JSON-RPC, you can check whether you send requests to https://www.googleapis.com/rpc or https://content.googleapis.com/rpc. If you do, you should migrate.

  1. If you are using client libraries (either the Google-published libraries or other libraries) that use the JSON-RPC endpoint, switch to client libraries that speak to the API's REST endpoint.

    Example code (JavaScript)

    Before

    // json-rpc request for the list method
    gapi.client.rpcRequest('zoo.animals.list', 'v2',
    {name:'giraffe'}).execute(x=>console.log(x))

    After

    // json-rest request for the list method
    gapi.client.zoo.animals.list({name:'giraffe'}).then(x=>console.log(x))

    OR

  2. If you are not using client libraries (i.e., making raw HTTP requests):
    1. Use the REST URLs, and
    2. Change how you form the request and parse the response.

    Example code

    Before

    // Request URL (JSON-RPC)
    POST https://content.googleapis.com/rpc?alt=json&key=xxx
    // Request Body (JSON-RPC)
    [{
    "jsonrpc":"2.0","id":"gapiRpc",
    "method":"zoo.animals.list",
    "apiVersion":"v2",
    "params":{"name":"giraffe"}
    }]

    After

    // Request URL (JSON-REST)
    GET https://content.googleapis.com/zoo/v2/animals?name=giraffe&key=xxx
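For clients making raw HTTP requests, the change above can be sketched in TypeScript. The API key value and query parameters are placeholders carried over from the example; the point is only that the RPC envelope is replaced by an ordinary GET URL:

```typescript
// Sketch: building the REST URL that replaces the JSON-RPC POST above.
// "xxx" is a placeholder API key, as in the example request.
const base = "https://content.googleapis.com/zoo/v2/animals";
const params = new URLSearchParams({ name: "giraffe", key: "xxx" });
const url = `${base}?${params}`;
console.log(url);

// With JSON-RPC, results came back wrapped in a response envelope keyed by
// request id; with REST, the response body is the method's result directly:
// fetch(url).then(r => r.json()).then(list => console.log(list));
```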

HTTP batch

A batch request is homogeneous if the inner requests are addressed to the same API, even if they are addressed to different methods of that API. It is heterogeneous if the inner requests go to different APIs. Heterogeneous batching will not be supported after the turndown of the Global HTTP Batch endpoint. Homogeneous batching will still be supported, but through API-specific batch endpoints.

  1. If you are currently forming heterogeneous batch requests:
    1. Change your client code to send only homogeneous batch requests.

    Example code

    The example demonstrates how to split a heterogeneous batch request for two APIs (urlshortener and zoo) into two homogeneous batch requests.

    Before

    // heterogeneous batch request example.

    // Notice that the outer batch request contains inner API requests
    // for two different APIs.

    // Request to urlshortener API
    request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

    // Request to zoo API
    request2 = gapi.client.zoo.animals.list();

    // Request to urlshortener API
    request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

    // Request to zoo API
    request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

    // Creating single heterogeneous batch request object
    heterogeneousBatchRequest = gapi.client.newBatch();
    // adding the 4 batch requests
    heterogeneousBatchRequest.add(request1);
    heterogeneousBatchRequest.add(request2);
    heterogeneousBatchRequest.add(request3);
    heterogeneousBatchRequest.add(request4);
    // print the heterogeneous batch request
    heterogeneousBatchRequest.then(x=>console.log(x));

    After

    // Split heterogeneous batch request into two homogeneous batch requests

    // Request to urlshortener API
    request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

    // Request to zoo API
    request2 = gapi.client.zoo.animals.list();

    // Request to urlshortener API
    request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

    // Request to zoo API
    request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

    // Creating homogeneous batch request object for urlshortener
    homogeneousBatchUrlshortener = gapi.client.newBatch();

    // Creating homogeneous batch request object for zoo
    homogeneousBatchZoo = gapi.client.newBatch();

    // adding the 2 requests for urlshortener
    homogeneousBatchUrlshortener.add(request1);
    homogeneousBatchUrlshortener.add(request3);

    // adding the 2 requests for zoo
    homogeneousBatchZoo.add(request2);
    homogeneousBatchZoo.add(request4);

    // print the 2 homogeneous batch requests
    Promise.all([homogeneousBatchUrlshortener, homogeneousBatchZoo])
    .then(x=>console.log(x));

    OR

  3. If you are currently forming homogeneous batch requests:
    1. If you are using Google API Client Libraries, simply update to the latest versions.
    2. If you are using non-Google API client libraries, or no client library (i.e., making raw HTTP requests), then:
      • Change the endpoint from www.googleapis.com/batch to www.googleapis.com/batch/<api>/<version>, or
      • Simply read the value of 'batchPath' from the API's discovery doc and use that value.
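The 'batchPath' lookup above can be sketched as follows. The discovery document here is a hand-written stand-in rather than a live fetch, and its field values are illustrative; 'batchPath' and 'rootUrl' are the relevant discovery-doc fields:

```typescript
// Sketch: deriving the per-API batch endpoint from a discovery document.
// This object is a hand-written stand-in for a fetched discovery doc;
// the zoo API and its version are illustrative placeholders.
interface DiscoveryDoc {
  rootUrl: string;   // e.g. "https://www.googleapis.com/"
  batchPath: string; // e.g. "batch/zoo/v2"
}

function batchEndpoint(doc: DiscoveryDoc): string {
  // Join the API's root URL with its declared batch path.
  return doc.rootUrl + doc.batchPath;
}

const zooDiscovery: DiscoveryDoc = {
  rootUrl: "https://www.googleapis.com/",
  batchPath: "batch/zoo/v2",
};
console.log(batchEndpoint(zooDiscovery));
```

Reading 'batchPath' from the discovery doc is the more robust option, since it avoids hard-coding the API name and version into your client.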

For help on migration, consult the API documentation or tag Stack Overflow questions with the 'google-api' tag.