How PrimedIO works

With PrimedIO you can create a personalized web application that is unique and relevant for each and every user. We currently support client-side web integration via primedjs (preferred) and server-side integration via primednodejs. Mobile (iOS and Android) clients are on the roadmap and due for release in 2018.

Models and Predictions

Primed.io can deal with various types of models and predictions based on how they are represented:

Dense or sparse matrix predictions (DMP or SMP)

This category suits statistical models best. With DMP we have a score for every signal-target pair, essentially creating a dense matrix of predictions; DMP is most suited for models that are meant to be blended. With SMP the matrix is sparse and we upload only a subset of the possible signal-target pair predictions, for instance the top-N recommendations for a given userId.
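
To illustrate the difference, here is a minimal, library-agnostic Python sketch (the signal and target names are made up and this is not pyprimed API) that builds a dense prediction matrix and then prunes it to a sparse top-N variant:

# python (illustration)

signals = ["user1", "user2"]
targets = ["article1", "article2", "article3"]

# DMP: a score for every signal-target pair (dense matrix)
dense = {
	("user1", "article1"): 0.91, ("user1", "article2"): 0.40, ("user1", "article3"): 0.08,
	("user2", "article1"): 0.15, ("user2", "article2"): 0.77, ("user2", "article3"): 0.63,
}

# SMP: keep only the top-N predictions per signal (sparse matrix)
TOP_N = 2
sparse = {}
for signal in signals:
	ranked = sorted(
		((target, dense[(signal, target)]) for target in targets),
		key=lambda pair: pair[1],
		reverse=True,
	)
	for target, score in ranked[:TOP_N]:
		sparse[(signal, target)] = score

print(sparse)  # only the 2 best predictions per signal remain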

One-to-many predictions (OMP)

In certain cases we don’t want predictions to depend on a specific signal when calling personalisation. This is mainly the case for domain expert models such as staff picks, editor's top or most popular. In these cases, all predictions should ‘trigger’ for each call, regardless of end-user preferences or context. OMP provides this facility via a reserved signal key: * (asterisk). Any OMP model should include a single ‘wildcard’ signal with the * char as key, from which predictions to targets will then always be triggered.
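
Conceptually, the wildcard behaves like an extra key that is always matched. A minimal plain-Python sketch of that lookup logic (names invented for illustration; not the actual Primed.io implementation):

# python (illustration)

predictions = {
	"user1": {"article1": 0.9},                        # regular signal
	"*": {"staff_pick_1": 1.0, "staff_pick_2": 0.8},   # OMP wildcard signal
}

def matched_predictions(keys):
	# the wildcard signal "*" is always included, regardless of the provided keys
	matched = dict(predictions["*"])
	for key in keys:
		matched.update(predictions.get(key, {}))
	return matched

print(matched_predictions(["user1"]))    # staff picks plus user1's predictions
print(matched_predictions(["unknown"]))  # still includes the staff picks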

Recency

Modelling relevance as a function of time since publication can be done using the recency functionality. Each target can be assigned three values: published_at (time of target publication), recency_histogram (a discretization of the decay function into an arbitrary number of bins, with values between 0.0 ‘irrelevant’ and 1.0 ‘relevant’) and recency_xmax (the maximum number of seconds that the item remains relevant).

Setting these values determines the relevance decay over time (measured in seconds since publication date) using a histogram (which can be set to an arbitrary resolution to support almost any decay function shape). After recency_xmax seconds, the recency-coefficient will always be 0.0 (rendering the target ‘irrelevant’).

Upon calling personalisation, for each target the blend function will calculate the time difference (delta) between now and published_at in seconds. By looking up the delta in the histogram (x-axis) the blend function retrieves the corresponding y-value, which it returns as the recency-coefficient mentioned above. The recency-coefficient will be applied to the wscore right before ordering the results, ensuring the ‘irrelevant’ targets end up at the bottom of the list.

# pyprimed

targets = [
	{
		"key": "article1", 
		"value": {"title": "my article"}, 
		"published_at": "2017-10-05T14:48:00.000Z", 
		"recency_histogram":[1.0, 0.5], 
		"recency_xmax":1800
	},
	{
		"key": "article2", 
		"value": {"title": "my next article"}, 
		"published_at": "2017-11-05T14:48:00.000Z", 
		"recency_histogram":[1.0, 0.84, 0.68, 0.52, 0.36, 0.2], 
		"recency_xmax":3600
	}
]

# create a model, e.g. a collaborative filtering model
cf = pio.models.create(name="my_cf")

# create campaign and recency-enabled abvariant
c = u.campaigns.create(name="mycampaign", key="dummy.frontpage.recommendations")
c.abvariants.create(label="A", models=[{"uid": cf.uid, "weight": 1.0}], recency=True)

The above configuration makes the recency-coefficient decay over time for both article1 and article2, but in different ways. Let’s start with article1: during the first 15 minutes of its lifetime (the first of 2 buckets, each 1800/2 = 900 seconds wide), the article is unaffected by relevance decay, because the first bucket is set to 1.0. After 15 minutes, the recency-coefficient drops to the value of the second bucket, 0.5, which effectively halves the wscore, penalizing the article for its age. For article1 we have set a maximum lifetime of 1800 seconds (half an hour), meaning that any personalise call that triggers article1 more than half an hour after publication (set at 2017-10-05T14:48:00.000Z) will cause the article to end up at the bottom of the results.

For article2, the decay is much smoother and takes place over a larger timespan (1 hour). We gradually decrease the recency-coefficients by 0.16 for each 10-minute (600-second) bucket.

Please note that the histogram length is arbitrary and can vary among targets; the same goes for the recency_xmax.
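
For reference, the bucket lookup described above can be expressed in a few lines of Python. This is a simplified sketch, not the actual blend-function implementation:

# python (illustration)

from datetime import datetime, timedelta, timezone

def recency_coefficient(published_at, histogram, xmax, now=None):
	"""Simplified lookup of the recency-coefficient for a target."""
	now = now or datetime.now(timezone.utc)
	delta = max(0.0, (now - published_at).total_seconds())  # seconds since publication
	if delta >= xmax:
		return 0.0                                          # past recency_xmax: 'irrelevant'
	bucket_width = xmax / len(histogram)
	return histogram[int(delta // bucket_width)]

published = datetime(2017, 10, 5, 14, 48, tzinfo=timezone.utc)
# 20 minutes after publication article1 falls into its second bucket: coefficient 0.5
print(recency_coefficient(published, [1.0, 0.5], 1800,
	now=published + timedelta(minutes=20)))
# 45 minutes after publication article2 falls into its fifth bucket: coefficient 0.36
print(recency_coefficient(published, [1.0, 0.84, 0.68, 0.52, 0.36, 0.2], 3600,
	now=published + timedelta(minutes=45)))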

Dithering

The goal of a recommender is to find relevant items that a user may be interested in. The list of recommended items is typically sorted by the items’ relevance scores (highest to lowest) and presented to the user in that order. These scores may not change much over time, which means the list of recommendations for an individual user doesn’t change much either. Depending on your user interface, you’ll also be restricted to showing only the top n recommendations in a single screen at a time, despite the fact that you may have generated a much larger list of recommendations. This means that when users return, many will see pretty much the same top n recommendations every visit, and you’ll miss the opportunity to show them ‘less relevant’, but perhaps very interesting, recommendations.

Dithering provides a solution to this problem. The idea behind dithering is to re-order the recommendations list by adding random noise to the original relevance-based ordering. This results in surfacing some of the items that are further down the list (e.g. from the second page or even later pages) to the first page. In doing so, it’s possible to create the illusion of freshness in your list of recommendations as they appear to change regularly between visits although they may actually have been generated from a single run of a recommender model. What’s more, there are also benefits to recommending more items to users as it increases the variety of interactions from which your system can learn.

(dithering equation)
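
The exact equation used by Primed.io is not reproduced here, but a common formulation adds Gaussian noise to the logarithm of each item’s original rank and re-sorts on the noisy score. A minimal Python sketch of that idea (the epsilon parameter controlling the amount of shuffling is an assumption for illustration):

# python (illustration)

import math
import random

def dither(ranked_items, epsilon=1.5):
	"""Re-order a relevance-ranked list by adding noise to the log of each rank.

	Larger epsilon means more shuffling; epsilon close to 1 keeps the original order.
	This is a common dithering recipe, shown for illustration only.
	"""
	sd = math.sqrt(math.log(epsilon))
	noisy = [
		(math.log(rank) + random.gauss(0, sd), item)
		for rank, item in enumerate(ranked_items, start=1)
	]
	return [item for _, item in sorted(noisy, key=lambda pair: pair[0])]

print(dither(["article1", "article2", "article3", "article4", "article5"]))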

Blending

Creating a blend works by adding an abvariant to a campaign with multiple associated models. Each of these models is assigned a weight (the weights need to sum to 1.0):

# pyprimed

# create models
m1 = pio.models.create(name="my_device_model")
m2 = pio.models.create(name="my_cbf")

# create campaign and blends
c = u.campaigns.create(name="mycampaign", key="dummy.frontpage.recommendations") 

c.abvariants.create(label="A", models=[{"uid": m1.uid, "weight": 0.3}, {"uid": m2.uid, "weight": 0.7}])
c.abvariants.create(label="B", models=[{"uid": m1.uid, "weight": 1.0}])
c.abvariants.create(label="C", models=[{"uid": m1.uid, "weight": 1.0}])

res_for_A = c.personalize(keys=["key1", "key2"], abvariant="A")
res_for_CONTROL = c.personalize(keys=["key1", "key2"], abvariant="__CONTROL__")

Upon personalisation, the blend function takes input keys (for instance ["myUserId", "iPhone", "Morning"]), a campaign key and an abvariant. The blend function then matches the provided keys against the signals in the database and returns the targets with their corresponding prediction scores. The result set is then grouped by target key, and the prediction scores are combined into a weighted average, the wscore, using the model weights assigned when the blended abvariant was created. If the abvariant is recency-enabled, each wscore is then multiplied by the recency-coefficient, after which the list is sorted by wscore in descending order. Depending on how many results are requested, the top n are returned to the client.
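
A simplified sketch of this blending step (plain Python with hard-coded example scores; not the actual implementation):

# python (illustration)

# per-model prediction scores for the matched targets (example values)
model_scores = {
	"my_device_model": {"article1": 0.2, "article2": 0.9},
	"my_cbf":          {"article1": 0.8, "article2": 0.1},
}
weights = {"my_device_model": 0.3, "my_cbf": 0.7}   # abvariant "A" from the example above
recency = {"article1": 1.0, "article2": 0.5}        # optional recency-coefficients

wscores = {}
for model, scores in model_scores.items():
	for target, score in scores.items():
		wscores[target] = wscores.get(target, 0.0) + weights[model] * score

# apply recency (only for recency-enabled abvariants) and sort descending
ranked = sorted(
	((target, wscore * recency.get(target, 1.0)) for target, wscore in wscores.items()),
	key=lambda pair: pair[1],
	reverse=True,
)
print(ranked[:10])  # top n returned to the client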

A/B Testing

We at Primed.io believe that something that can’t be measured can’t be improved. Personalisation is no exception, and so Primed.io offers advanced A/B testing. This feature allows product owners and data scientists to define different blends of models as separate A/B variants and systematically benchmark these variants against each other. Traditionally, A/B testing (sometimes called split testing) compares two versions of a web page to see which one performs better: you show the two variants (let’s call them A and B) to similar visitors at the same time, and the one that gives the better conversion rate wins.

In Primed.io, every campaign needs at least one associated abvariant to function; we could call this ‘the baseline’ or ‘the control group’. In fact, Primed.io automatically offers a built-in abvariant for each campaign that is created. After defining a campaign (without abvariants), one can still call the personalize endpoint for this campaign using the special, reserved abvariant label __CONTROL__. This special abvariant will find all targets associated with the campaign (by resolving the universe the campaign belongs to), assign a completely random score to each target, sort the list and return the top N. As such, this functionality provides an always-present random baseline that helps data scientists correctly assess model performance over time.
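
Conceptually, the __CONTROL__ abvariant behaves roughly like this simplified sketch:

# python (illustration)

import random

def control_baseline(universe_targets, limit_results=10):
	"""Random baseline: score every target in the campaign's universe at random."""
	scored = [(target, random.random()) for target in universe_targets]
	scored.sort(key=lambda pair: pair[1], reverse=True)
	return [target for target, _ in scored[:limit_results]]

print(control_baseline(["article1", "article2", "article3"], limit_results=2))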

Excluding targets from __CONTROL__

If you want to exclude certain targets from __CONTROL__, set the exclude_from_control property to True on target creation:

# pyprimed
universe.targets.create(key="mykey", value="myvalue", exclude_from_control=True)

API

Calling the personalise and conversion endpoints requires a public key, a signature and a nonce to validate the request. Typically, the signature is generated server-side from the public key, private key and nonce, so that the private key is never sent over the line in plain text.

Typically a client calls the personalise endpoint on a campaign, providing an abvariant and signals on which to match and trigger predictions. If successful, the call returns a list of results, each of which represents a predicted target in the database. These can then be used to render the page or application. Each of the results also has a tracking_uuid field which can be used to track any conversions on it. A popular conversion metric is click-through rate: when a user clicks on the page-rendered target, it is registered as a conversion by calling the conversion endpoint with the `tracking_uuid`.

Signature

Primed.io uses HMAC-authentication to secure communication between end-user client-side (javascript / mobile) and server-side. The use case is slightly more complicated than usual, as the service does not operate in the same domain as the one that serves the application to the end-user. For this reason, HTTPS is required to encrypt data in motion and no plain-text secret keys are sent over the line. The approach is loosely based on how AWS secures S3.

To generate a signature for authenticating a request, the following is required:

import datetime
import hashlib

nonce = int(datetime.datetime.now(datetime.timezone.utc).timestamp()) # time in seconds since epoch (UTC)
pubkey = 'somekey'
secretkey = 'somesecretkeywithalotofchars'

local = hashlib.sha512()
message = "{}{}{}".format(pubkey, secretkey, nonce)  # concatenate public key, secret key and nonce
local.update(message.encode('utf-8'))                # the signature is the SHA-512 digest of this message

print("{}\t{}".format("X-Authorization-Key", pubkey))
print("{}\t{}".format("X-Authorization-Signature", local.hexdigest()))
print("{}\t{}".format("X-Authorization-Nonce", str(nonce)))

# X-Authorization-Key	somekey
# X-Authorization-Signature	9b049c8cbde82331aed33d696cbf57990519d20d337c8e8519a9b6b4b7eb7926eb989c4709b30557ed9b5648747316b14599ac48e8c9bb5cfdc412e7341c9f5c
# X-Authorization-Nonce	12345 # clearly, this should match your timestamp

Arguments and payloads

The personalise endpoint takes a POST request on /api/v1/personalize/<campaign.key>; the JSON (utf-8) formatted payload then takes the following keys:

Key             Description                               Type     Default value    Optional?
limit_results   number of results requested               int      10               Yes
abvariant       abvariant to fetch model results for      string   '__CONTROL__'    Yes
keys            signal keys to trigger predictions for    array    ['*']            Yes

An example call looks as follows:

curl -i -X POST \
   -H "X-Authorization-Key:<publickey>" \
   -H "Content-Type:application/json; charset=utf-8" \
   -H "X-Authorization-Signature:<signature>" \
   -H "X-Authorization-Nonce:<nonce>" \
   -d \
'{
  "abvariant": "__CONTROL__",
  "keys": ["*", "iphone", "989ef51f-6b3d-4303-b23a-68bf5e3042e1"]
}' \
 'https://my.primed.api/api/v1/personalize/rtlnieuws.article.sprint3'

Calling the conversion endpoint on /api/v1/conversion/<tracking_uuid> is also done using the POST method. It optionally takes a JSON (utf-8) payload which it will append to the conversion for later analysis.
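
To tie this together, here is a hedged sketch of registering a conversion from Python using the requests library. The host name, the tracking_uuid value and the payload are placeholders, and the header generation mirrors the signature example above:

# python (illustration)

import datetime
import hashlib
import requests

PUBKEY = 'somekey'
SECRETKEY = 'somesecretkeywithalotofchars'

def auth_headers():
	nonce = int(datetime.datetime.now(datetime.timezone.utc).timestamp())
	digest = hashlib.sha512("{}{}{}".format(PUBKEY, SECRETKEY, nonce).encode('utf-8')).hexdigest()
	return {
		"X-Authorization-Key": PUBKEY,
		"X-Authorization-Signature": digest,
		"X-Authorization-Nonce": str(nonce),
		"Content-Type": "application/json; charset=utf-8",
	}

# tracking_uuid comes from one of the results returned by a personalise call
tracking_uuid = "some-tracking-uuid"
response = requests.post(
	"https://my.primed.api/api/v1/conversion/{}".format(tracking_uuid),
	headers=auth_headers(),
	json={"event": "click"},  # optional payload, stored with the conversion
)
print(response.status_code)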