
Based on the copy rotation instruction documents you've provided (from Caria/CRI), here is the data model extracted to support integration with the larger domain model for campaign management.


1. Entities, Attributes, and Data Types

Below is a consolidated view of the core entities, their attributes, and data types drawn from the structure of the rotation instruction HTML files.

A. RotationInstruction

Represents a full rotation instruction document per campaign.

| Attribute | Type | Description |
| --- | --- | --- |
| instruction_id | String | Unique reference ID (e.g., 707175) |
| version | Integer | Instruction version |
| sent_date | DateTime | Date and time the instruction was issued |
| from_agency | String | Sending agency name |
| primary_contact | Contact | Primary contact person (name, email) |
| secondary_contacts | List&lt;Contact&gt; | Optional secondary contacts |
| client_name | String | Brand or advertiser (e.g., Philips) |
| product_name | String | Product (e.g., OneBlade) |
| date_range | DateRange | Overall campaign date range |
| rotation_details | List&lt;RotationDetail&gt; | One or more blocks with station and creative rotation |

B. Contact

Reusable contact structure.

| Attribute | Type | Description |
| --- | --- | --- |
| name | String | Person’s name |
| email | String | Email address |
| phone_number | String | Optional phone number |

C. RotationDetail

Represents a rotation block within the campaign.

| Attribute | Type | Description |
| --- | --- | --- |
| start_date | Date | Start of the sub-period |
| end_date | Date | End of the sub-period |
| time_band | TimeRange | Optional time restriction (e.g., 21:00 - 29:30; hours past 24:00 follow the broadcast-day convention, so 29:30 denotes 05:30 the next morning) |
| stations | List&lt;String&gt; | Channels and platforms involved |
| creatives | List&lt;CreativeRotation&gt; | Creatives and rotation ratios used |

D. CreativeRotation

Defines each asset assigned in a rotation block.

| Attribute | Type | Description |
| --- | --- | --- |
| clock_number | String | Clearcast clock number (e.g., EGW/PHOB087/020) |
| title | String | Title or description of the creative |
| duration_secs | Integer | Length in seconds |
| copy_source | String | How the copy was delivered (e.g., Clearcast, Peach) |
| held_date | Date | Date the copy was held/confirmed |
| ratio | Integer | Rotation weight (e.g., 1, 3, 7) |

E. Approval

Optional information shown for legal/compliance purposes.

| Attribute | Type | Description |
| --- | --- | --- |
| approval_id | String | Unique ID for the approval |
| client_name | String | Client associated with the approval |
| product_name | String | Product associated with the approval |
| date_range | DateRange | Validity period of the approval |

F. DateRange

| Attribute | Type |
| --- | --- |
| start_date | Date |
| end_date | Date |

G. TimeRange (Optional)

| Attribute | Type |
| --- | --- |
| start_time | Time |
| end_time | Time |
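As a minimal sketch, the entities above can be transcribed into Python dataclasses. Field names and types follow the tables; treating phone_number, time_band, held_date, and the list fields as optional/defaulted is an assumption based on their descriptions, not something the source documents specify.

```python
from dataclasses import dataclass, field
from datetime import date, datetime, time
from typing import List, Optional

@dataclass
class Contact:
    name: str
    email: str
    phone_number: Optional[str] = None  # optional per table B

@dataclass
class DateRange:
    start_date: date
    end_date: date

@dataclass
class TimeRange:
    start_time: time
    end_time: time

@dataclass
class CreativeRotation:
    clock_number: str
    title: str
    duration_secs: int
    copy_source: str
    ratio: int
    held_date: Optional[date] = None  # optional per the diagram note

@dataclass
class RotationDetail:
    date_range: DateRange
    stations: List[str]
    creatives: List[CreativeRotation]
    time_band: Optional[TimeRange] = None  # optional per table C

@dataclass
class RotationInstruction:
    # Fields with defaults must come last in a dataclass,
    # so secondary_contacts and rotation_details are reordered here.
    instruction_id: str
    version: int
    sent_date: datetime
    from_agency: str
    primary_contact: Contact
    client_name: str
    product_name: str
    date_range: DateRange
    secondary_contacts: List[Contact] = field(default_factory=list)
    rotation_details: List[RotationDetail] = field(default_factory=list)
```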

2. UML Class Diagram (puml)

@startuml
class RotationInstruction {
  instruction_id: String
  version: Integer
  sent_date: DateTime
  from_agency: String
  primary_contact: Contact
  secondary_contacts: List<Contact>
  client_name: String
  product_name: String
  date_range: DateRange
  rotation_details: List<RotationDetail>
}

class Contact {
  name: String
  email: String
  phone_number: String
}

class DateRange {
  start_date: Date
  end_date: Date
}

class TimeRange {
  start_time: Time
  end_time: Time
}

class RotationDetail {
  date_range: DateRange
  time_band: TimeRange
  stations: List<String>
  creatives: List<CreativeRotation>
}

class CreativeRotation {
  clock_number: String
  title: String
  duration_secs: Integer
  copy_source: String
  held_date: Date
  ratio: Integer
}

class Approval {
  approval_id: String
  client_name: String
  product_name: String
  date_range: DateRange
}

RotationInstruction --> Contact : primary_contact
RotationInstruction *-- RotationDetail : contains
RotationDetail --> TimeRange : optional
RotationDetail *-- CreativeRotation : contains
note right of CreativeRotation : held_date optional
@enduml

3. Integration with Domain Model

This RotationInstruction model integrates directly with:

  • Campaign ← maps to client_name, product_name, and date_range
  • Asset ← derived from CreativeRotation (clock_number = unique key)
  • TargetingTactic ← inferred from RotationDetail + stations
  • CopyValidated ← event triggered once creatives are matched and held
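As an illustrative sketch of the first two mappings (the function names and dict shapes here are hypothetical, not taken from the source documents):

```python
# Hypothetical helpers sketching the Campaign and Asset mappings above.
# A parsed RotationInstruction is represented as a plain dict.

def to_campaign(instruction: dict) -> dict:
    """Derive Campaign fields from client_name, product_name, and date_range."""
    return {
        "advertiser": instruction["client_name"],
        "product": instruction["product_name"],
        "date_range": instruction["date_range"],
    }

def to_assets(instruction: dict) -> dict:
    """Index creatives across all rotation blocks by clock_number,
    the unique Asset key."""
    return {
        creative["clock_number"]: creative
        for detail in instruction["rotation_details"]
        for creative in detail["creatives"]
    }

instruction = {
    "client_name": "Philips",
    "product_name": "OneBlade",
    "date_range": {"start_date": "2024-01-01", "end_date": "2024-03-31"},
    "rotation_details": [
        {"creatives": [{"clock_number": "EGW/PHOB087/020", "ratio": 1}]},
    ],
}
print(to_campaign(instruction)["advertiser"])  # Philips
print(sorted(to_assets(instruction)))          # ['EGW/PHOB087/020']
```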

Entities, Attributes, and Data Types for Tag Documents

These documents represent tracking tag configurations for campaigns. Each row typically corresponds to a targeting tactic with a tag URL used to track impressions.

A. TrackingTagAssignment

| Attribute | Type | Description |
| --- | --- | --- |
| tag_id | String | Unique ID or alias for the tag entry |
| campaign_name | String | Campaign identifier (e.g., “Go Compare UK”) |
| tactic_name | String | Targeting group, platform, or segment (e.g., “All4 Demo”, “LVOD”) |
| advertiser | String | Brand/advertiser (e.g., Tesco, McDonald’s) |
| start_date | Date | Flight start date for the tag |
| end_date | Date | Flight end date for the tag |
| platform | String | Media platform (e.g., Channel 4, All4, YouTube) |
| country_code | String | Market scope (e.g., GBR, WEU) |
| tag_url | String | Third-party tag or pixel URL |
| media_buyer | String | Agency or buyer assigning the tag |
| publisher | String | Destination media channel (e.g., All4) |

B. Tactic

A tactic is a logical grouping of impressions for which the same tag will be applied.

| Attribute | Type | Description |
| --- | --- | --- |
| tactic_id | String | Internal or external ID (may be derived from naming convention) |
| description | String | Label used in media plans (e.g., "VOD - M25 18-34 Male") |
| rotation_ratio | Integer | Optional weighting when multiple creatives rotate under the tactic |

C. Campaign

Links tag rows to broader campaign metadata.

| Attribute | Type | Description |
| --- | --- | --- |
| campaign_id | String | Unique internal or external campaign ID |
| advertiser | String | Name of the brand client |
| start_date | Date | Overall campaign start |
| end_date | Date | Overall campaign end |
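A minimal sketch of the tag-document entities as Python dataclasses (types follow the tables; making rotation_ratio optional is an assumption based on its description):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Campaign:
    campaign_id: str
    advertiser: str
    start_date: date
    end_date: date

@dataclass
class Tactic:
    tactic_id: str
    description: str
    rotation_ratio: Optional[int] = None  # optional weighting per table B

@dataclass
class TrackingTagAssignment:
    tag_id: str
    campaign_name: str
    tactic_name: str
    advertiser: str
    start_date: date
    end_date: date
    platform: str
    country_code: str
    tag_url: str
    media_buyer: str
    publisher: str
```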

Class Diagram (puml)

@startuml
class Campaign {
  campaign_id: String
  advertiser: String
  start_date: Date
  end_date: Date
}

class Tactic {
  tactic_id: String
  description: String
  rotation_ratio: Integer
}

class TrackingTagAssignment {
  tag_id: String
  campaign_name: String
  tactic_name: String
  advertiser: String
  start_date: Date
  end_date: Date
  platform: String
  country_code: String
  tag_url: String
  media_buyer: String
  publisher: String
}

Campaign "1" *-- "1..*" Tactic : includes
Tactic "1" -- TrackingTagAssignment : is tagged with
@enduml

Multi-Criteria Decision Analysis (MCDA)

MCDA Overview

Multi-Criteria Decision Analysis (MCDA) is a decision-making framework for evaluating multiple criteria in complex situations. Its key component is the pairwise comparison matrix, in which the importance of each criterion is compared directly against every other criterion. A decision-maker or expert assigns values according to how much more important one element is than another. The result is a matrix whose diagonal entries are zero under the convention used here (a criterion is not weighed against itself, so the entry has no effect on the sums; classical AHP-style matrices put 1 on the diagonal instead) and whose off-diagonal entries capture the relative importance.

Pairwise comparison is a fundamental component of MCDA and is used to determine the relative importance of each criterion (or method, in the context of this example). Here's how it fits into the process, with a focus on the pairwise comparison aspect:

  1. Pairwise Comparison of Criteria: The table below represents a pairwise comparison matrix. For each pair of methods, a decision maker has determined how important one is compared to the other in contributing to the overall goal, which in this case is the extraction of terms from text. For instance, 'token' scores 0 against itself (the diagonal convention), while the entry of 4 in the 'semantic' row of the 'token' column records that 'token' is considered four times as important as 'semantic'. This process is repeated for every pair of methods until the matrix is fully populated.

  2. Scoring the Pairwise Comparisons: The numbers in the matrix reflect the relative importance between pairs. A score of 1 indicates that the two methods are equally important. Scores above 1 indicate that the column method is more important than the row method it is compared against, with the magnitude reflecting the strength of the preference (2 = more important, 4 = significantly more important); scores between 0 and 1 indicate the opposite (0.5 = less important, 0.25 = significantly less important).

  3. Summing the Pairwise Comparisons: After all pairwise comparisons are made, the scores in each method's column are summed to provide a total reflecting its overall importance relative to all other methods (e.g., the 'token' column sums to 0 + 4 + 4 + 4 + 4 = 16).

  4. Normalizing the Weights: To make these sums usable as weights, they are normalized so that they sum to 1 (or 100%). This is done by dividing each method's column sum by the grand total of all sums (16 + 3.5 + 9 + 6 + 1.5 = 36 here, so 'token' receives a weight of 16/36 ≈ 0.444). The resulting normalized weights reflect the proportionate importance of each method.

  5. Application of Weights: These normalized weights are then used in the decision-making process. For the term extraction example, when each provider (token, semantic, noun_phrase, noun, sentence) generates a score for a term, that score is multiplied by the method's weight to obtain a score that reflects both the term's relevance and the method's importance as determined by the pairwise comparison process.

In summary, the pairwise comparison matrix is a structured way to capture and quantify subjective judgments about the relative importance of each criterion. These judgments are then converted into a set of weights through normalization, which can be applied to the decision-making process. The method ensures that the final decision reflects the considered opinion of experts or decision-makers about the relative importance of the criteria involved.

MCDA Legend

| Value | Meaning |
| --- | --- |
| 0 | No effect (on the diagonal) |
| 1 | As important |
| 2 | More important |
| 4 | Significantly more important |
| 0.5 | Less important |
| 0.25 | Significantly less important |

MCDA Weights

| | token | semantic | noun_phrase | noun | sentence |
| --- | --- | --- | --- | --- | --- |
| token | 0 | 0.5 | 1 | 1 | 0.25 |
| semantic | 4 | 0 | 2 | 2 | 0.5 |
| noun_phrase | 4 | 0.5 | 0 | 1 | 0.5 |
| noun | 4 | 0.5 | 2 | 0 | 0.25 |
| sentence | 4 | 2 | 4 | 2 | 0 |
| sums | 16 | 3.5 | 9 | 6 | 1.5 |
| weight | 0.44444444 | 0.09722222 | 0.25 | 0.16666667 | 0.04166667 |
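As a quick numerical check (not part of the source pipeline), the sums and weight rows can be reproduced by summing each column of the matrix and dividing by the grand total:

```python
import numpy as np

methods = ["token", "semantic", "noun_phrase", "noun", "sentence"]

# Pairwise comparison matrix, rows and columns both in `methods` order
matrix = np.array([
    [0,   0.5, 1, 1, 0.25],
    [4,   0,   2, 2, 0.5 ],
    [4,   0.5, 0, 1, 0.5 ],
    [4,   0.5, 2, 0, 0.25],
    [4,   2,   4, 2, 0   ],
])

sums = matrix.sum(axis=0)    # column sums: 16, 3.5, 9, 6, 1.5
weights = sums / sums.sum()  # normalize by the grand total (36)

for method, weight in zip(methods, weights):
    print(f"{method}: {weight:.8f}")
```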

mcda_weights = {
    "token": 0.44444444,
    "semantic": 0.09722222,
    "noun_phrase": 0.25,
    "noun": 0.16666667,
    "sentence": 0.04166667
}

MCDA Pseudo Implementation

The Python implementation would look something like this (assuming we have the data structures available):


import numpy as np

# This is an example, you would actually load this data from your source
# terms_scores: dict where key is the term_id and the value is a list of scores from each provider

terms_scores = {
    "d2ac602d-298f-4979-af43-63d189067136": [1, 0.78, 0, 1, 1],
    "423c405e-1bce-4ffa-884e-39dca3911436": [1, 0.65, 1, 0, 1],
    "b984dcc9-d71d-4c4b-a1e6-064bd823e48e": [0, 0.67, 1, 1, 0],
    "4d951431-4e24-4ee1-9d06-edbca35857e2": [1, 0.99, 1, 0, 1],
    "46b0a20a-3087-446d-8b9c-eb756c151b1c": [1, 0.67, 0, 1, 0],
    "f83b321d-c543-49d1-8ba7-2cc95d7003ae": [0, 0.12, 1, 0, 1],
    "1ec4693b-f57e-4179-bda2-db3004f6130c": [1, 0.67, 0, 1, 0]
}


# mcda_weights: dict where key is the method and the value is the weight

mcda_weights = {
    "token": 0.44444444,
    "semantic": 0.09722222,
    "noun_phrase": 0.25,
    "noun": 0.16666667,
    "sentence": 0.04166667
}

# Convert scores and weights into arrays for easier manipulation
# Assuming the order of the scores aligns with the order of methods in mcda_weights
scores_array = np.array(list(terms_scores.values()))
weights_array = np.array(list(mcda_weights.values()))

# Apply MCDA weights to the scores
weighted_scores = scores_array * weights_array

# Calculate the final weighted score for each term by summing across the methods
final_scores = weighted_scores.sum(axis=1)

print(final_scores)

# Apply a threshold to filter the terms
threshold = 0.7 # Define your threshold here
filtered_terms = {term_id: score for term_id, score in zip(terms_scores.keys(), final_scores) if score >= threshold}

# The filtered_terms now contains the term IDs and their weighted score above the threshold
print("Filtered Terms IDs with scores above the threshold:")
for term_id, score in filtered_terms.items():
    print(f"Term ID: {term_id}, Score: {score}")

In the code above:

  • terms_scores would be replaced by your actual data structure that holds the term IDs and their associated scores.
  • mcda_weights should contain the weights from the MCDA analysis for each method.
  • The scores are weighted using the MCDA weights, and then a threshold is applied to select the terms.

Example output:


Filtered Terms IDs with scores above the threshold:
Term ID: d2ac602d-298f-4979-af43-63d189067136, Score: 0.7286111116
Term ID: 423c405e-1bce-4ffa-884e-39dca3911436, Score: 0.7993055530000001
Term ID: 4d951431-4e24-4ee1-9d06-edbca35857e2, Score: 0.8323611078