% THIS IS SIGPROC-SP.TEX - VERSION 3.1
% WORKS WITH V3.2SP OF ACM_PROC_ARTICLE-SP.CLS
% APRIL 2009
%
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.2SP
% LaTeX2e document class file for Conference Proceedings submissions.
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V3.2SP) *DOES NOT* produce:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) Page numbering
% ---------------------------------------------------------------------------------------------------------------
% It is an example which *does* use the .bib file (from which the .bbl file
% is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission,
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% Questions regarding SIGS should be sent to
% Adrienne Griscti ---> griscti@acm.org
%
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
% Gerald Murray ---> murray@hq.acm.org
%
% For tracking purposes - this is V3.1SP - APRIL 2009

\documentclass{acm_proc_article-sp}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{todonotes}
\usepackage{url}

\begin{document}

\title{Entity-Centric Stream Filtering and Ranking: Filtering and Unfilterable Documents
}
%SUGGESTION:
%\title{The Impact of Entity-Centric Stream Filtering on Recall and
%  Missed Documents}

%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.

\numberofauthors{2} %  in this sample file, there are a *total*
% of EIGHT authors. SIX appear on the 'first-page' (for formatting
% reasons) and the remaining two appear in the \additionalauthors section.
%
% \author{
% % You can go ahead and credit any number of authors here,
% % e.g. one 'row of three' or two rows (consisting of one row of three
% % and a second row of one, two or three).
% %
% % The command \alignauthor (no curly braces needed) should
% % precede each author name, affiliation/snail-mail address and
% % e-mail address. Additionally, tag each line of
% % affiliation/address with \affaddr, and tag the
% % e-mail address with \email.
% %
% % 1st. author
% \alignauthor
% Ben Trovato\titlenote{Dr.~Trovato insisted his name be first.}\\
%        \affaddr{Institute for Clarity in Documentation}\\
%        \affaddr{1932 Wallamaloo Lane}\\
%        \affaddr{Wallamaloo, New Zealand}\\
%        \email{trovato@corporation.com}
% % 2nd. author
% \alignauthor
% G.K.M. Tobin\titlenote{The secretary disavows
% any knowledge of this author's actions.}\\
%        \affaddr{Institute for Clarity in Documentation}\\
%        \affaddr{P.O. Box 1212}\\
%        \affaddr{Dublin, Ohio 43017-6221}\\
%        \email{webmaster@marysville-ohio.com}
% }
% There's nothing stopping you putting the seventh, eighth, etc.
% author on the opening page (as the 'third row') but we ask,
% for aesthetic reasons that you place these 'additional authors'
% in the \additional authors block, viz.
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}

Cumulative citation recommendation refers to the problem faced by
knowledge base curators, who need to continuously screen the media for
updates regarding the knowledge base entries they manage. Automatic
system support for this entity-centric information processing problem
requires complex pipe\-lines involving both natural language
processing and information retrieval components. The pipeline
encountered in a variety of systems that approach this problem
involves four stages: filtering, classification, ranking (or scoring),
and evaluation. Filtering is only an initial step, that reduces the
web-scale corpus of news and other relevant information sources that
may contain entity mentions into a working set of documents that should
be more manageable for the subsequent stages.
Nevertheless, this step has a large impact on the recall that can be
maximally attained. Therefore, in this study, we focus on just
this filtering stage and conduct an in-depth analysis of its main design
decisions: how to cleanse the noisy text obtained online,
the methods to create entity profiles, the
types of entities of interest, document type, and the grade of
relevance of the document-entity pair under consideration.
We analyze how these factors (and the design choices made in their
corresponding system components) affect filtering performance.
We identify and characterize the relevant documents that do not pass
the filtering stage by examining their contents. This way, we
estimate a practical upper bound of recall for entity-centric stream
filtering.

\end{abstract}
% A category with the (minimum) three required fields
\category{H.4}{Information Filtering}{Miscellaneous}

%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]

\terms{Theory}

\keywords{Information Filtering; Cumulative Citation Recommendation; knowledge maintenance; Stream Filtering;  emerging entities} % NOT required for Proceedings

\section{Introduction}
In 2012, the Text REtrieval Conference (TREC) introduced the Knowledge Base Acceleration (KBA) track to help Knowledge Base (KB) curators. The track addresses a critical need of KB curators: given KB (Wikipedia or Twitter) entities, filter a stream for relevant documents, rank the retrieved documents, and recommend them to the KB curators. The track is crucial and timely because the number of entities in a KB on the one hand, and the huge amount of new information content on the Web on the other, make manual KB maintenance challenging.   TREC KBA's main task, Cumulative Citation Recommendation (CCR), aims at filtering a stream to identify citation-worthy documents, rank them, and recommend them to KB curators.
  
   
 Filtering is a crucial step in CCR for selecting a potentially
 relevant set of working documents for subsequent steps of the
 pipeline out of a big collection of stream documents. The TREC
 Filtering track defines filtering as a ``system that sifts through
 stream of incoming information to find documents that are relevant to
 a set of user needs represented by profiles''
 \cite{robertson2002trec}. 
In the specific setting of CCR, these profiles are
represented by persistent KB entities (Wikipedia pages or Twitter
users, in the TREC scenario).
 
 TREC-KBA 2013's participants applied Filtering as a first step  to
 produce a smaller working set for subsequent experiments. As the
 subsequent steps of the pipeline use the output of the filter, the
 final performance of the system is dependent on this step.  The
 filtering step particularly determines the recall of the overall
 system. However, all 141 runs submitted by 13 teams suffered from
 poor recall, as pointed out in the track's overview paper 
 \cite{frank2013stream}. 

The most important components of the filtering step are cleansing
(referring to pre-processing noisy web text into a canonical ``clean''
text format), and
entity profiling (creating a representation of the entity that can be
used to match the stream documents to). For each component, different
choices can be made. In the specific case of TREC KBA, organisers have
provided two different versions of the corpus: one that is already cleansed,
and one that is the raw data as originally collected by the organisers. 
Also, different
approaches use different entity profiles for filtering, varying from
using just the KB entities' canonical names to looking up DBpedia name
variants, and from using the bold words in the first paragraph of the Wikipedia
entities' pages to using anchor texts from other Wikipedia pages, and from
using the exact name as given to WordNet-derived synonyms. The type of entities
(Wikipedia or Twitter) and the category of documents in which they
occur (news, blogs, or tweets) cause further variations.
% A variety of approaches are employed  to solve the CCR
% challenge. Each participant reports the steps of the pipeline and the
% final results in comparison to other systems.  A typical TREC KBA
% poster presentation or talk explains the system pipeline and reports
% the final results. The systems may employ similar (even the same)
% steps  but the choices they make at every step are usually
% different. 
In such a situation, it becomes hard to identify the factors that
result in improved performance. There is  a lack of insight across
different approaches. This makes  it hard to know whether the
improvement in performance of a particular approach is due to
preprocessing, filtering, classification, scoring  or any of the
sub-components of the pipeline.
 
In this paper, we therefore fix the subsequent steps of the pipeline,
and zoom in on \emph{only} the filtering step, and conduct an in-depth analysis of its
main components.  In particular, we study the effect of cleansing,
entity profiling, type of entity filtered for (Wikipedia or Twitter), and
document category (social, news, etc) on the filtering components'
performance. The main contributions of the paper are an in-depth
analysis of the factors that affect entity-based stream filtering,
the identification of optimal entity profiles that do not compromise
precision, a description and classification of relevant documents
that are not amenable to filtering, and an estimate of the upper bound
of recall for entity-based filtering.

The rest of the paper is organized as follows: 

\textbf{TODO!!}

 \section{Data Description}
We base this analysis on the TREC-KBA 2013 dataset%
\footnote{\url{http://trec-kba.org/trec-kba-2013.shtml}}
that consists of three main parts: a time-stamped stream corpus, a set of
KB entities to be curated, and a set of relevance judgments. A CCR
system now has to identify for each KB entity which documents in the
stream corpus are to be considered by the human curator.

\subsection{Stream corpus} The stream corpus comes in two versions:
raw and cleansed. The raw and cleansed versions are 6.45TB and 4.5TB
respectively,  after xz-compression and GPG encryption. The raw data
is a  dump of  raw HTML pages. The cleansed version is the raw data
after its HTML tags are stripped off and only English documents
identified with Chromium Compact Language Detector
\footnote{\url{https://code.google.com/p/chromium-compact-language-detector/}}
are included.  The stream corpus is organized in hourly folders each
of which contains many chunk files. Each chunk file contains between a few
hundred and hundreds of thousands of serialized thrift objects. One
thrift object is one document. A document can be a blog article, a
news article, or a social media post (including tweets).  The stream
corpus comes from three sources: TREC KBA 2012 (social, news and
linking) \footnote{\url{http://trec-kba.org/kba-stream-corpus-2012.shtml}},
arxiv\footnote{\url{http://arxiv.org/}}, and
spinn3r\footnote{\url{http://spinn3r.com/}}.
Table \ref{tab:streams} shows the sources, the number of hourly
directories, and the number of chunk files.
\begin{table}
\caption{Number of documents and chunk files per sub-stream source}
\begin{center}

 \begin{tabular}{rrl}
 Documents     &   Chunk files    &    Sub-stream \\
\hline

126,952         &11,851         &arxiv \\
394,381,405      &   688,974        & social \\
134,933,117       &  280,658       &  news \\
5,448,875         &12,946         &linking \\
57,391,714         &164,160      &   MAINSTREAM\_NEWS (spinn3r)\\
36,559,578         &85,769      &   FORUM (spinn3r)\\
14,755,278         &36,272     &    CLASSIFIED (spinn3r)\\
52,412         &9,499         &REVIEW (spinn3r)\\
7,637         &5,168         &MEMETRACKER (spinn3r)\\
1,040,520,595   &      2,222,554 &        Total\\

\end{tabular}
\end{center}
\label{tab:streams}
\end{table}
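For concreteness, the following is a minimal sketch of iterating over the thrift objects in already decrypted and decompressed chunk files. It assumes the \texttt{streamcorpus} Python reference library; the field names are indicative of its StreamItem schema rather than exact.
\begin{verbatim}
# Minimal sketch: iterate StreamItems in decrypted/decompressed chunks.
# Assumes the `streamcorpus' Python library; field names are indicative
# of its StreamItem thrift schema.
import glob
from streamcorpus import Chunk

def iter_stream_items(hour_dir):
    # every *.sc file in one hourly folder is a chunk of serialized
    # thrift objects; one thrift object is one document (StreamItem)
    for path in sorted(glob.glob(hour_dir + '/*.sc')):
        for si in Chunk(path=path, mode='rb'):
            yield si

for si in iter_stream_items('2012-10-05-14'):
    text = si.body.clean_visible or si.body.raw   # cleansed vs. raw text
    print(si.stream_id, si.source, len(text or ''))
\end{verbatim}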

\subsection{KB entities}

 The KB entities consist of 20 Twitter entities and 121 Wikipedia entities. The entities have been selected, on purpose, to occur only sparsely in the stream. They consist of 71 people, 1 organization, and 24 facilities.  

\subsection{Relevance judgments}

TREC-KBA provided relevance judgments for training and
testing. Relevance judgments are given as document-entity
pairs. Documents with citation-worthy content for a given entity are
annotated as \emph{vital}, while documents with tangentially
relevant content, or documents that lack freshness but whose content
can be useful for an initial KB dossier, are annotated as
\emph{relevant}. Documents with no relevant content are labeled
\emph{neutral} and spam is labeled as \emph{garbage}. 
%The inter-annotator agreement on vital in 2012 was 70\% while in 2013 it
%is 76\%. This is due to the more refined definition of vital and the
%distinction made between vital and relevant.

\subsection{Breakdown of results by document source category}

%The results of the different entity profiles on the raw corpus are
%broken down by source categories and relevance rank% (vital, or
%relevant).  
In total, the dataset contains 24162 unique entity-document
pairs that are vital or relevant; 9521 of these have been labelled as vital
and 17424 as relevant.
All documents are categorized into 8 source categories: 0.98\%
arxiv(a), 0.034\% classified(c), 0.34\% forum(f), 5.65\% linking(l),
11.53\% mainstream-news(m-n), 18.40\% news(n), 12.93\% social(s) and
50.2\% weblog(w). We have regrouped these source categories into three
groups, ``news'', ``social'', and ``other'', for two reasons: 1) some categories
are very similar to each other. Mainstream-news and news are
similar; they exist separately only because they were collected from
two different sources, by different groups and at different times.
We call them news from now on. The same holds for weblog and social,
which we call social from now on. 2) Some categories have so few
annotations that treating them independently does not make much
sense. The majority of vital or relevant annotations are social
(social and weblog, 63.13\%). News (mainstream-news and news) make up
30\%. Thus, news and social together make up about 93\% of all
annotations. The rest make up about 7\% and are grouped as other.
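A minimal sketch of this regrouping (the category labels follow the listing above; the actual metadata strings in the corpus may differ) is:
\begin{verbatim}
# Regroup the eight source categories into news / social / other.
# Category names follow the labels above; actual corpus metadata
# strings may differ.
GROUP = {
    'news': 'news', 'mainstream-news': 'news',
    'weblog': 'social', 'social': 'social',
    'arxiv': 'other', 'classified': 'other',
    'forum': 'other', 'linking': 'other',
}

def regroup(source_category):
    return GROUP.get(source_category, 'other')
\end{verbatim}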

 \section{Stream Filtering}
 
 The TREC Filtering track defines filtering as a ``system that sifts
 through stream of incoming information to find documents that are
 relevant to a set of user needs represented by profiles''
 \cite{robertson2002trec}. Its information needs are long-term and are
 represented by persistent profiles, unlike the traditional search system
 whose adhoc information need is represented by a search
 query. Adaptive Filtering, one task of the filtering track,  starts
 with  a persistent user profile and a very small number of positive
 examples. A filtering system can improve its user profiles with
 feedback obtained from interaction with users, and thereby improve
 its performance. The  filtering stage of entity-based stream
 filtering and ranking can be likened to the adaptive filtering task
 of the filtering track. The persistent information needs are the KB
 entities, and the relevance judgments are the small number of positive
 examples.

Stream filtering is then the task of, given a stream of news items, blogs
 and social media documents on one hand and a set of KB entities on the other,
 filtering the stream for potentially relevant documents such that
 the relevance classifier (ranker) achieves the maximum performance
 possible.  Specifically, we conduct an in-depth analysis of the choices
 and factors affecting the cleansing step, the entity-profile
 construction, the document category of the stream items, and the type
 of entities (Wikipedia or Twitter), and finally their impact on the overall
 performance of the pipeline. Finally, we conduct a manual examination
 of the vital documents that defy filtering. We strive to answer the
 following research questions:
\begin{enumerate}
  \item Does cleansing affect filtering and subsequent performance?
  \item What is the most effective way of representing entity profiles?
  \item Is filtering different for Wikipedia and Twitter entities?
  \item Are some types of documents easily filterable and others not?
  \item Does a gain in recall at filtering step translate to a gain in F-measure at the end of the pipeline?
  \item What characterizes the vital (and relevant) documents that are
    missed in the filtering step?
\end{enumerate}

The TREC filtering task and the filtering step of the entity-centric
stream filtering and ranking pipeline have different purposes. The
TREC filtering track's goal is the binary classification of documents:
for each incoming document, it decides whether that document
is relevant or not for a given profile. The documents are either
relevant or not. In our case, the documents have graded relevance and
the goal of the filtering stage is to retrieve as many potentially
relevant documents as possible, while passing on as few irrelevant
documents as possible, so as not to obfuscate the later stages of the
pipeline. Filtering as part of the pipeline requires this delicate
balance between retrieving relevant documents and excluding irrelevant
documents. Because of this, filtering in this case can only be studied
by binding it to the later stages of the entity-centric pipeline. This
bond influences how we do evaluation.

To achieve this, we use recall percentages in the filtering stage for
the different choices of entity profiles. However, we use the overall
performance to select the best entity profiles. To generate the overall
pipeline performance we use the official TREC KBA evaluation metric
and scripts \cite{frank2013stream} to report max-F, the maximum
F-score obtained over all relevance cut-offs.
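For reference, a simplified sketch of the max-F computation is given below; the official scripts are more elaborate (for example, they average over entities), so this is illustrative only.
\begin{verbatim}
# Simplified sketch of max-F: sweep a confidence cutoff and keep the
# best F-score. The official TREC KBA scripts are more elaborate.
def max_f(scored_pairs, positives):
    # scored_pairs: [((doc_id, entity), confidence), ...]
    # positives: set of (doc_id, entity) judged vital/relevant
    best = 0.0
    for cutoff in sorted({conf for _, conf in scored_pairs}):
        retrieved = {pair for pair, conf in scored_pairs if conf >= cutoff}
        if not retrieved or not positives:
            continue
        tp = len(retrieved & positives)
        precision = tp / len(retrieved)
        recall = tp / len(positives)
        if precision + recall > 0:
            best = max(best, 2 * precision * recall / (precision + recall))
    return best
\end{verbatim}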

\section{Literature Review}
There has been a great deal of interest of late in entity-based filtering and ranking. One manifestation of that is the introduction of TREC KBA in 2012. Following that, a number of research works have been done on the topic \cite{frank2012building, ceccarelli2013learning, taneva2013gem, wang2013bit, balog2013multi}.  These works are based on the KBA 2012 task and dataset and they address the whole problem of entity filtering and ranking.  TREC KBA continued in 2013, but the task underwent some changes. The main changes between 2012 and 2013 are in the number of entities, the type of entities, the corpus, and the relevance ratings.

The number of entities increased from 29 to 141, and it included 20 Twitter entities. The TREC KBA 2012 corpus is 1.9TB after xz-compression and has 400M documents. By contrast, the KBA 2013 corpus is 6.45TB after xz-compression and GPG encryption. A version with all non-English documents removed is 4.5TB and consists of 1 billion documents. The 2013 corpus subsumed the 2012 corpus and added other sources from spinn3r, namely mainstream news, forum, arxiv, classified, reviews and memetracker.  A more important difference is, however, a change in the definitions of the relevance ratings vital and relevant. While in KBA 2012 a document was judged vital if it has citation-worthy content for a given entity, in 2013 it must also have freshness, that is, the content must trigger an edit of the given entity's KB entry. 

While the tasks of 2012 and 2013 are fundamentally the same, the approaches varied due to the size of the corpus. In 2013, all participants used filtering to reduce the size of the big corpus.   They used different ways of filtering: many of them used two or more different name variants from DBpedia such as labels, names, redirects, birth names, aliases, nicknames, same-as and alternative names \cite{wang2013bit,dietzumass,liu2013related, zhangpris}.  Although most of the participants used DBpedia name variants, none of them used all the name variants.  A few other participants used bold words in the first paragraph of the Wikipedia entity's page and anchor texts from other Wikipedia pages \cite{bouvierfiltering, niauniversity}. One participant used a Boolean \emph{and} query built from the tokens of the canonical names \cite{illiotrec2013}.  

All of the studies used filtering as their first step to generate a smaller set of documents, and many systems suffered from poor recall, which highly affected their overall performance \cite{frank2012building}. Although systems used different entity profiles to filter the stream, and achieved different performance levels, there is no study of the factors and choices that affect the filtering step itself. Of course, filtering has been extensively examined in the TREC Filtering track \cite{robertson2002trec}. However, those studies were isolated in the sense that they were intended to optimize recall. What we have here is a different scenario: documents have relevance ratings. Thus, we want to study filtering in connection to the relevance of documents to the entities, which can be done by coupling filtering to the later stages of the pipeline. This is new to the best of our knowledge, and the TREC KBA problem setting and datasets offer a good opportunity to examine this aspect of filtering. 

Moreover, there has been no study at this scale into what types of documents defy filtering and why. In this paper, we conduct a manual examination of the documents that are missed and classify them into different categories. We also estimate the general upper bound of recall using the different entity profiles and choose the best profile that results in increased overall performance as measured by F-measure. 

\section{Method}
All analyses in this paper are carried out on the documents that have
relevance assessments associated to them. For this purpose, we
extracted those documents from the big corpus. We experiment with all
KB entities. For each KB entity, we extract different name variants
from DBpedia and Twitter.
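As an illustration only, the DBpedia lookup can be sketched as a SPARQL query against the public endpoint; the predicates shown are indicative and need not match the exact set used in our extraction.
\begin{verbatim}
# Illustrative sketch of fetching name variants for one Wikipedia
# entity from the public DBpedia SPARQL endpoint. The predicates are
# indicative; the exact set used in our extraction may differ.
from SPARQLWrapper import SPARQLWrapper, JSON

def dbpedia_name_variants(resource):          # e.g. 'Benjamin_Bronfman'
    sparql = SPARQLWrapper('http://dbpedia.org/sparql')
    sparql.setQuery("""
        SELECT DISTINCT ?name WHERE {
          { <http://dbpedia.org/resource/%s> rdfs:label ?name }
          UNION { <http://dbpedia.org/resource/%s> foaf:name ?name }
          UNION { <http://dbpedia.org/resource/%s> dbo:birthName ?name }
          UNION { <http://dbpedia.org/resource/%s> dbo:alias ?name }
          UNION { ?r dbo:wikiPageRedirects
                  <http://dbpedia.org/resource/%s> ; rdfs:label ?name }
          FILTER (lang(?name) = 'en')
        }""" % ((resource,) * 5))
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return {b['name']['value'] for b in results['results']['bindings']}
\end{verbatim}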

\subsection{Entity Profiling}
We build entity profiles for the KB entities of interest. We have two
types: Twitter and Wikipedia. Both types of entities have been selected, on
purpose, by the track organisers to occur only sparsely and to be less well documented.
For the Wikipedia entities, we fetch different name variants
from DBpedia: name, label, birth name, alternative names,
redirects, nickname, or alias. 
These extraction results are summarized in Table
\ref{tab:sources}.
For the Twitter entities, we visit
their respective Twitter pages and fetch their display names. 
\begin{table}
\caption{Number of different DBpedia name variants}
\begin{center}

 \begin{tabular}{lr}
 Name variant& No. of strings  \\
\hline
 Name  &82\\
 Label   &121\\
Redirect  &49 \\
 Birth Name &6\\
 Nickname & 1\\
 Alias &1 \\
 Alternative Names &4\\

\hline
\end{tabular}
\end{center}
\label{tab:sources}
\end{table}


The collection contains a total number of 121 Wikipedia entities.
Every entity has a corresponding DBpedia label.  Only 82 entities have
a name string and only 49 entities have redirect strings. (Most of the
entities have only one string, except for a few cases with multiple
redirect strings; Buddy\_MacKay has the highest number (12) of
redirect strings.) 

We combine the different name variants we extracted to form a set of
strings for each KB entity. For Twitter entities, we used the display
names that we collected. We consider the name of an entity that
is part of its URL as canonical. For example, for the entity\\
\url{http://en.wikipedia.org/wiki/Benjamin_Bronfman},\\
Benjamin Bronfman is the canonical name. 
An example is given in Table \ref{tab:profile}.
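A minimal sketch of deriving this canonical name from the entity URL (illustrative only) is:
\begin{verbatim}
# Sketch: derive the canonical name from the entity URL by taking the
# last path segment and replacing underscores with spaces.
from urllib.parse import urlparse, unquote

def canonical_name(entity_url):
    # 'http://en.wikipedia.org/wiki/Benjamin_Bronfman'
    #   -> 'Benjamin Bronfman'
    last = urlparse(entity_url).path.rstrip('/').split('/')[-1]
    return unquote(last).replace('_', ' ')
\end{verbatim}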

From the combined name variants and
the canonical names, we created four sets of profiles for each
entity: canonical (cano), canonical partial (cano-part), all name
variants combined (all), and partial names of all name
variants (all-part). We refer to the last two profiles as name-variant
and name-variant partial. The names in parentheses are used in table
captions.
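The following is a minimal sketch of this profile construction, matching the example in Table \ref{tab:profile}, together with a naive substring matcher that stands in for the actual filtering criterion; the function names and the whitespace tokenisation are illustrative assumptions.
\begin{verbatim}
# Sketch: build the four profiles from a canonical name and the
# collected name variants; the substring matcher below is only a
# stand-in for the actual filtering criterion.
def build_profiles(canonical, name_variants):
    variants = set(name_variants)
    return {
        'cano':      {canonical},
        'cano-part': set(canonical.split()),
        'all':       variants,
        'all-part':  {tok for name in variants for tok in name.split()},
    }

def matches(profile_strings, doc_text):
    text = doc_text.lower()
    return any(s.lower() in text for s in profile_strings)

profiles = build_profiles('Benjamin Bronfman',
                          ['Ben Brewer', 'Benjamin Zachary Bronfman'])
\end{verbatim}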


\begin{table*}
\caption{Example entity profiles (upper part Wikipedia, lower part Twitter)}
\begin{center}
\begin{tabular}{l*{3}{c}}
 &Wikipedia&Twitter \\
\hline

 &Benjamin\_Bronfman& roryscovel\\
  cano&[Benjamin Bronfman] &[roryscovel]\\
  cano-part &[Benjamin, Bronfman]&[roryscovel]\\
  all&[Ben Brewer, Benjamin Zachary Bronfman] &[Rory Scovel] \\
  all-part& [Ben, Brewer, Benjamin, Zachary, Bronfman]&[Rory, Scovel]\\
			   
                  
   \hline                      
\end{tabular}
\end{center}
\label{tab:profile}
\end{table*}
\subsection{Annotation Corpus}

The annotation set is a combination of the annotations from both the Training Time Range (TTR) and the Evaluation Time Range (ETR), and consists of 68405 annotations.  Its breakdown into training and test sets is shown in Table \ref{tab:breakdown}.


\begin{table}
\caption{Number of annotation documents with respect to different categories(relevance rating, training and testing)}
\begin{center}
\begin{tabular}{l*{3}{c}r}
 &&Vital&Relevant  &Total \\
\hline

\multirow{3}{*}{Training}  &Wikipedia & 1932  &2051& 3672\\
			  &Twitter&189   &314&488 \\
			   &All Entities&2121&2365&4160\\
                        
\hline 
\multirow{3}{*}{Testing}&Wikipedia &6139   &12375 &16160 \\
                         &Twitter&1261   &2684&3842  \\
                         &All Entities&7400   &12059&20002 \\
                         
             \hline 
\multirow{3}{*}{Total} & Wikipedia       &8071   &14426&19832  \\
                       &Twitter  &1450  &2998&4330  \\
                       &All Entities&9521   &17424&24162 \\
	                 
\hline
\end{tabular}
\end{center}
\label{tab:breakdown}
\end{table}






%Most (more than 80\%) of the annotation documents are in the test set.
The 2013 training and test data contain 68405
annotations, of which 50688 are unique document-entity pairs.   Out of
these, 24162 unique document-entity pairs are vital (9521) or relevant
(17424).

 

\section{Experiments and Results}
 We conducted experiments to study  the effect of cleansing, different entity profiles, types of entities, category of documents, relevance ranks (vital or relevant), and the impact on classification.  In the following subsections, we present the results in different categories, and describe them.
 
 \subsection{Cleansing: raw or cleansed}
\begin{table}
\caption{Percentage of vital or relevant documents retrieved under different name variants (upper part from cleansed, lower part from raw)}
\begin{center}
\begin{tabular}{l@{\quad}rrrrrrr}
\hline
&cano&cano-part  &all &all-part  \\
\hline



   Wikipedia      &61.8  &74.8  &71.5  &77.9\\
   Twitter        &1.9   &1.9   &41.7  &80.4\\
   All Entities   &51.0  &61.7  &66.2  &78.4 \\	
  
 
\hline
\hline
   Wikipedia      &70.0  &86.1  &82.4  &90.7\\
   Twitter        & 8.7  &8.7   &67.9  &88.2\\
  All Entities    &59.0  &72.2  &79.8  &90.2\\
\hline

\end{tabular}
\end{center}
\label{tab:name}
\end{table}


The upper part of Table \ref{tab:name} shows the recall performances on the cleansed version and the lower part on the raw version. The recall performances for all entity types increase substantially in the raw version. Recall increases on Wikipedia entities vary from 8.2 to 12.8 percentage points, on Twitter entities from 6.8 to 26.2, and on all entities from 8.0 to 13.6.  These increases are substantial: to put them into perspective, an 11.8 point increase in recall on all entities corresponds to retrieving 2864 more unique document-entity pairs. %This suggests that cleansing has removed some documents that we could otherwise retrieve. 

\subsection{Entity Profiles}
If we look at the recall performances for the raw corpus, filtering documents by canonical names achieves a recall of 59\%.  Adding other name variants improves the recall to 79.8\%, an increase of 20.8 percentage points. This means that 20.8\% of documents mention the entities by names other than their canonical names. Canonical partial achieves a recall of 72\% and name-variant partial achieves 90.2\%, which says that 18.2\% of documents mention the entities by partial names of non-canonical name variants. 


%\begin{table*}
%\caption{Breakdown of recall percentage increases by document categories }
%\begin{center}\begin{tabular}{l*{9}{c}r}
% && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
% & &others&news&social & others&news&social &  others&news&social \\
%\hline
% 
%\multirow{4}{*}{Vital}	 &cano-part $-$ cano  	&8.2  &14.9    &12.3           &9.1  &18.6   &14.1             &0      %&0       &0  \\
%                         &all$-$ cano         	&12.6  &19.7    &12.3          &5.5  &15.8   &8.4             &73   &35%.9    &38.3  \\
%	                 &all-part $-$ cano\_part&9.7    &18.7  &12.7       &0    &0.5  &5.1        &93.2 & 93 &64.4 \\%
%	                 &all-part $-$ all     	&5.4  &13.9     &12.7           &3.6  &3.3    &10.8              &20.3 %  &57.1    &26.1 \\
%	                 \hline
%	                 
%\multirow{4}{*}{Relevant}  &cano-part $-$ cano  	&10.5  &15.1    &12.2          &11.1  &21.7   &14.1            % &0   &0    &0  \\
%                         &all $-$ cano         	&11.7  &36.6    &17.3          &9.2  &19.5   &9.9             &%54.5   &76.3   &66  \\
%	                 &all-part $-$ cano-part &4.2  &26.9   &15.8          &0.2    &0.7    &6.7           &72.2   &8%7.6 &75 \\
%	                 &all-part $-$ all     	&3    &5.4     &10.7           &2.1  &2.9    &11              &18.2   &%11.3    &9 \\
%	                 
%	                 \hline
%\multirow{4}{*}{total} 	&cano-part $-$ cano   	&10.9   &15.5   &12.4         &11.9  &21.3   &14.4          &0 %    &0       &0\\
%			&all $-$ cano         	&13.8   &30.6   &16.9         &9.1  &18.9   &10.2          &63.6  &61.8%    &57.5 \\
%                        &all-part $-$ cano-part	&7.2   &24.8   &15.9          &0.1    &0.7    &6.8           &8%2.2  &89.1    &71.3\\
%                        &all-part $-$ all     	&4.3   &9.7    &11.4           &3.0  &3.1   &11.0          &18.9  &27.3%    &13.8\\	                 
%	                 
%                                  	                 
%\hline
%\end{tabular}
%\end{center}
%\label{tab:source-delta2}
%\end{table*}


 \begin{table*}
\caption{Breakdown of recall performances by document source category}
\begin{center}\begin{tabular}{l*{9}{c}r}
 && \multicolumn{3}{ c| }{All entities}  & \multicolumn{3}{ c| }{Wikipedia} &\multicolumn{3}{ c| }{Twitter} \\ 
 & &others&news&social & others&news&social &  others&news&social \\
\hline
 
\multirow{4}{*}{Vital} &cano                 &82.2& 65.6& 70.9& 90.9&  80.1& 76.8&   8.1&  6.3&  30.5\\
&cano part & 90.4& 80.6& 83.1& 100.0& 98.7& 90.9&   8.1&  6.3&  30.5\\
&all  & 94.8& 85.4& 83.1& 96.4&  95.9& 85.2&   81.1& 42.2& 68.8\\
&all part &100& 99.2& 95.9& 100.0&  99.2& 96.0&   100&  99.3& 94.9\\
\hline
	                 
\multirow{4}{*}{Relevant} &cano & 84.2& 53.4& 55.6& 88.4& 75.6& 63.2& 10.6& 2.2& 6.0\\
&cano part &94.7& 68.5& 67.8& 99.6& 97.3& 77.3& 10.6& 2.2& 6.0\\
&all & 95.8& 90.1& 72.9& 97.6& 95.1& 73.1& 65.2& 78.4& 72.0\\
&all part &98.8& 95.5& 83.7& 99.7& 98.0& 84.1& 83.3& 89.7& 81.0\\
	                 
	                 \hline
\multirow{4}{*}{total} 	&cano    &   81.1& 56.5& 58.2& 87.7& 76.4& 65.7& 9.8& 3.6& 13.5\\
&cano part &92.0& 72.0& 70.6& 99.6& 97.7& 80.1& 9.8& 3.6& 13.5\\
&all & 94.8& 87.1& 75.2& 96.8& 95.3& 75.8& 73.5& 65.4& 71.1\\
&all part & 99.2& 96.8& 86.6& 99.8& 98.4& 86.8& 92.4& 92.7& 84.9\\
	                 
\hline
\end{tabular}
\end{center}
\label{tab:source-delta}
\end{table*}
    

%The break down of the raw corpus by document source category is presented in Table
%\ref{tab:source-delta}.  
 
 
 
 
 \subsection{Relevance Rating: vital and relevant}
 
When comparing recall for vital and relevant, we observe that
canonical names are more effective for vital than for relevant
documents, in particular for the Wikipedia entities. 
%For example, the recall for news is 80.1 and for social is 76, while the corresponding recall in relevant is 75.6 and 63.2 respectively.
We conclude that the most relevant documents mention the
entities by their common name variants.
%  \subsection{Difference by document categories}
%  
 
%  Generally, there is greater variation in relevant rank than in vital. This is specially true in most of the Delta's for Wikipedia. This  maybe be explained by news items referring to  vital documents by a some standard name than documents that are relevant. Twitter entities show greater deltas than Wikipedia entities in both vital and relevant. The greater variation can be explained by the fact that the canonical name of Twitter entities retrieves very few documents. The deltas that involve canonical names of Twitter entities, thus, show greater deltas.  
%  

% If we look in recall performances, In Wikipedia entities, the order seems to be others, news and social. This means that others achieve a higher recall than news than social.  However, in Twitter entities, it does not show such a strict pattern. In all, entities also, we also see almost the same pattern of other, news and social. 



  
\subsection{Recall across document categories: others, news and social}
The recall for Wikipedia entities in Table \ref{tab:name} ranged from
61.8\% (canonicals) to 77.9\% (name-variants).  Table
\ref{tab:source-delta} shows how recall is distributed across document
categories. For Wikipedia entities, across all entity profiles, others
have a higher recall followed by news, and then by social.  While the
recall for news ranges from 76.4\% to 98.4\%, the recall for social
documents ranges from 65.7\% to 86.8\%. In Twitter entities, however,
the pattern is different. In canonicals (and their partials), social
documents achieve higher recall than news.
%This indicates that social documents refer to Twitter entities by their canonical names (user names) more than news do. In name- variant partial, news achieve better results than social. The difference in recall between canonicals and name-variants show that news do not refer to Twitter entities by their user names, they refer to them by their display names.
Overall, across all entity types and all entity profiles, documents
in the others category achieve a higher recall than news, and news documents, in turn, achieve higher recall than social documents. 

% This suggests that social documents are the hardest  to retrieve.  This  makes sense since social posts such as tweets and blogs are short and are more likely to point to other resources, or use short informal names.


%%NOTE TABLE REMOVED:\\\\
%
%We computed four percentage increases in recall (deltas)  between the
%different entity profiles (Table \ref{tab:source-delta2}). The first
%delta is the recall percentage between canonical partial  and
%canonical. The second  is  between name= variant and canonical. The
%third is the difference between name-variant partial  and canonical
%partial and the fourth between name-variant partial and
%name-variant. we believe these four deltas offer a clear meaning. The
%delta between name-variant and canonical means the percentage of
%documents that the new name variants retrieve, but the canonical name
%does not. Similarly, the delta between  name-variant partial and
%partial canonical-partial means the percentage of document-entity
%pairs that can be gained by the partial names of the name variants. 
% The  biggest delta  observed is in Twitter entities between partials
% of all name variants and partials of canonicals (93\%). delta. Both
% of them are for news category.  For Wikipedia entities, the highest
% delta observed is 19.5\% in cano\_part - cano followed by 17.5\% in
% all\_part in relevant. 
  
  \subsection{Entity Types: Wikipedia and Twitter}
Table \ref{tab:name} summarizes the differences between Wikipedia and
Twitter entities.  Wikipedia entities' canonical representation
achieves a recall of 70\%, while canonical partial achieves a recall of 86.1\%. This is an
increase in recall of 16.1\%. By contrast, the increase in recall of
name-variant partial over name-variant is 8.3\%.
%This high increase in recall when moving from canonical names to their
%partial names, in comparison to the lower increase when moving from
%all name variants to their partial names can be explained by
%saturation: documents have already been extracted by the different
%name variants and thus using their partial names do not bring in many
%new relevant documents.
For Wikipedia entities, canonical
partial achieves better recall than name-variant in both the cleansed and
the raw corpus.  %In the raw extraction, the difference is about 3.7.
For Twitter entities, the recall of canonical matching is very low.%
\footnote{Canonical
and canonical partial are the same for Twitter entities because they
are one-word strings. For example, in \url{https://twitter.com/roryscovel},
``roryscovel'' is the canonical name and its partial is identical.}
%The low recall is because the canonical names of Twitter entities are
%not really names; they are usually arbitrarily created user names. It
%shows that  documents  refer to them by their display names, rarely
%by their user name, which is reflected in the name-variant recall
%(67.9\%). The use of name-variant partial increases the recall to
%88.2\%.



Tables \ref{tab:name} and \ref{tab:source-delta} show a higher recall
for Wikipedia than for Twitter entities. Generally, at both
aggregate and document category levels, we observe that recall
increases as we move from canonicals to canonical partial, to
name-variant, and to name-variant partial. The only case where this
does not hold is in the transition from Wikipedia's canonical partial
to name-variant. At the aggregate level (as can be inferred from Table
\ref{tab:name}), the difference in performance between  canonical  and
name-variant partial is 31.9\% on all entities, 20.7\% on Wikipedia
entities, and 79.5\% on Twitter entities. 

Section \ref{sec:analysis} discusses the most plausible explanations for these findings.
%% TODO: PERHAPS SUMMARY OF DISCUSSION HERE

\section{Impact on classification}
In the overall experimental setup, classification, ranking, and
evaluation are kept constant. Following the settings of
\cite{balog2013multi}, we use
WEKA's\footnote{\url{http://www.cs.waikato.ac.nz/~ml/weka/}} Random
Forest classifier. However, we use fewer features, which we found to
be more effective: running the classification algorithm with our
feature implementations and with the original features showed that
ours achieved better results. In total, we use 13 features, listed
below.
  
\paragraph*{Google's Cross Lingual Dictionary (GCLD)}

This is a mapping of strings to Wikipedia concepts and vice versa
\cite{spitkovsky2012cross}. We use the probability with which a string
is used as anchor text for a Wikipedia entity.

\paragraph*{jac} 
  Jaccard similarity between the document and the entity's Wikipedia page
\paragraph*{cos} 
  Cosine similarity between the document and the entity's Wikipedia page
\paragraph*{kl} 
  KL-divergence between the document and the entity's Wikipedia page
  
  \paragraph*{PPR}
For each entity, we computed a PPR score from
a Wikipedia snapshot  and we kept the top 100  entities along
with the corresponding scores.


\paragraph*{Surface Form (sForm)}
For each Wikipedia entity, we gathered DBpedia name variants. These
are redirects, labels and names.


\paragraph*{Context (contxL, contxR)}
From the WikiLink corpus \cite{singh12:wiki-links}, we collected
all left and right contexts (2 sentences to the left and 2 sentences
to the right of each mention) and generated n-grams, from unigrams up
to 4-grams, for each left and right context.
Finally, we select the 5 most frequent n-grams for each context.
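As an illustration only (not the extraction code we used), the selection of the most frequent context n-grams can be sketched as follows; the simple whitespace tokenization is an assumption.

\begin{verbatim}
from collections import Counter

def top_context_ngrams(contexts, n_max=4, top_k=5):
    # contexts: list of strings, each a 2-sentence left (or right)
    # context of an entity mention from the WikiLink corpus
    counts = Counter()
    for ctx in contexts:
        tokens = ctx.lower().split()   # illustrative tokenization
        for n in range(1, n_max + 1):  # uni-grams up to 4-grams
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
    return [ngram for ngram, _ in counts.most_common(top_k)]
\end{verbatim}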

\paragraph*{FirstPos}
  Term position of the first occurrence of the target entity in the document 
  body 
\paragraph*{LastPos }
  Term position of the last occurrence of the target entity in the document body

\paragraph*{LengthBody} Term count of document body
\paragraph*{LengthAnchor} Term count  of document anchor
  
\paragraph*{FirstPosNorm} 
  Term position of the first occurrence of the target entity in the document 
  body normalised by the document length 
\paragraph*{MentionsBody }
  No. of occurrences of the target entity in the  document body
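To make the textual similarity features concrete, the following is a minimal sketch, not the implementation used in our pipeline, of how the jac, cos, and kl features could be computed from bag-of-words representations of a document and the entity's Wikipedia page; the tokenizer and the smoothing constant \texttt{alpha} are illustrative assumptions.

\begin{verbatim}
import math, re
from collections import Counter

def tokenize(text):
    # simple lowercase word tokenizer (illustrative)
    return re.findall(r"\w+", text.lower())

def jac(doc, page):
    a, b = set(tokenize(doc)), set(tokenize(page))
    return len(a & b) / len(a | b) if a | b else 0.0

def cos(doc, page):
    a, b = Counter(tokenize(doc)), Counter(tokenize(page))
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def kl(doc, page, alpha=0.1):
    # KL(doc || page) over smoothed unigram distributions
    a, b = Counter(tokenize(doc)), Counter(tokenize(page))
    vocab = set(a) | set(b)
    ta = sum(a.values()) + alpha * len(vocab)
    tb = sum(b.values()) + alpha * len(vocab)
    return sum(((a[t] + alpha) / ta) *
               math.log(((a[t] + alpha) / ta) / ((b[t] + alpha) / tb))
               for t in vocab)
\end{verbatim}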



  
The features thus include textual similarity features such as cosine and
Jaccard similarity between the document and the entity's profile text,
document-entity features such as the positions and frequency of the
entity's mentions in the document body, and related-entity features
such as PPR scores. Here, we present results showing how the choices
in corpus, entity types, and entity profiles impact these later stages
of the pipeline. Tables \ref{tab:class-vital} and
\ref{tab:class-vital-relevant} show the performance in max-F.
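As a rough illustration of the evaluation, the sketch below trains a random forest (using scikit-learn as a stand-in for WEKA's Random Forest) on a feature matrix of the 13 features and computes max-F by sweeping a confidence cutoff over the classifier's scores. The cutoff grid and the use of class probabilities as confidence scores are assumptions; this is not the official TREC KBA scorer.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def max_f(y_true, scores, cutoffs=np.linspace(0.0, 1.0, 101)):
    # best F1 over all confidence cutoffs (max-F)
    y_true = np.asarray(y_true)
    best = 0.0
    for c in cutoffs:
        pred = scores >= c
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        if tp == 0:
            continue
        p, r = tp / (tp + fp), tp / (tp + fn)
        best = max(best, 2 * p * r / (p + r))
    return best

def evaluate(X_train, y_train, X_test, y_test):
    # X_*: document-entity feature matrices (13 features per pair)
    # y_*: 1 if the pair is judged vital (or vital-relevant), else 0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]
    return max_f(y_test, scores)
\end{verbatim}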
\begin{table*}
\caption{Vital performance (max-F) under different entity profiles (upper part from the cleansed corpus, lower part from the raw corpus)}
\begin{center}
\begin{tabular}{ll@{\quad}lllllll}
\hline
%&\multicolumn{1}{l}{\rule{0pt}{12pt}}&\multicolumn{1}{l}{\rule{0pt}{12pt}cano}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical partial }&\multicolumn{1}{l}{\rule{0pt}{12pt}name-variant }&\multicolumn{1}{l}{\rule{0pt}{50pt}name-variant partial}\\[5pt]
  &&cano&cano-part&all  &all-part \\


   all-entities &max-F& 0.241&0.261&0.259&0.265\\
%	      &SU&0.259  &0.258 &0.263 &0.262 \\	
   Wikipedia &max-F&0.252&0.274& 0.265&0.271\\
%	      &SU& 0.261& 0.259&  0.265&0.264 \\
   
   
   twitter &max-F&0.105&0.105&0.218&0.228\\
%     &SU &0.105&0.250& 0.254&0.253\\
  
 
\hline
\hline
  all-entities &max-F & 0.240 &0.272 &0.250&0.251\\
%	  &SU& 0.258   &0.151  &0.264  &0.258\\
   Wikipedia&max-F &0.257&0.257&0.257&0.255\\
%   &SU	     & 0.265&0.265 &0.266 & 0.259\\
   twitter&max-F &0.188&0.188&0.208&0.231\\
%	&SU&    0.269 &0.250 &0.250&0.253\\
\hline

\end{tabular}
\end{center}
\label{tab:class-vital}
\end{table*}
  
  
  \begin{table*}
\caption{Vital-relevant performance (max-F) under different entity profiles (upper part from the cleansed corpus, lower part from the raw corpus)}
\begin{center}
\begin{tabular}{ll@{\quad}lllllll}
\hline
%&\multicolumn{1}{l}{\rule{0pt}{12pt}}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical}&\multicolumn{1}{l}{\rule{0pt}{12pt}canonical partial }&\multicolumn{1}{l}{\rule{0pt}{12pt}name-variant }&\multicolumn{1}{l}{\rule{0pt}{50pt}name-variant partial}\\[5pt]

 &&cano&cano-part&all  &all-part \\

   all-entities &max-F& 0.497&0.560&0.579&0.607\\
%	      &SU&0.468  &0.484 &0.483 &0.492 \\	
   Wikipedia &max-F&0.546&0.618&0.599&0.617\\
%   &SU&0.494  &0.513 &0.498 &0.508 \\
   
   twitter &max-F&0.142&0.142& 0.458&0.542\\
%    &SU &0.317&0.328&0.392&0.392\\
  
 
\hline
\hline
  all-entities &max-F& 0.509 &0.594 &0.590&0.612\\
%    &SU       &0.459   &0.502  &0.478  &0.488\\
   Wikipedia &max-F&0.550&0.617&0.605&0.618\\
%   &SU	     & 0.483&0.498 &0.487 & 0.495\\
   twitter &max-F&0.210&0.210&0.499&0.580\\
%	&SU&    0.319  &0.317 &0.421&0.446\\
\hline

\end{tabular}
\end{center}
\label{tab:class-vital-relevant}
\end{table*}




Table \ref{tab:class-vital} shows the classification performance (max-F) for vitally judged documents. For Wikipedia entities, except in the canonical profile, the cleansed version achieves better results than the raw version. For Twitter entities, however, the raw corpus achieves better results in all entity profiles except name-variant. At the aggregate (both Wikipedia and Twitter) level, cleansed achieves better results in three profiles; only in canonical partial does raw perform better. Overall, cleansed achieves better results than raw. This result is interesting because we saw in previous sections that the raw corpus achieves higher recall than cleansed; in the case of name-variant partial, for example, 10\% more relevant documents are retrieved from the raw corpus. The gain in recall in the raw corpus does not translate into a gain in F-measure; in fact, in most cases the F-measure decreased. % One explanation for this is that it brings in many false positives from, among related links, adverts, etc.
For Wikipedia entities, canonical partial achieves the highest performance. For Twitter entities, name-variant partial achieves the best results.

In the vital-relevant category (Table \ref{tab:class-vital-relevant}), the performances are different. For Wikipedia entities, raw achieves better results in all cases except canonical partial. For Twitter entities, the raw corpus achieves better results in all cases. In terms of entity profiles, Wikipedia's canonical partial achieves the best F-score; for Twitter entities, as before, name-variant partial does. The raw corpus has more effect on relevant documents and on Twitter entities.

%The fact that canonical partial names achieve better results is interesting.  We know that partial names were used as a baseline in TREC KBA 2012, but no one of the KBA participants actually used partial names for filtering.


   
%    
   
   
%    
%    \begin{table*}
% \caption{Breakdown of missing documents by sources for cleansed, raw and cleansed-and-raw}
% \begin{center}\begin{tabular}{l*{9}r}
%   &others&news&social \\
% \hline
% 
% 			&missing from raw only &   0 &0   &217 \\
% 			&missing from cleansed only   &430   &1321     &1341 \\
% 
%                          &missing from both    &19 &317     &2196 \\
%                         
%                          
% 
% \hline
% \end{tabular}
% \end{center}
% \label{tab:miss-category}
% \end{table*}



%    To gain more insight, I sampled for each 35 entities, one document-entity pair and looked into the contents. The results are in \ref{tab:miss from both}
%    
%    \begin{table*}
% \caption{Missing documents and their mentions }
% \begin{center}
% 
%  \begin{tabular}{l*{4}{l}l}
%  &entity&mentioned by &remark \\
% \hline
%  Jeremy McKinnon  & Jeremy McKinnon& social, mentioned in read more link\\
% Blair Thoreson   & & social, There is no mention by name, the article talks about a subject that is political (credit rating), not apparent to me\\
%   Lewis and Clark Landing&&Normally, maha music festival does not mention ,but it was held there \\
% Cementos Lima &&It appears a mistake to label it vital. the article talks about insurance and centos lima is a cement company.entity-deleted from wiki\\
% Corn Belt Power Cooperative & &No content at all\\
% Marion Technical Institute&&the text could be of any place. talks about a place whose name is not mentioned. 
%  roryscovel & &Talks about a video hinting that he might have seen in the venue\\
% Jim Poolman && talks of party convention, of which he is member  politician\\
% Atacocha && No mention by name The article talks about waste from mining and Anacocha is a mining company.\\
% Joey Mantia & & a mention of a another speeedskater\\
% Derrick Alston&&Text swedish, no mention.\\
% Paul Johnsgard&& not immediately clear why \\
% GandBcoffee&& not immediately visible why\\
% Bob Bert && talks about a related media and entertainment\\
% FrankandOak&& an article that talks about a the realease of the most innovative companies of which FrankandOak is one. \\
% KentGuinn4Mayor && a theft in a constituency where KentGuinn4Mayor is vying.\\
% Hjemkomst Center && event announcement without mentioning where. it takes a a knowledge of \\
% BlossomCoffee && No content\\
% Scotiabank Per\%25C3\%25BA && no content\\
% Drew Wrigley && politics and talk of oilof his state\\
% Joshua Zetumer && mentioned by his film\\
% Théo Mercier && No content\\
% Fargo Air Museum && No idea why\\
% Stevens Cooperative School && no content\\
% Joshua Boschee && No content\\
% Paul Marquart &&  No idea why\\
% Haven Denney && article on skating competition\\
% Red River Zoo && animal show in the zoo, not indicated by name\\
% RonFunches && talsk about commedy, but not clear whyit is central\\
% DeAnne Smith && No mention, talks related and there are links\\
% Richard Edlund && talks an ward ceemony in his field \\
% Jennifer Baumgardner && no idea why\\
% Jeff Tamarkin && not clear why\\
% Jasper Schneider &&no mention, talks about rural development of which he is a director \\
% urbren00 && No content\\
% \hline
% \end{tabular}
% \end{center}
% \label{tab:miss from both}
% \end{table*}

 

   
  
\section{Analysis and Discussion}\label{sec:analysis}


We conducted experiments to study the impact on recall of the
different components of the filtering stage of an entity-centric filtering and ranking pipeline. Specifically,
we studied the impact of cleansing,
entity profiles, relevance ratings, and categories of documents. We also measured the impact of these factors and choices on the later stages of the pipeline.

Experimental results show that cleansing can remove part or all of the content of documents, making them difficult to retrieve; these documents can otherwise be retrieved from the raw version. The use of the raw corpus brings in documents that cannot be retrieved from the cleansed corpus. This is true for all entity profiles and for all entity types. The recall difference between the cleansed and raw corpus ranges from 6.8\% to 26.2\%, which amounts to thousands of document-entity pairs. We believe this is a substantial increase. However, the recall increases do not always translate into an improved overall F-score. In the vital ranking, for both Wikipedia and aggregate entities, the cleansed version performs better than the raw version; for Twitter entities, the raw corpus achieves better results except for name-variant, though the difference is negligible. For vital-relevant, however, the raw corpus performs better across all entity profiles and entity types
except for the canonical partial profile of Wikipedia entities.

The use of different profiles also shows a big difference in recall. Except for Wikipedia entities, where canonical partial achieves a better recall than name-variant, there is a steady increase in recall from canonical to canonical partial, to name-variant, and to name-variant partial. This pattern is also observed across the document categories. Here too, however, the relationship between the gain in recall as we move from a less rich profile to a richer one and the overall performance as measured by F-score is not linear.



Taking both the recall at filtering and the overall F-score into account, there is a clear trade-off between using a richer entity profile and retrieving irrelevant documents: the richer the profile, the more relevant documents it retrieves, but also the more irrelevant ones. To put this into perspective, let us compare the number of documents retrieved with canonical partial and with name-variant partial. Using the raw corpus, the former retrieves a total of 2,547,487 documents and achieves a recall of 72.2\%, while the latter retrieves a total of 4,735,318 documents and achieves a recall of 90.2\%. The total number of documents extracted increases by 85.9\% for a recall gain of 18\%; the rest, 67.9\%, are newly introduced irrelevant documents.
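Concretely, these figures follow directly from the document counts:
\[
\frac{4{,}735{,}318 - 2{,}547{,}487}{2{,}547{,}487} \approx 85.9\%
\qquad \mbox{and} \qquad
90.2\% - 72.2\% = 18.0\% .
\]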

%%%%%%%%%%%%


In vital ranking, across all entity profiles and corpus versions, Wikipedia's canonical partial achieves better performance than any other Wikipedia entity profile. For vital-relevant documents too, Wikipedia's canonical partial achieves the best result in the cleansed corpus; in the raw corpus, it achieves slightly less than name-variant partial. For Twitter entities, the name-variant partial profile achieves the highest F-score across all entity profiles and corpus versions.


Cleansing impacts Twitter entities and relevant documents. This is
validated by the observation that the recall gains in Twitter entities
and in the relevant category in the raw corpus also translate into
overall performance gains. This observation implies that cleansing
removes relevant and social documents more than it removes vital and
news documents. That it removes relevant documents more than vital
ones can be explained by the fact that cleansing removes related links
and adverts, which may contain a mention of the entities; one example
we saw was an image, removed by cleansing, whose accompanying text
contained an entity name and was actually relevant. That it removes
social documents can be explained by the fact that most of the
documents missing from the cleansed corpus are social, and all of the
documents missing from the raw corpus are social. In both cases,
social documents seem to suffer most from the text transformation and
cleansing processes.


Canonical partial is the best entity profile for Wikipedia entities. It is interesting that the retrieval of thousands of additional vital-relevant document-entity pairs by name-variant partial does not translate into an increase in overall performance. It is even more interesting since, to the best of our knowledge, canonical partial was not considered a contending profile for stream filtering by any participant. With this understanding, there is actually no need to fetch the different name variants from DBpedia, saving time and computational resources.


%%%%%%%%%%%%




The deltas between entity profiles, relevance ratings, and document categories reveal four differences between Wikipedia and Twitter entities. 1) For Wikipedia entities, the difference between canonical partial and canonical is higher (16.1\%) than between name-variant partial and name-variant (8.3\%). This can be explained by saturation: documents have already been extracted by the name variants, so using their partials does not bring in many new relevant documents. 2) Twitter entities are mentioned by name-variant or name-variant partial, as seen in the high recall these profiles achieve compared to the low recall achieved by canonical (or canonical partial). This indicates that documents (especially news and others) almost never use user names to refer to Twitter entities; name-variant partial is the best entity profile for Twitter entities. 3) Comparatively speaking, however, social documents refer to Twitter entities by their user names more than news and others do, suggesting a difference in
adherence to naming conventions. 4) Wikipedia entities achieve higher recall and higher overall performance.

The high recall and subsequent higher overall performance of Wikipedia entities can be due to two reasons. 1) Wikipedia entities are better described than Twitter entities; the fact that we can retrieve different name variants from DBpedia is a measure of this relatively rich description. Rich description plays a role both in filtering and in the computation of features, such as similarity measures, in the later stages of the pipeline. By contrast, we have only two names for Twitter entities: their user names and the display names we collect from their Twitter pages. 2) There is no DBpedia-like resource for Twitter entities from which alternative names can be collected.


In the experimental results, we also observed that recall scores in the vital category are higher than in the relevant category. This observation confirms a commonly held assumption: mention (frequency) is related to relevance. This is the assumption behind the use of term frequency as an indicator of document relevance in many information retrieval systems. The more a document explicitly mentions an entity by name, the more likely the document is vital to the entity.

Across document categories, we observe a recall pattern of others, followed by news, and then by social. Social documents are the hardest to retrieve. This can be explained by the fact that social documents (tweets and blogs) are more likely to point to a resource where the entity is mentioned, to mention the entity with some short abbreviation, or to talk about the entity without mentioning it, relying on context. By contrast, news documents mention the entities they talk about using the common name variants more than social documents do. However, the greater difference in percentage recall between the different entity profiles in the news category indicates that news documents refer to a given entity with different names, rather than by one standard name. Documents in the others category show the least variation, and social documents fall in between the two. For Wikipedia entities, the deltas between canonical partial and canonical, and between name-variant and canonical, are high, an indication that canonical partials
and name-variants bring in new relevant documents that cannot be retrieved by canonicals. The other two deltas are very small, suggesting that the partial names of the name variants do not bring in new relevant documents.


\section{Unfilterable documents}

\subsection{Missing vital-relevant documents \label{miss}}

% 

The use of name-variant partial for filtering is an aggressive attempt to retrieve as many relevant documents as possible at the cost of retrieving irrelevant documents. However, we still miss about 2363 (10\%) of the vital-relevant documents. Why are these documents missed? If they are not mentioned by the partial names of name variants, what are they mentioned by? Table \ref{tab:miss} shows the documents that we miss with respect to the cleansed and raw corpora. The upper part shows the number of documents missing from the cleansed and raw versions of the corpus; the lower part shows the intersections and exclusions in each corpus.

\begin{table}
\caption{The number of documents missing from the raw and cleansed extractions.}
\begin{center}
\begin{tabular}{l@{\quad}llllll}
\hline
\multicolumn{1}{l}{\rule{0pt}{12pt}category}&\multicolumn{1}{l}{\rule{0pt}{12pt}Vital }&\multicolumn{1}{l}{\rule{0pt}{12pt}Relevant }&\multicolumn{1}{l}{\rule{0pt}{12pt}Total }\\[5pt]
\hline

Cleansed &1284 & 1079 & 2363 \\
Raw & 276 & 4951 & 5227 \\
\hline
 missing only from cleansed &1065&2016&3081\\
  missing only from raw  &57 &160 &217 \\
  Missing from both &219 &1927&2146\\
\hline



\end{tabular}
\end{center}
\label{tab:miss}
\end{table}

One would assume that the set of document-entity pairs extracted from the cleansed corpus is a subset of those extracted from the raw corpus. We find that this is not the case. There are 217 unique document-entity pairs that are retrieved from the cleansed corpus but not from the raw corpus; 57 of them are vital. Similarly, there are 3081 document-entity pairs that are missing from the cleansed corpus but present in the raw corpus; 1065 of them are vital. Examining the content of the documents reveals that in each case a part of the text of the corresponding document is missing. All the documents that we miss from the raw corpus are social: documents such as tweets, blogs, and posts from other social media. To meet the format of the raw data (binary byte array), some of them must have been converted after collection and, along the way, lost part or all of their content. It is similar for the documents that we miss from the cleansed corpus: part or all of the content is lost during the cleansing process (the removal of
HTML tags and non-English documents). In both cases, the mention of the entity happened to be in the part of the text that was cut out during transformation.
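The breakdown in Table \ref{tab:miss} is a simple set computation over document-entity pairs; the following is a minimal sketch, assuming the judged pairs and the pairs retrieved from each corpus version are available as sets of (document id, entity) tuples.

\begin{verbatim}
def missing_breakdown(judged, retrieved_cleansed, retrieved_raw):
    # judged, retrieved_*: sets of (document_id, entity) pairs
    miss_cleansed = judged - retrieved_cleansed
    miss_raw = judged - retrieved_raw
    return {
        "missing only from cleansed": len(miss_cleansed - miss_raw),
        "missing only from raw": len(miss_raw - miss_cleansed),
        "missing from both": len(miss_cleansed & miss_raw),
    }
\end{verbatim}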
 

The most interesting set of relevance judgments are those that we miss from both the raw and cleansed extractions. These are 2146 unique document-entity pairs, 219 of which have vital relevance judgments. The missed vital annotations involve 28 Wikipedia and 7 Twitter entities, 35 in total. The great majority (86.7\%) of the documents are social. This suggests that social documents (tweets and blogs) can talk about entities without mentioning them by name more than news and others do, which is, of course, in line with intuition.
   


%%%%%%%%%%%%%%%%%%%%%%

We observed that there are vital-relevant documents that we miss only from the raw corpus, and similarly only from the cleansed corpus. The reason for this is the transformation from one format to another. The most interesting documents are those that we miss from both the raw and cleansed corpus. We first identified the KB entities that have a vital relevance judgment and whose documents cannot be retrieved (35 in total), and conducted a manual examination of their content to find out why they are missing.
 
 
We observed that, among the missing documents, different document ids can have the same content and be judged multiple times for a given entity. %In the vital annotation, there are 88 news, and 409 weblog.
Avoiding duplicates, we randomly selected 35 documents, one for each entity. The documents are 13 news and 22 social. Below, we classify the situations under which a document can be vital for an entity without mentioning the entity by any of the entity profiles we used for filtering.

\paragraph*{Outgoing link mentions} A post (tweet) with an outgoing link which mentions the entity.
\paragraph*{Event place - Event} A document that talks about an event is vital to the location entity where it takes place. For example, the Maha Music Festival takes place in Lewis and Clark Landing, and a document talking about the festival is vital for the park. There are also cases where an event's address places the event in a park, and because of that the document becomes vital to the park. This is essentially being mentioned by an address which belongs to a larger space.
\paragraph*{Entity - related entity} A document about an important figure such as an artist or athlete can be vital to another. This is especially true if the two are contending for the same title, or one has taken a title or award from the other.
\paragraph*{Organization - main activity} A document that talks about an area in which a company is active is vital for the organization. For example, Atacocha is a mining company, and a news item on mining waste was annotated vital.
\paragraph*{Entity - group} If an entity belongs to a certain group (class), a news item about the group can be vital for the individual members. FrankandOak is named an innovative company, and a news item that talks about the group of innovative companies is relevant for it. Other examples are big events to which an entity is related, such as film awards for actors.
\paragraph*{Artist - work} Documents that discuss the work of artists can be relevant to the artists. Such cases include books or films being vital for the book's author or the film's director (or actors). RoboCop is a film whose screenplay is by Joshua Zetumer; a blog that talks about the film was judged vital for him.
\paragraph*{Politician - constituency} A major political event in a certain constituency is vital for the politician from that constituency.
A good example is a weblog that talks about two North Dakota counties being declared drought disasters; the news is vital for Joshua Boschee, a politician and member of the North Dakota Democratic Party.
\paragraph*{Head - organization} A document that talks about an organization of which the entity is the head can be vital for the entity. Jasper\_Schneider is the USDA Rural Development state director for North Dakota, and an article about problems of primary health centers in North Dakota was judged vital for him.
\paragraph*{World Knowledge} Some things are impossible to know without world knowledge. For example, ``refreshments, treats, gift shop specials, `bountiful, fresh and fabulous holiday decor,' a demonstration of simple ways to create unique holiday arrangements for any home; free and open to the public'' is judged relevant to Hjemkomst\_Center. This is a social media post, and unless one knows the person posting it, there is no way to tell this from the text alone. Similarly, ``learn about the gray wolf's hunting and feeding behaviors and watch the wolves have their evening meal of a full deer carcass; \$15 for members, \$20 for nonmembers'' is judged vital to Red\_River\_Zoo.
\paragraph*{No document content} A small number of documents were found to have no content.
\paragraph*{Disagreement} For a few remaining documents, the authors disagree with the assessors as to why these are vital to the entity.



\section{Conclusions}
In this paper, we examined the filtering stage of entity-centric stream filtering and ranking by holding the later stages of the pipeline fixed. In particular, we studied the cleansing step, different entity profiles, the type of entities (Wikipedia or Twitter), the categories of documents (news, social, or others), and the relevance ratings. We attempted to address the following research questions: 1) does cleansing affect filtering and subsequent performance? 2) what is the most effective way of entity profiling? 3) is filtering different for Wikipedia and Twitter entities? 4) are some types of documents easily filterable while others are not? 5) does a gain in recall at the filtering step translate into a gain in F-measure at the end of the pipeline? and 6) under what circumstances can vital documents not be retrieved?

Cleansing does remove parts or the entire contents of documents, making them irretrievable. However, because of the introduction of false positives, recall gains from the raw corpus and from some richer entity profiles do not necessarily translate into overall performance gains. The conclusion on this is mixed: cleansing helps improve overall performance on vital documents and Wikipedia entities, but reduces it on Twitter entities and on the relevant category of the relevance ratings. Vital and relevant documents also show a difference in retrieval performance: vital documents are easier to filter than relevant ones.


Despite an aggressive attempt to filter as many vital-relevant documents as possible, we observe that there are still documents that we miss. While some could be retrieved with some modifications, others cannot: some documents defy retrieval by an information filtering system no matter how rich a representation of the entities it uses. The circumstances under which this happens are many. We found that some documents have no content at all, and for a few the judgment appears subjective (it is not clear why they were judged vital). The main circumstances under which vital documents can defy filtering, however, are: outgoing link mentions,
venue - event, entity - related entity, organization - main area of operation, entity - group, artist - artist's work, politician - constituency, and world knowledge.


%ACKNOWLEDGMENTS are optional
%\section{Acknowledgments}

%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{sigproc}  % sigproc.bib is the name of the Bibliography in this case
% You must have a proper ".bib" file
%  and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
%APPENDICES are optional
%\balancecolumns


\end{document}