I've heard about the Apriori algorithm several times before but never had the time or the opportunity to dig into it. Can anyone explain to me in a simple way how this algorithm works? Also, a basic example would make it a lot easier for me to understand.

# Apriori Algorithm


#### Related Solutions

I usually go with something like the implementation given in Josh Bloch's *fabulous* Effective Java. It's fast and creates a pretty good hash which is unlikely to cause collisions. Pick two different prime numbers, e.g. 17 and 23, and do:

```
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = 17;
        // Suitable nullity checks etc, of course :)
        hash = hash * 23 + field1.GetHashCode();
        hash = hash * 23 + field2.GetHashCode();
        hash = hash * 23 + field3.GetHashCode();
        return hash;
    }
}
```
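The same multiply-and-add pattern can be sketched outside C# (a Python sketch, masking to 32 bits to stand in for `unchecked` wrap-around; `combine_hash` and the field values are illustrative):

```python
MASK = (1 << 32) - 1  # emulate 32-bit integer wrap-around

def combine_hash(*fields):
    """Bloch-style combination: start at 17, fold each field in via h * 23 + field."""
    h = 17
    for f in fields:
        h = (h * 23 + hash(f)) & MASK
    return h

# Unlike a plain XOR combination, this is sensitive to field order:
print(combine_hash(1, 2) != combine_hash(2, 1))  # True
```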

As noted in comments, you may find it's better to pick a large prime to multiply by instead. Apparently 486187739 is good... and although most examples I've seen with small numbers tend to use primes, there are at least similar algorithms where non-prime numbers are often used. In the not-quite-FNV example later, for example, I've used numbers which apparently work well - but the initial value isn't a prime. (The multiplication constant *is* prime though. I don't know quite how important that is.)

This is better than the common practice of `XOR`ing hashcodes for two main reasons. Suppose we have a type with two `int` fields:

```
XorHash(x, x) == XorHash(y, y) == 0 for all x, y
XorHash(x, y) == XorHash(y, x) for all x, y
```
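A quick sketch makes both failure modes concrete (Python, with `xor_hash` standing in for an XOR-combined `GetHashCode`):

```python
def xor_hash(field1, field2):
    # Combining fields with XOR alone ignores field order,
    # and any two equal fields cancel out to 0.
    return hash(field1) ^ hash(field2)

print(xor_hash(42, 42))                  # 0 -- every (x, x) pair collides at 0
print(xor_hash(1, 2) == xor_hash(2, 1))  # True -- swapped fields collide too
```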

By the way, the earlier algorithm is the one currently used by the C# compiler for anonymous types.

This page gives quite a few options. I think for most cases the above is "good enough", and it's incredibly easy to remember and get right. The FNV alternative is similarly simple, but uses different constants and `XOR` instead of `ADD` as the combining operation. It looks *something* like the code below, but the normal FNV algorithm operates on individual bytes, so this would require modification to perform one iteration per byte instead of per 32-bit hash value. FNV is also designed for variable lengths of data, whereas the way we're using it here is always for the same number of field values. Comments on this answer suggest that the code here doesn't actually work as well (in the sample case tested) as the addition approach above.

```
// Note: Not quite FNV!
public override int GetHashCode()
{
    unchecked // Overflow is fine, just wrap
    {
        int hash = (int) 2166136261;
        // Suitable nullity checks etc, of course :)
        hash = (hash * 16777619) ^ field1.GetHashCode();
        hash = (hash * 16777619) ^ field2.GetHashCode();
        hash = (hash * 16777619) ^ field3.GetHashCode();
        return hash;
    }
}
```
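For comparison, here is what the genuine byte-at-a-time FNV-1a looks like (a Python sketch of the 32-bit variant, using the same offset basis and prime as the constants above):

```python
def fnv1a_32(data: bytes) -> int:
    """32-bit FNV-1a: XOR each byte in, then multiply by the FNV prime."""
    h = 2166136261                        # FNV-1a 32-bit offset basis
    for byte in data:
        h ^= byte
        h = (h * 16777619) % (1 << 32)    # FNV prime, wrapped to 32 bits
    return h

print(hex(fnv1a_32(b"a")))  # 0xe40c292c (the published FNV-1a test vector)
```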

One thing to be aware of: ideally you should prevent your equality-sensitive (and thus hashcode-sensitive) state from changing after adding the object to a collection that depends on its hash code.

As per the documentation:

> You can override GetHashCode for immutable reference types. In general, for mutable reference types, you should override GetHashCode only if:
>
> - You can compute the hash code from fields that are not mutable; or
> - You can ensure that the hash code of a mutable object does not change while the object is contained in a collection that relies on its hash code.

The link to the FNV article is broken but here is a copy in the Internet Archive: Eternally Confuzzled - The Art of Hashing

I cannot understand how to identify a function with logarithmic running time.

The most common attributes of a logarithmic running-time function are that:

- the choice of the next element on which to perform some action is one of several possibilities, and
- only one of them will need to be chosen;

or

- the elements on which the action is performed are the digits of n.

This is why, for example, looking up a person in a phone book is O(log n). You don't need to check *every* person in the phone book to find the right one; instead, you can simply divide-and-conquer based on where their name falls alphabetically, and within each section you only need to explore a small part before you eventually find the phone number.

Of course, a bigger phone book will still take you longer, but the time won't grow in proportion to the increase in size.
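The phone-book lookup can be sketched as a binary search (Python; the names and numbers are made up):

```python
def phone_lookup(entries, name):
    """Binary search over (name, number) pairs sorted by name: O(log n)."""
    lo, hi = 0, len(entries) - 1
    while lo <= hi:
        mid = (lo + hi) // 2           # open the book near the middle
        entry_name, number = entries[mid]
        if entry_name == name:
            return number
        if entry_name < name:          # the name lies in the later half
            lo = mid + 1
        else:                          # the name lies in the earlier half
            hi = mid - 1
    return None                        # not listed

# Each iteration halves the remaining range, so doubling the
# book's size adds only one extra step.
book = [("Adams", "555-0100"), ("Baker", "555-0101"),
        ("Clark", "555-0102"), ("Davis", "555-0103")]
print(phone_lookup(book, "Clark"))  # 555-0102
```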

We can expand the phone book example to compare other kinds of operations and *their* running time. We will assume our phone book has *businesses* (the "Yellow Pages") which have unique names and *people* (the "White Pages") which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.

Here are the running times of some operations we might perform on the phone book, from fastest to slowest:

- **O(1) (in the worst case):** Given the page that a business's name is on and the business name, find the phone number.
- **O(1) (in the average case):** Given the page that a person's name is on and their name, find the phone number.
- **O(log n):** Given a person's name, find the phone number by picking a random point about halfway through the part of the book you haven't searched yet, then checking to see whether the person's name is at that point. Then repeat the process about halfway through the part of the book where the person's name lies. (This is a binary search for a person's name.)
- **O(n):** Find all people whose phone numbers contain the digit "5".
- **O(n):** Given a phone number, find the person or business with that number.
- **O(n log n):** There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.

For the below examples, we're now at the printer's office. Phone books are waiting to be mailed to each resident or business, and there's a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.

- **O(n log n):** We want to personalize the phone book, so we're going to find each person or business's name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.
- **O(n²):** A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero.
- **O(n · n!):** We're ready to load the phone books onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it's putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they're in the right order, and if not, it unloads them and starts over. (This is the dreaded **bogo sort**.)
- **O(nⁿ):** You fix the robot so that it's loading things correctly. The next day, one of your co-workers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phone books! Fortunately, the robot's bug-detection systems are sophisticated enough that the robot doesn't try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that's been printed.


## Best Solution

## Apriori Algorithm

It is a candidate-generation-and-test approach for frequent pattern mining in datasets. There are two things you have to remember.

**Apriori pruning principle:** If any itemset is infrequent, its supersets should not be generated/tested.

**Apriori property:** A given `(k+1)-itemset` is a candidate `(k+1)-itemset` only if every one of its `k-itemset` subsets is frequent.

Now, here is the Apriori algorithm in 4 steps.

1. **Scan** the database once to get the frequent `1-itemset`.
2. **Generate** length-`k+1` candidate itemsets from the length-`k` frequent itemsets.
3. **Test** the candidates against the database/dataset.
4. **Terminate** when no more frequent or candidate itemsets can be generated.

## Solved Example

Suppose there is a transaction database as follows, with 4 transactions including their transaction IDs and the items bought in them. Assume the minimum support (`min_sup`) is `2`. The support of an itemset is the number of transactions in which that itemset is present/included.

Transaction DB:

| TID | Items |
| --- | --- |
| 10 | A, C, D |
| 20 | B, C, E |
| 30 | A, B, C, E |
| 40 | B, E |

Now, let's create the candidate `1-itemsets` by the 1st scan of the DB. This is simply called the set `C_1`:

| Itemset | Support |
| --- | --- |
| {A} | 2 |
| {B} | 3 |
| {C} | 3 |
| {D} | 1 |
| {E} | 3 |

If we test this against `min_sup`, we can see that `{D}` does not satisfy the `min_sup` of `2`. So it will not be included in the frequent `1-itemset`, which we simply call the set `L_1`:

| Itemset | Support |
| --- | --- |
| {A} | 2 |
| {B} | 3 |
| {C} | 3 |
| {E} | 3 |

Now, let's scan the DB for the 2nd time and generate the candidate `2-itemsets`, which we simply call the set `C_2`:

| Itemset | Support |
| --- | --- |
| {A,B} | 1 |
| {A,C} | 2 |
| {A,E} | 1 |
| {B,C} | 2 |
| {B,E} | 3 |
| {C,E} | 2 |

As you can see, the `{A,B}` and `{A,E}` itemsets do not satisfy the `min_sup` of `2`, and hence they will not be included in the frequent `2-itemset` `L_2`:

| Itemset | Support |
| --- | --- |
| {A,C} | 2 |
| {B,C} | 2 |
| {B,E} | 3 |
| {C,E} | 2 |

Now let's do a 3rd scan of the DB and get the candidate `3-itemsets`, `C_3`:

| Itemset | Support |
| --- | --- |
| {A,B,C} | 1 |
| {A,B,E} | 1 |
| {A,C,E} | 1 |
| {B,C,E} | 2 |

You can see that `{A,B,C}`, `{A,B,E}` and `{A,C,E}` do not satisfy the `min_sup` of `2`, so they will not be included in the frequent `3-itemset` `L_3`:

| Itemset | Support |
| --- | --- |
| {B,C,E} | 2 |

Since no candidate `4-itemsets` can be generated from `L_3`, the algorithm terminates here.

Now, finally, we can calculate the `support (supp)`, `confidence (conf)` and `lift` (interestingness) values of the association/correlation rules that can be generated from the itemset `{B,C,E}`. For example, for the rule `{B,C} => {E}`: `supp = 2/4 = 0.5`, `conf = supp{B,C,E} / supp{B,C} = 2/2 = 1.0`, and `lift = conf / supp{E} = 1.0 / 0.75 ≈ 1.33`.
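The whole example can be reproduced with a short sketch (Python; a minimal, unoptimized Apriori, assuming the classic 4-transaction textbook database that matches every count described above):

```python
from itertools import combinations

# The 4-transaction database (an assumption: the classic textbook DB
# that is consistent with all the supports described in the example).
transactions = [
    {"A", "C", "D"},       # TID 10
    {"B", "C", "E"},       # TID 20
    {"A", "B", "C", "E"},  # TID 30
    {"B", "E"},            # TID 40
]

def support(transactions, itemset):
    """Number of transactions that contain every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def apriori(transactions, min_sup):
    # Step 1: scan once for the frequent 1-itemsets (L_1).
    items = {i for t in transactions for i in t}
    level = {frozenset([i]) for i in items
             if support(transactions, frozenset([i])) >= min_sup}
    levels, k = [], 1
    while level:
        levels.append(level)
        # Step 2: generate length-(k+1) candidates from length-k frequent
        # itemsets, pruning any candidate that has an infrequent k-subset
        # (the Apriori pruning principle).
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        candidates = {c for c in candidates
                      if all(frozenset(s) in level for s in combinations(c, k))}
        # Step 3: test the surviving candidates against the database.
        level = {c for c in candidates if support(transactions, c) >= min_sup}
        k += 1  # Step 4: the loop terminates once no candidates survive.
    return levels

levels = apriori(transactions, min_sup=2)
print(sorted(sorted(s) for s in levels[2]))  # [['B', 'C', 'E']]

# Rule {B,C} => {E}: conf = supp(BCE) / supp(BC) = 2/2 = 1.0,
# lift = conf / (supp(E) / 4) = 1.0 / 0.75 ≈ 1.33
```

Note that this version prunes candidates by the Apriori property before counting, so the infrequent 3-itemsets never even reach the support test.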