Are there any O(1/n) algorithms?
Or anything else which is less than O(1)?
Tags: big-o, complexity-theory, theory
I assume that you are looking for intuitive definitions, since the technical definitions require quite some time to understand. First of all, let's recall a preliminary concept needed to understand those definitions: a decision problem is a problem whose answer is either yes or no.
Now, let us define those complexity classes.
P is a complexity class that represents the set of all decision problems that can be solved in polynomial time.
That is, given an instance of the problem, the yes-or-no answer can be decided in polynomial time.
Example
Given a connected graph G, can its vertices be coloured using two colours so that no edge is monochromatic?
Algorithm: start with an arbitrary vertex, color it red and all of its neighbours blue and continue. Stop when you run out of vertices or you are forced to make an edge have both of its endpoints be the same color.
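A minimal sketch of this algorithm in Python (assuming the graph is given as an adjacency list; the function name is just for illustration):

from collections import deque

def two_colorable(adj):
    color = {}
    start = next(iter(adj))
    color[start] = 0                 # color an arbitrary vertex "red"
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in color:       # color neighbours the opposite color
                color[v] = 1 - color[u]
                queue.append(v)
            elif color[v] == color[u]:
                return False         # an edge has both endpoints the same color
    return True

print(two_colorable({0: [1, 2], 1: [0, 2], 2: [0, 1]}))  # triangle: False
print(two_colorable({0: [1], 1: [0, 2], 2: [1]}))        # path: True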
NP is a complexity class that represents the set of all decision problems for which the instances where the answer is "yes" have proofs that can be verified in polynomial time.
This means that if someone gives us an instance of the problem and a certificate (sometimes called a witness) to the answer being yes, we can check that it is correct in polynomial time.
Example
Integer factorisation is in NP. This is the problem: given integers n and m, is there an integer f with 1 < f < m, such that f divides n (that is, f is a small factor of n)?
This is a decision problem because the answer is yes or no. If someone hands us an instance of the problem (so they hand us integers n and m) and an integer f with 1 < f < m, claiming that f is a factor of n (the certificate), we can check the answer in polynomial time by performing the division n / f.
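In code, checking the certificate is trivial. A minimal sketch in Python (names made up for illustration):

def verify_factor(n, m, f):
    # The certificate check: one range test plus one division,
    # both polynomial in the number of digits of the inputs.
    return 1 < f < m and n % f == 0

print(verify_factor(91, 10, 7))  # True: 7 divides 91 and 1 < 7 < 10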
NP-Complete is a complexity class which represents the set of all problems X in NP for which it is possible to reduce any other NP problem Y to X in polynomial time.
Intuitively this means that we can solve Y quickly if we know how to solve X quickly. Precisely, Y is reducible to X if there is a polynomial-time algorithm f to transform instances y of Y into instances x = f(y) of X, with the property that the answer to y is yes if and only if the answer to f(y) is yes.
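To make the shape of a reduction concrete, here is a classic example (my own illustration, not from the text above): INDEPENDENT-SET reduces to CLIQUE, because a graph has an independent set of size k exactly when its complement graph has a clique of size k. A sketch in Python, with a deliberately naive (exponential) CLIQUE solver, since the point is only that the transformation f itself is polynomial:

from itertools import combinations

def complement(n, edges):
    # f: transform an INDEPENDENT-SET instance into a CLIQUE instance
    # in polynomial time by complementing the edge set.
    all_pairs = set(combinations(range(n), 2))
    return all_pairs - {tuple(sorted(e)) for e in edges}

def has_clique(n, edges, k):
    # A brute-force solver for X = CLIQUE (exponential, for illustration).
    return any(all(tuple(sorted(p)) in edges for p in combinations(group, 2))
               for group in combinations(range(n), k))

def has_independent_set(n, edges, k):
    # Solve Y by transforming the instance and asking the X solver.
    return has_clique(n, complement(n, edges), k)

# A path on 3 vertices has an independent set of size 2, namely {0, 2}.
print(has_independent_set(3, [(0, 1), (1, 2)], 2))  # True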
Example
3-SAT. This is the problem wherein we are given a conjunction (ANDs) of 3-clause disjunctions (ORs), i.e. statements of the form
(x_v11 OR x_v21 OR x_v31) AND
(x_v12 OR x_v22 OR x_v32) AND
... AND
(x_v1n OR x_v2n OR x_v3n)
where each x_vij is a boolean variable, or the negation of a variable, from a finite predefined list (x_1, x_2, ... x_n).
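To make this concrete, here is a brute-force sketch in Python (the encoding of literals is my own choice, not from the text): checking a given assignment is fast, but the only obvious way to find one is to try all 2^n assignments.

from itertools import product

def satisfiable(num_vars, clauses):
    # A literal is (variable_index, negated); a clause is a tuple of literals.
    # Try every assignment: verification is cheap, search is exponential.
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] != neg for v, neg in clause)
               for clause in clauses):
            return True
    return False

# (x_1 OR NOT x_2 OR x_3) AND (NOT x_1 OR x_2 OR NOT x_3)
clauses = [((0, False), (1, True), (2, False)),
           ((0, True), (1, False), (2, True))]
print(satisfiable(3, clauses))  # True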
It can be shown that every NP problem can be reduced to 3-SAT. The proof of this is technical and requires use of the technical definition of NP (based on non-deterministic Turing machines). This is known as Cook's theorem.
What makes NP-complete problems important is that if a deterministic polynomial time algorithm can be found to solve one of them, every NP problem is solvable in polynomial time (one problem to rule them all).
NP-hard is, intuitively, the class of problems that are at least as hard as the NP-complete problems. Note that NP-hard problems do not have to be in NP, and they do not have to be decision problems.
The precise definition here is that a problem X is NP-hard if there is an NP-complete problem Y such that Y is reducible to X in polynomial time.
But since any NP-complete problem can be reduced to any other NP-complete problem in polynomial time, all NP-complete problems can be reduced to any NP-hard problem in polynomial time. Then, if there is a solution to one NP-hard problem in polynomial time, there is a solution to all NP problems in polynomial time.
Example
The halting problem is an NP-hard problem. This is the problem: given a program P and input I, will it halt? This is a decision problem but it is not in NP. It is clear that any NP-complete problem can be reduced to this one (a sketch follows). As another example, any NP-complete problem is NP-hard.
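One way to see the reduction, sketched in Python under the same literal encoding as the 3-SAT example above (an illustration, not a formal proof): given a 3-SAT instance, build a program that halts exactly when the formula is satisfiable, so any halting decider would decide 3-SAT.

from itertools import product

def sat_to_halting(num_vars, clauses):
    # Returns a program that halts iff the formula is satisfiable.
    def program():
        for assignment in product([False, True], repeat=num_vars):
            if all(any(assignment[v] != neg for v, neg in c) for c in clauses):
                return            # satisfiable: halt
        while True:               # unsatisfiable: loop forever
            pass
    return program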
My favorite NP-complete problem is the Minesweeper problem.
Finally, the P = NP question. This is the most famous problem in computer science, and one of the most important outstanding questions in the mathematical sciences. In fact, the Clay Institute is offering one million dollars for a solution to the problem (Stephen Cook's writeup on the Clay website is quite good).
It's clear that P is a subset of NP. The open question is whether or not NP problems have deterministic polynomial time solutions. It is largely believed that they do not. Here is an outstanding recent article on the latest (and the importance) of the P = NP problem: The Status of the P versus NP problem.
The best book on the subject is Computers and Intractability by Garey and Johnson.
I cannot understand how to identify a function with logarithmic running time.
The most common attributes of a function with logarithmic running time are that the choice of the next element on which to perform some action is one of several possibilities, and only one of them needs to be chosen; or, alternatively, that the elements on which the action is performed are digits of n.
This is why, for example, looking up people in a phone book is O(log n). You don't need to check every person in the phone book to find the right one; instead, you can simply divide and conquer by looking based on where their name is alphabetically, and in every section you only need to explore a subset before you eventually find someone's phone number.
Of course, a bigger phone book will still take you longer, but the lookup time won't grow in proportion to the increase in size.
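The halving step is easiest to see in code. A minimal sketch in Python (the book is assumed to be a list of (name, number) pairs sorted by name; names and numbers are made up):

def find_number(book, name):
    lo, hi = 0, len(book)
    while lo < hi:
        mid = (lo + hi) // 2          # open the book about halfway
        if book[mid][0] < name:
            lo = mid + 1              # the name lies in the later half
        elif book[mid][0] > name:
            hi = mid                  # the name lies in the earlier half
        else:
            return book[mid][1]
    return None                       # not listed

book = [("Adams", "555-0001"), ("Baker", "555-0002"), ("Chen", "555-0003")]
print(find_number(book, "Baker"))  # 555-0002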
We can expand the phone book example to compare other kinds of operations and their running time. We will assume our phone book has businesses (the "Yellow Pages") which have unique names and people (the "White Pages") which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.
Here are the running times of some operations we might perform on the phone book, from fastest to slowest:
O(1) (in the worst case): Given the page that a business's name is on and the business name, find the phone number.
O(1) (in the average case): Given the page that a person's name is on and their name, find the phone number.
O(log n): Given a person's name, find the phone number by picking a random point about halfway through the part of the book you haven't searched yet, then checking to see whether the person's name is at that point. Then repeat the process about halfway through the part of the book where the person's name lies. (This is a binary search for a person's name.)
O(n): Find all people whose phone numbers contain the digit "5".
O(n): Given a phone number, find the person or business with that number.
O(n log n): There was a mix-up at the printer's office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it's correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.
For the below examples, we're now at the printer's office. Phone books are waiting to be mailed to each resident or business, and there's a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.
O(n log n): We want to personalize the phone book, so we're going to find each person or business's name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.
O(n²): A mistake occurred at the office, and every entry in each of the phone books has an extra "0" at the end of the phone number. Take some white-out and remove each zero.
O(n · n!): We're ready to load the phonebooks onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it's putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they're in the right order, and if not, it unloads them and starts over. (This is the dreaded bogo sort, sketched just after this list.)
O(nⁿ): You fix the robot so that it's loading things correctly. The next day, one of your co-workers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phonebooks! Fortunately, the robot's bug-detection systems are sophisticated enough that the robot doesn't try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that's been printed.
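The haywire robot translated into Python; a toy sketch, never use this for real sorting:

import random

def bogo_sort(books):
    # Shuffle until sorted: expected work on the order of n · n!.
    while any(books[i] > books[i + 1] for i in range(len(books) - 1)):
        random.shuffle(books)         # unload and reload in random order
    return books

print(bogo_sort([3, 1, 2]))  # [1, 2, 3]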
Best Solution
This question isn't as silly as it might seem to some. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the Big O notation:
f(x) ∈ O(g(x)) if and only if there exist constants c > 0 and x₀ such that 0 ≤ f(x) ≤ c · g(x) for all x > x₀.
Now you can easily substitute 1/x for g(x) … it's obvious that the above definition still holds for some f; for instance, f(x) = 1/(2x) satisfies it with c = 1.
For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:
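A minimal sketch of such a contrived function in Python (the constant and function name are arbitrary):

from time import sleep

def get_faster(data):
    # Sleep for a duration inversely proportional to the input size,
    # so larger inputs take less time. Contrived, not a useful algorithm.
    sleep(1.0 / len(data))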
Clearly, this function spends less time as the input size grows … at least until some limit, enforced by the hardware (precision of the numbers, minimum of time that sleep can wait, time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).
But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
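To illustrate why longer patterns tend to run faster, here is a minimal Horspool sketch in Python (a standard textbook formulation, not code from the answer above); the longer the pattern, the further each mismatch lets you skip:

def horspool_search(haystack, needle):
    m, n = len(needle), len(haystack)
    # Shift table: on a mismatch, slide the window by the distance from
    # the window's end to the rightmost occurrence (excluding the final
    # character) of the character currently under the window's end.
    shift = {needle[i]: m - 1 - i for i in range(m - 1)}
    pos = 0
    while pos <= n - m:
        if haystack[pos:pos + m] == needle:
            return pos
        pos += shift.get(haystack[pos + m - 1], m)  # longer pattern => bigger skips
    return -1

print(horspool_search("the quick brown fox", "brown"))  # 10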