The correct way to avoid SQL injection attacks, no matter which database you use, is to separate the data from SQL, so that data stays data and will never be interpreted as commands by the SQL parser. It is possible to create SQL statements with correctly formatted data parts, but if you don't fully understand the details, you should always use prepared statements and parameterized queries. These are SQL statements that are sent to and parsed by the database server separately from any parameters. This way it is impossible for an attacker to inject malicious SQL.
You basically have two options to achieve this:
Using PDO (for any supported database driver):
$stmt = $pdo->prepare('SELECT * FROM employees WHERE name = :name');
$stmt->execute([ 'name' => $name ]);
foreach ($stmt as $row) {
    // Do something with $row
}
Using MySQLi (for MySQL):
$stmt = $dbConnection->prepare('SELECT * FROM employees WHERE name = ?');
$stmt->bind_param('s', $name); // 's' specifies the variable type => 'string'
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
    // Do something with $row
}
If you're connecting to a database other than MySQL, there is a driver-specific second option that you can refer to (for example, pg_prepare() and pg_execute() for PostgreSQL). PDO is the universal option.
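As a rough sketch (connection details are placeholders, not part of the original example), the PostgreSQL variant could look like this:
$pgConnection = pg_connect('host=127.0.0.1 dbname=dbtest user=user password=password');

// PostgreSQL uses $1, $2, ... as positional placeholders
pg_prepare($pgConnection, 'find_employee', 'SELECT * FROM employees WHERE name = $1');
$result = pg_execute($pgConnection, 'find_employee', [ $name ]);

while ($row = pg_fetch_assoc($result)) {
    // Do something with $row
}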
Correctly setting up the connection
Note that when using PDO to access a MySQL database, real prepared statements are not used by default. To fix this, you have to disable the emulation of prepared statements. An example of creating a connection using PDO is:
$dbConnection = new PDO('mysql:dbname=dbtest;host=127.0.0.1;charset=utf8', 'user', 'password');
$dbConnection->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
$dbConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
In the above example the error mode isn't strictly necessary, but it is advised to add it. This way the script will not stop with a Fatal Error when something goes wrong, and it gives the developer the chance to catch any errors which are thrown as PDOExceptions.
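For example, a minimal sketch of catching such an exception (the logging choice is just an illustration):
try {
    $stmt = $dbConnection->prepare('SELECT * FROM employees WHERE name = :name');
    $stmt->execute([ 'name' => $name ]);
} catch (PDOException $e) {
    // Handle the error yourself instead of letting the script die with a fatal error
    error_log($e->getMessage());
}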
What is mandatory, however, is the first setAttribute() line, which tells PDO to disable emulated prepared statements and use real prepared statements. This makes sure the statement and the values aren't parsed by PHP before being sent to the MySQL server (giving a possible attacker no chance to inject malicious SQL).
Although you can set the charset in the options of the constructor, it's important to note that 'older' versions of PHP (before 5.3.6) silently ignored the charset parameter in the DSN.
Explanation
The SQL statement you pass to prepare is parsed and compiled by the database server. By specifying parameters (either a ? or a named parameter like :name in the example above) you tell the database engine what you want to filter on. Then when you call execute, the prepared statement is combined with the parameter values you specify.
The important thing here is that the parameter values are combined with the compiled statement, not an SQL string. SQL injection works by tricking the script into including malicious strings when it creates SQL to send to the database. So by sending the actual SQL separately from the parameters, you limit the risk of ending up with something you didn't intend.
Any parameters you send when using a prepared statement will just be treated as strings (although the database engine may do some optimization so parameters may end up as numbers too, of course). In the example above, if the $name variable contains 'Sarah'; DELETE FROM employees, the result would simply be a search for the string "'Sarah'; DELETE FROM employees", and you will not end up with an empty table.
Another benefit of using prepared statements is that if you execute the same statement many times in the same session it will only be parsed and compiled once, giving you some speed gains.
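As a rough sketch of that reuse (the list of names is made up for illustration):
$stmt = $pdo->prepare('INSERT INTO employees (name) VALUES (:name)');

// The statement is parsed and compiled once; only the bound values change
foreach ([ 'Alice', 'Bob', 'Sarah' ] as $name) {
    $stmt->execute([ 'name' => $name ]);
}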
Oh, and since you asked about how to do it for an insert, here's an example (using PDO):
$preparedStatement = $db->prepare('INSERT INTO table (column) VALUES (:column)');
$preparedStatement->execute([ 'column' => $unsafeValue ]);
Can prepared statements be used for dynamic queries?
While you can still use prepared statements for the query parameters, the structure of the dynamic query itself cannot be parameterized, and certain query features (such as the sort direction, ASC or DESC) cannot be parameterized either.
For these specific scenarios, the best thing to do is use a whitelist filter that restricts the possible values.
// Value whitelist
// $dir can only be 'DESC', otherwise it will be 'ASC'
if (empty($dir) || $dir !== 'DESC') {
    $dir = 'ASC';
}
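Combined with a prepared statement for the actual values, a sketch could look like this (the ORDER BY column is only illustrative):
// $dir is safe to interpolate here because it was whitelisted above
$stmt = $pdo->prepare("SELECT * FROM employees WHERE name = :name ORDER BY name $dir");
$stmt->execute([ 'name' => $name ]);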
You can apply this CSS to the inner <div>:
#inner {
    width: 50%;
    margin: 0 auto;
}
Of course, you don't have to set the width to 50%. Any width less than the containing <div> will work. The margin: 0 auto is what does the actual centering.
If you are targeting Internet Explorer 8 (and later), it might be better to have this instead:
#inner {
    display: table;
    margin: 0 auto;
}
It will make the inner element center horizontally, and it works without setting a specific width.
Working example here:
#inner {
    display: table;
    margin: 0 auto;
    border: 1px solid black;
}
#outer {
    border: 1px solid red;
    width: 100%;
}
<div id="outer">
    <div id="inner">Foo foo</div>
</div>
EDIT
With flexbox it is very easy to center the div horizontally and vertically.
#inner {
    border: 1px solid black;
}
#outer {
    border: 1px solid red;
    width: 100%;
    display: flex;
    justify-content: center;
}
<div id="outer">
    <div id="inner">Foo foo</div>
</div>
To center the div vertically as well, use the property align-items: center.
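For example, added to the flexbox rules above (the height is only there so the vertical centering is visible):
#outer {
    border: 1px solid red;
    width: 100%;
    height: 100px;
    display: flex;
    justify-content: center;
    align-items: center;
}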
Caching is useful if you do a lot of reads but seldom update. The more often the data in the database changes, the more problematic a caching system becomes. Caching also adds a certain complexity to your codebase, which can be a pain to handle, and in the worst case it can even slow your site down.
The most important question is:
When do you have to invalidate your cache? When does it become stale? In most cases, it becomes stale when the database query returns different rows than at the time you cached that page. But how do you know that? You don't (maybe there is a way, but I can't think of one at the moment), because to check, you would probably have to run the query again and compare the results.
What you can do:
Clear all your cache every time the relevant parts of the database are updated
This is indeed possible if your database only rarely gets updated - hourly, daily, weekly. But it's useless if changes come in continually, and that's the case with most web projects.
Clear cached items after something happens
This only works if changes do not have to be reflected instantly (i.e. it doesn't matter if there's stale data for some time). In this case you could simply clear the cache for a certain item once it's older than X minutes, or after more than Y pageviews have happened (a small sketch follows after these options).
Clear only the relevant pieces
Here you have to figure out which parts of the cache are affected when you update the database. If done right, changes are reflected instantly while performance improves.
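As a sketch of the time-based variant from the second option (the cache path, TTL and render function are made up for illustration):
$cacheFile = 'cache/frontpage.tmp';
$maxAge    = 10 * 60; // consider the cache stale after 10 minutes

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
    echo file_get_contents($cacheFile);
} else {
    $html = renderFrontpage(); // hypothetical expensive query + templating
    file_put_contents($cacheFile, $html);
    echo $html;
}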
Most likely you will end up with option 3: you have to find out what is affected. So, as an example, let's take the classic case of a weblog, consisting of a frontpage, archive pages and a detail page for every entry.
Changes are introduced by: the admin panel (CRUD for entries) and comments.
If an entry gets edited or deleted, you have to clear the cache for the frontpage, the affected archive page(s) and the entry's detail page.
If someone comments, you just have to clear the detail page, but only if the number of comments is not displayed in the index or archive. Otherwise, it's the same as entry CRUD.
If something sitewide is changed, the whole cache has to be cleared (bad!).
Now, let's think about entry CRUD and the archive. If the archive is of the type "one page per month", then clear the page for the month the entry belongs to. But if the archive is paginated like entries 1-10, 11-20, 21-30, ..., most likely the whole archive cache has to be rebuilt.
And so on ...
Some of the problems:
If you don't identify all the affected pieces correctly, it can lead to stale data and/or (un-)dead links.
If updates happen too often, building the cache is additional work, because by the time the next pageview happens the cache is most probably stale again and has to be rebuilt anyway.
Some parts of the page are unfit for caching, e.g. a (custom) search function. If the cache works everywhere else, everything is fast and great, but searching is still awfully slow.
It can be problematic if you have to clear the whole cache while lots of requests are happening. It can then choke your server, because a cache miss is normally more expensive than serving a page that isn't cached in the first place. Even worse, if three requests come in and the first one can't fill the cache before the other two are handled, the expensive page gets generated three times instead of once.
My advice:
Optimize your database. Are the keys and configuration OK? Maybe it works without caching.
Optimize your queries. Use EXPLAIN SELECT!
Only cache parts of the page - the expensive ones. Fill in the small, cheap, changing bits with str_replace and placeholders.
If everything works, use APC or memcached instead of files (files usually work great, but APC/memcached are faster). You can also use your database to cache your database; often that works great! (A small sketch follows below.)
Decide whether you are building a lazy or an eager caching system. Lazy means: build the cache when the page is first requested; eager means: build it right after the update.
Meh, I don't have any real advice for you - it depends too much on the problem :)
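As a small sketch of the APC(u) flavour mentioned in the list above (the key, TTL and render function are made up):
$key   = 'blogentry_256';
$entry = apcu_fetch($key, $hit);

if (!$hit) {
    $entry = renderEntry(256);     // hypothetical expensive query + formatting
    apcu_store($key, $entry, 600); // keep it for 10 minutes
}

echo $entry;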
Update
Let's say there's a request for the blog entry with key 256. The page shows the blog entry, the comments, and who is currently logged in. It's expensive to query the entry and the comments and to format all the text and everything. The currently logged-in user resides in the session.
First, create a unique key for the part you want to cache. In this case, the cache key is probably the database id of the entry (with some prefix and postfix).
So, the cached file should have the name cache/blogentry_256.tmp.
Check if that file exists.
If it doesn't exist, do all the expensive querying and formatting, leave a placeholder (e.g. {username}) where the name of the current user should be, and save the result to cache/blogentry_256.tmp. Be careful not to write any data into this file that shouldn't be displayed to everyone or that changes on every request.
Now read the file (or reuse the data from the previous step) and str_replace the username into the placeholder.
echo the result.
If an entry gets changed or someone comments, you have to delete the cache file with the entry's id.
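Put together as a rough sketch (the render helper and the session key are illustrative, not a complete implementation):
$cacheFile = 'cache/blogentry_256.tmp';

if (is_file($cacheFile)) {
    $html = file_get_contents($cacheFile);
} else {
    // hypothetical expensive part: query the entry + comments and format everything
    $html = renderEntryWithComments(256);
    // only store data that everyone may see and that doesn't change per request
    file_put_contents($cacheFile, $html);
}

// fill in the per-request part from the session
echo str_replace('{username}', $_SESSION['username'], $html);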
This is lazy caching - the cache is built only when a user requests the page. Be careful: if a user types {username} into a comment, the current user's name gets substituted there too! That means you have to escape your cached data and unescape it after the str_replace. This technique works with memcached or APC too.
Problems: you have to build your design around that caching decision. E.g. if you want to display "comment posted 5 minutes ago" instead of "comment added May 6th, 3:42pm", then you're in trouble.