On 29.10.2013 10:07, Mateusz Marzantowicz wrote:
> On 29.10.2013 09:17, Ian Malone wrote:
>> This isn't an argument for using the content type rather than autodetection; the content type could be manipulated as part of an attack.
in that case you are already lost, and if the server has suffered an intrusion and is attacking you, be sure that sooner or later someone will attack your local code doing the MIME sniffing itself
> OK, I know all that argumentation about security, but as you've mentioned, HTTP headers could be easily manipulated.
could they?
only in two cases, and in *both* you are already lost:
* the server was hacked and is attacking users
* a successful man-in-the-middle
> Content recognition must be done somewhere, in that case on the web server, in order to set the headers correctly.
and that is the right place
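
roughly like this - a minimal sketch, not anyone's production setup; the handler name and port are made up, the point is only that the type gets decided on the machine that actually has the file:

import mimetypes
from http.server import HTTPServer, SimpleHTTPRequestHandler

class TypedHandler(SimpleHTTPRequestHandler):
    # the server knows which file it serves, so it decides the type once
    def guess_type(self, path):
        ctype, _encoding = mimetypes.guess_type(path)
        # safe default instead of leaving the decision to client sniffing
        return ctype or "application/octet-stream"

if __name__ == "__main__":
    HTTPServer(("", 8000), TypedHandler).serve_forever()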
> There will always be a need for content inspection
not on the client, except for locally saved files
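
something like this on the client side - a sketch with made-up function names: take the declared Content-Type for anything remote, and only guess for a file that is already on disk and therefore has no header any more:

import mimetypes
from urllib.request import urlopen

def remote_type(url):
    # for remote content, use what the server declared - no byte sniffing
    with urlopen(url) as resp:
        return resp.headers.get_content_type()

def local_type(path):
    # only for a locally saved file, where no header exists any more
    ctype, _encoding = mimetypes.guess_type(path)
    return ctype or "application/octet-stream"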
> So what is better: checking content on the server side or on the client side?
on the server side - period
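
and if you want to make that explicit (not mentioned in this thread, just an illustration): the server can even tell browsers not to sniff at all by sending the standard X-Content-Type-Options header, e.g.:

from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoSniffHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # standard response header that disables client-side MIME sniffing
        self.send_header("X-Content-Type-Options", "nosniff")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), NoSniffHandler).serve_forever()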
> From the client's perspective the latter is safer because it doesn't have to trust some remote entity.
*lol* if you do not trust the remote entity, aka the server, how is your client supposed to make it safe by magic?
> My sample URL showed that even GitHub isn't perfect and sets improper headers for some files (or it does it by choice)
just because GitHub and some others are failing at the basics does not mean that the client has to fix their errors
> Finally, client software and web browsers should not be fragile when faced with miscellaneous and manipulated content - they should just recognize it as such
maybe Windows is the solution for this non-existent problem. the majority of users have no problem with some random servers that are broken; you can *always* save a file and open it locally, or, if in doubt because the server is that broken, close the web page and go to some trustworthy source