Saturday, May 21, 2016

.Net Core - Hello World from command line

.Net Core has now reached RC2 and is hopefully going to be stable, with no further major changes. Some fairly fundamental structural changes happened between RC1 and RC2, though, so this is never guaranteed. RC2 brings a much cleaner development experience, consistent with other programming languages, which usually looks something like the following.

  1. Download and install the language
  2. Create a program
  3. Compile and run it
You can follow these steps below to get up and running with your first .Net Core Hello World project.
  1. Download and install .Net Core from the official download page
  2. Open command prompt and create a new directory and 'cd' to that directory
    • mkdir newproject
    • cd newproject
  3. Create a new .Net core project in the 'newproject' directory that we created above.
    • dotnet new
  4. Restore all the dependencies for the project
    • dotnet restore
  5. Compile and run
    • dotnet run
That is all. You have your first .Net Core Hello World! program up and running.

I have created a chocolatey package so that the .Net Core SDK can be installed from the command line itself. This package is located here.

With chocolatey, the steps then become as follows.

  1. Install .Net Core. Note the -pre flag, which is required because .Net Core is still a prerelease version. This installs the latest .Net Core, which is RC2 as of this post, and updates the PATH so that the 'dotnet' command is available from the command prompt.
    • choco install dotnetcoresdk -pre -y
  2. Refresh the PATH for the current command prompt session. Chocolatey comes with a pretty neat command for this.
    • refreshenv
  3. Create a new project directory and go to that directory
    • mkdir newproject
    • cd newproject
  4. Create new project
    • dotnet new
  5. Restore all the dependencies for the project
    • dotnet restore
  6. Compile and run
    • dotnet run
That's it. You have a .Net Core application running and you never had to leave the command prompt.

Sunday, May 01, 2016

RabbitMQ Connection String Gotcha

RabbitMQ connection strings look like the following

amqp://user:password@host:5672/vhost

or, for AMQP over SSL/TLS, like the following

amqps://user:password@host:5671/vhost

One very important thing to always keep in mind is that the username, password and vhost must be percent-encoded. If the password, for example, is #asd49d$, then the AMQP connection string becomes as follows.

amqp://user:%23asd49d%24@host:5672/vhost
It is very well documented here.
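The encoding rule above can be sketched in a few lines of Python (shown here for illustration only; the helper name amqp_uri is made up for this sketch). urllib.parse.quote performs the percent-encoding that the AMQP URI spec requires:

```python
from urllib.parse import quote

def amqp_uri(user, password, host, vhost, port=5672, tls=False):
    """Build an AMQP connection string, percent-encoding the
    username, password and vhost as the AMQP URI spec requires."""
    scheme = "amqps" if tls else "amqp"
    return "{}://{}:{}@{}:{}/{}".format(
        scheme,
        quote(user, safe=""),      # '#' becomes %23, '$' becomes %24, etc.
        quote(password, safe=""),
        host,
        port,
        quote(vhost, safe=""),     # the default vhost '/' becomes %2F
    )

print(amqp_uri("guest", "#asd49d$", "localhost", "/"))
# amqp://guest:%23asd49d%24@localhost:5672/%2F
```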

Monday, March 21, 2016

Secure Web Application Practices - Account Management

In this post, I am going to cover some of the best practices around account management. Account management covers things like account registration, forgot password, remember me, password storage, etc.
Following are account management practices that one should consider to minimize the risk of breach.

  • Password Storage: Always use cryptographically strong password hashing (not encryption) to store passwords. There are only a few cases where you want to store a password in encrypted (and not hashed) form. Always use a random, credential-specific salt with the hashing algorithm you are using. This makes the hashes non-predictable and defeats precomputed lookup attacks; note that a salt alone does not make a fast algorithm like MD5 safe against brute force, which is why a slow, purpose-built algorithm matters. For the hashing algorithm, prefer one of the following (in order of preference):
    • Argon2
    • PBKDF2 
    • scrypt
    • bcrypt 
  • Registration: 
    • There are pros and cons to using an email address versus a free-text username for the login id. While there is a privacy concern with using email, requiring a separate username is a little bit of an inconvenience for the user. Some sites allow both, which is a better option because it leaves the choice in the user's hands.
    • Require strong passwords with a mix of alphanumeric and special characters. Do not restrict which special characters users may use, as far as possible. Always enforce a minimum password length. It is also better to allow a very high maximum length, because that lets people use passphrases and super strong passwords. If one uses a password manager, the length of the password does not matter anyway because you don't have to remember it.
    • Do not disable pasting of passwords from the clipboard. There is almost no valid reason to disallow users from pasting a password. It in fact discourages users from selecting a strong password, particularly when they use a password manager to generate a strong password first and then want to use it.
    • Verify the account via email. Always require the user to verify the account by sending an email to their address and requiring them to visit an auto-generated unique link. This prevents spammers and also ensures the authenticity of the user. You don't want somebody else registering using your email address!
    • Protect against account enumeration. Do not disclose the existence of a user account. Always show a generic message that does not disclose whether the user being registered already exists. In some cases this is a privacy concern, because somebody could find out whether you are registered on a dating site, for instance; in other cases it can be used for brute force attacks.
    • Use CAPTCHA for anti-automation. It is very easy to automate HTTP requests to a website, and that includes automating registrations using common names to flood the server. Use of CAPTCHA protects against this kind of attack.
  • Logon:
    • Protect against account enumeration attacks. You should not disclose the existence of a user account in login error messages. "Login for User foo: invalid password", "Login failed, invalid user ID", "Login failed; account disabled" and "Login failed; this user is not active" are bad messages. The correct message would be "Login failed; invalid user ID or password", irrespective of whether the password is incorrect or the user ID is not registered.
    • Protect against brute force: Use CAPTCHA to prevent automated logins. Also, enforce a minimum duration between two login attempts; for instance, reject a login request if the last attempt was less than 30 seconds ago. The idea is to introduce some kind of limit or delay when you see a brute force attack on the server.
    • Employ two-factor authentication: This option can be employed to better secure the user login. In this mechanism, you use the user's phone or a secondary email address to send a temporary one-time password that is required before they can log in. This adds a little inconvenience, so it can be offered as an option that users can selectively opt in to.
  • Remember Me:
    • The first choice will be to see if you can live without this feature. Remember Me is usually not a good option for high value applications. For this reason you don't see remember me feature on bank websites.
    • Use a specific Secure HttpOnly cookie for Remember Me. Do not store username/password in the cookie. 
    • Set cookie expiry to be as minimal as you can afford for your application. 
    • Require user to re-authenticate before they can perform any sensitive operation. For instance, ask user to provide 'old password'  when they try to change their password.
  • Password Reset:
    • Do not send passwords in email. That is true both when registering a new user and when resetting a password. Email is unsafe and usually sits in unencrypted form on servers.
    • Use a time-limited reset token. Generate a link containing the token, send it to the registered email address, and require the user to click it to set a new password. The link should expire after first use or when its validity period (which should be short, perhaps an hour) has elapsed.
    • Use security questions. This is another approach that can be taken to verify authenticity of the user before you allow them to change the password.
  • Log Off:
    • Expire sessions on the server. See if you can keep some kind of expiry for user sessions; for instance, it might be reasonable to log a user off after 20 minutes of inactivity. As with everything else, it depends on the nature of the site. It might be an inconvenience on some sites, social media sites for instance.
    • Protect against CSRF. As with login and other forms, the log-off form is also a probable candidate for CSRF attacks and should be protected against them.
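As a rough illustration of the password-storage advice above, here is a minimal sketch in Python using PBKDF2 from the standard library (the function names are my own, and the iteration count is illustrative rather than a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Hash a password with PBKDF2-HMAC-SHA256 and a random,
    credential-specific salt, as recommended above."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected_digest, iterations=100_000):
    """Re-derive the hash from the stored salt and compare."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected_digest)
```

Because the salt is random per credential, hashing the same password twice produces two different digests, which is exactly the non-predictability the post calls for.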

Friday, January 29, 2016

Secure Web Application Practices - HTTP Headers

HTTP Headers play a very important role in security of a web application. Some of the headers pose a security risk and so should be removed while others help prevent against different kinds of attacks so should be added.
Following are some best practices with respect to HTTP headers.

Remove headers revealing too much information about server

Revealing information about the server increases your attack surface and makes the attacker's job easier if a vulnerability is found in the server software. These headers usually do not add any value to the application, so they are better turned off. Following are some headers in an Aspnet MVC application that can safely be stripped.
  • X-AspNet-Version: It can be easily stripped in web.config as follows.
<system.web> <httpRuntime enableVersionHeader="false" /> </system.web>
  • X-AspNetMvc-Version: It can be easily stripped in global.asax application_start as follows.
MvcHandler.DisableMvcResponseHeader = true;
  • X-Powered-By: It can be stripped in web.config as follows.
<system.webServer> <httpProtocol> <customHeaders> <remove name="X-Powered-By" /> </customHeaders> </httpProtocol> </system.webServer>
  • Server: This is a little bit tricky to remove. It is written by IIS, and it is not very safe to try to remove it from within Asp Net. It can be removed using HTTP modules, but that is not recommended, as explained in the article here. A better approach is to use UrlScan, as mentioned by Troy Hunt here.

Add Security Headers

There are several security headers recommended in the HTTP spec and implemented by many browsers that should be leveraged as a layer of defense against various kinds of attacks. Here are some headers that can be used.
    • HTTP Strict Transport Security (HSTS): HSTS instructs the browser that you only want it to load the secure version of the site. This reduces the impact of bugs in the application leaking session data through cookies and external links, and defends against man-in-the-middle attacks. HSTS also disables the ability for users to ignore SSL negotiation warnings.
Strict-Transport-Security: max-age=16070400; includeSubDomains;preload
HSTS suffers from what is called Trust-on-first-use (TOFU), because the site must be loaded once in order to receive the HSTS header. So you do have a small risk window when the site is first loaded over an insecure connection. 'preload' solves exactly this problem. It is still a pretty new concept and not many websites use it. You have to register your domain in the preload list here. More details on HSTS here.
    • HTTP Public Key Pinning (HPKP): HPKP allows public keys to be whitelisted to protect you in case your CA (Certificate Authority) is compromised and someone else can present a forged certificate on your behalf.
Public-Key-Pins: pin-sha256="<sha256>"; pin-sha256="<sha256>"; max-age=15768000; includeSubDomains
You should create backup pins and use a short max-age to mitigate risks like certificate expiry, etc. More details on HPKP here.
    • Content Security Policy (CSP): CSP is a way of whitelisting what your site is allowed to run. It is quite comprehensive and contains many directives. You can control your script sources, stylesheet sources, font sources, etc. Facebook, for example, currently sends a very long policy whitelisting its many domains; a minimal policy looks like the following.
content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline'
More details on Content Security Policy here.
    • X-Frame-Options: To improve the protection of web applications against Clickjacking attack this directive tells the browser what is the allowed policy on whether this site can be hosted inside iFrame of another website. Valid values are deny, sameorigin and allow-from.
X-Frame-Options: deny
    • X-XSS-Protection: This header enables the Cross Site Scripting filter built into most recent web browsers. It is not supported by all browsers, but as with all security measures, there is no harm in adding another layer of defense.
X-XSS-Protection: 1; mode=block
    • X-Content-Type-Options: The only defined value, "nosniff", prevents Google Chrome and Internet Explorer from trying to mime-sniff the content-type of a response away from the one being declared by the server. It reduces exposure to drive-by downloads and the risks of user uploaded content that, with clever naming, could be treated as a different content-type, like an executable.
X-Content-Type-Options: nosniff
It is not trivial to manage so many headers and set them all correctly. You should prefer a library that can do this job for you. One very good such library for Aspnet is NWebSec.
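To make the idea concrete in a language-neutral way, here is a small Python sketch (not Aspnet, and not a real middleware; the function name and header values are illustrative, mirroring the headers discussed above) that merges recommended security headers into a response:

```python
# Baseline security headers discussed in this post, as name -> value.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=16070400; includeSubDomains",
    "X-Frame-Options": "deny",
    "X-XSS-Protection": "1; mode=block",
    "X-Content-Type-Options": "nosniff",
    "Content-Security-Policy": "default-src 'self'",
}

def add_security_headers(headers):
    """Return the response headers with the baseline security headers
    merged in. Values the application already set win over defaults."""
    merged = dict(SECURITY_HEADERS)
    merged.update(headers)
    return merged
```

A real application would do this in one central place (a middleware or, in Aspnet, a library like NWebSec) so no response can ship without the headers.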

Thursday, January 28, 2016

Secure Web Application Practices - CSRF

A Cross Site Request Forgery (CSRF) attack forces an authenticated victim's browser to send a request to a vulnerable web application, which then performs the chosen action on behalf of the victim. The malicious code is often not on the vulnerable application itself, which is why it is called Cross Site. This vulnerability is known by several other names, such as Session Riding, One-Click Attacks, Cross Site Reference Forgery and Automation Attacks.

Following are some of the practices that can be used to mitigate the risk of CSRF attacks.

  1. Introduce randomness into a page response by including an anti-forgery token pair - one in the page and one in a cookie. Both the form token and the cookie token need to be sent when the form is submitted for it to succeed. The idea is that an attacker's site may be able to make a request on behalf of the victim, and the browser will send the anti-forgery cookie, but the attacker cannot read or supply the token embedded in the page, so the request will fail.
  2. CSRF is not limited to form submissions; API requests (like Ajax calls from a page) are equally a candidate for attack, so the same mechanism needs to be applied there as well. Usually, APIs should also expect an anti-forgery token in a header that the server can validate by matching it with the one sent in the cookie.
  3. Aspnet makes it very easy to handle this with the help of Html.AntiForgeryToken html helper and ValidateAntiForgeryToken attribute filter.
  4. Validate referrer header to prevent cross domain requests. Note however that referrer header is not guaranteed to be set by all clients and can also be manipulated. The idea again is to have multiple levels of defenses. Here for example, it can be implemented so that you either allow referrer to be the same domain or not be there at all so that browsers who don't send this header can still work.
  5. Re-authenticate before sensitive data or value transactions. This adds another level of defense. For instance, ask user to re-authenticate before they transfer money. This mitigates the risk of CSRF attacks to these sensitive transactions.
  6. A form of attack related to CSRF is Clickjacking, in which an attacker hijacks the victim's clicks meant for the attacker's page and routes them to the vulnerable site. It is usually done by the attacker's web application hosting the vulnerable application in an iFrame behind a transparent layer and then tricking the victim into clicking some button that results in a post to the vulnerable application. The anti-forgery token defense will not work here, because it is a valid request/response and the browser will happily send the anti-forgery cookie as well as the token on the form.
    • Clickjacking can be mitigated with X-Frame-Options HTTP response header that specifies how the page may be iFramed. It can have value of Deny, SameOrigin and Allow-From. Use whatever suits best but the idea is to limit the framing scope and thus reduce the attack surface.
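The token-pair idea from point 1 can be sketched in a few lines of Python (purely illustrative; in Aspnet you would use Html.AntiForgeryToken and ValidateAntiForgeryToken rather than hand-rolling this, and the function names here are made up):

```python
import hmac
import secrets

def new_csrf_token():
    """Issue the random token for the double-submit pattern: the same
    value goes into a cookie and into a hidden form field."""
    return secrets.token_hex(32)

def request_is_legitimate(cookie_token, form_token):
    """Accept the request only if both tokens are present and equal.
    An attacker's cross-site form cannot read the victim's cookie,
    so it cannot supply a matching form token."""
    if not cookie_token or not form_token:
        return False
    # Constant-time comparison, as with any secret comparison.
    return hmac.compare_digest(cookie_token, form_token)
```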

Sunday, January 24, 2016

Secure Web Application Practices - XSS

Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.
An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by the browser and used with that site.
Following are some of the practices that help mitigate the risk of XSS in a web application.
  1. HTML Escape before inserting untrusted data into HTML element content. Untrusted data can contain malicious scripts that, when put into an HTML context, execute and do nasty things. For example,

    <div>...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...</div>

    In Aspnet, you can use AntiXssEncoder.HtmlEncode to encode data before putting it into HTML. Most template bindings (for example Razor) do it by default for you.
  2. Attribute Escape before inserting untrusted data into HTML common attributes.

    <div attr=...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...>content</div>

    In Aspnet, you can use AntiXssEncoder.HtmlAttributeEncode to encode data before you put it as an attribute value of an HTML element. Again, most template bindings take care of this for you if you are using one.
  3. JavaScript Escape before inserting untrusted data into a JavaScript context. This applies both to javascript code and to direct event handlers on HTML elements.

    <script>alert('...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...')</script>

    In Aspnet, you can use AntiXssEncoder.JavaScriptStringEncode to encode unsafe data to be used inside a javascript context.
  4. CSS Escape and strictly validate before inserting untrusted data into HTML style property values. For instance,

    <style>selector { property : ...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...; } </style>

    In Aspnet, you can use AntiXssEncoder.CssEncode to encode unsafe data before you use it in CSS styles.
  5. URL Escape before inserting untrusted data into HTML URL parameter values. For instance,

    <a href="...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...">link</a>

    In Aspnet, you can use AntiXssEncoder.UrlEncode to encode unsafe data before putting it into a URL.
  6. Sanitize HTML markup. If your application handles markup – untrusted input that is supposed to contain HTML – you should use a sanitization library that can parse and clean HTML formatted text. HTMLSanitizer is one such great tool.
  7. Use HttpOnly cookies. Although this is not directly related to XSS, it is always good practice to use HttpOnly for sensitive cookies like the session id. This mainly helps limit the damage once an XSS flaw does exist.
  8. Whitelist allowable values. Rather than trying to encode and escape untrusted data, always check whether it is easier to validate the input against what is expected for that value. For instance, if the input is a product id in a product search API, it might be easier to validate it against a regex that does not allow any unsafe characters.
  9. Use native browser defenses. Internet Explorer implements a native XSS defense that should be used if possible (it is enabled by default). It honors an HTTP header named X-XSS-Protection, which can be set to 0 to disable it if it causes problems. Not a common defense, but it might make sense, for instance, when all your users use IE.
  10. Use Aspnet Request Validation. Aspnet has request validation enabled by default; it looks for malicious data in the input and blocks the request if it finds any.

    <system.web> <pages validateRequest="true" /> </system.web>
As with protection against any other kind of attack, for XSS it is advisable to have multiple levels of defense to help mitigate the risk. You would usually use a combination of encoding, securing cookies, native browser defenses, request validation and whitelisting to protect against XSS attacks.
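To make the context-specific escaping concrete, here is a small Python sketch (Python's standard html.escape and urllib.parse.quote stand in for the AntiXssEncoder methods mentioned above, and render_comment is a made-up helper, not a real API):

```python
from html import escape
from urllib.parse import quote

def render_comment(untrusted):
    """Render untrusted input into three different contexts, each with
    the escaping that context requires: an attribute value, a URL
    parameter value, and HTML element content."""
    return '<div data-author="{}"><a href="/user?name={}">{}</a></div>'.format(
        escape(untrusted, quote=True),  # attribute context: also escape quotes
        quote(untrusted, safe=""),      # URL parameter context: percent-encode
        escape(untrusted),              # element content context
    )
```

The key point, matching the list above, is that one escaping function does not fit all contexts; each sink gets its own encoder.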

Secure Web Application Practices – SQL Injection

SQL Injection is still the top web application security risk today according to OWASP top 10.
Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
Below are the best practices that you should follow or look for when reviewing an application for SQL Injection vulnerability.
  1. Always use parameterized queries. Most SQL injection attacks happen when the application builds a SQL query by concatenating untrusted data, as in the following vulnerable example.

    "SELECT * FROM accounts WHERE custID='" + request.getParameter("id") + "'";
  2. Prefer use of an ORM. Although security is hardly the main reason for choosing an ORM framework like Entity Framework, we should understand that it is a great tool for mitigating SQL injection risks. These tools use parameterized queries and so help mitigate SQL injection risks to a great extent. The following query, for instance, mitigates the SQL injection risk shown above.

    DbContext.Customers.Where(cust => cust.CustID == request.getParameter("Id"));
  3. Use stored procedures. Stored procedures promote parameterization and thus avoid the SQL Injection risks that can arise out of concatenating queries.
  4. Stored procedures have risks as well. Look for query concatenation and dynamic queries inside stored procedures. Check for EXEC statements used to execute dynamic queries; that is usually a smell for injection risk.
  5. Follow the principle of least privilege. An application should have access to only the data it needs, and only the kind of access it needs. This may mean maintaining multiple logins, which is a maintenance trade-off.
  6. Validate untrusted data. Security is all about having multiple layers of defense so that multiple layers of vulnerabilities are required to get access to sensitive data. Untrusted data should be properly validated. Also prefer white listing rather than blacklisting. You never know enough about what data is bad.
  7. Implement proper error handling. Internal errors should not propagate to end users. They disclose a lot of information that malicious attackers often use for SQL injection attacks. Attackers can still use blind SQL injection, but that is much harder than error-based SQL injection.
  8. Encrypt sensitive data. Hash passwords. This is another layer of defense that should always be considered. Passwords should always be hashed and also any other sensitive data should be encrypted.
  9. Isolate the database network segment. Proper network segments should be created and firewall rules put in place so that only designated network segments have access to the data. A typical design divides the network into Untrusted, Semi-Trusted and Trusted zones, with the database placed in the Trusted zone. Only certain applications in the Semi-Trusted zone are allowed access to the data. This is again about applying another layer of defense and mitigating security risks.
  10. Keep software patched and current. Attackers usually use known vulnerabilities in software to attack applications. Many times, websites continue to run older versions of software months after risks have been identified, which makes the attack really easy. It is always better to be current and patched.
  11. Ensure OS level commands like xp_cmdshell are disabled. Modern SQL Server keeps them disabled by default, which is what it should be - secure by default. This is a very powerful command because an attacker who gains access can run any OS-level command with it.
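Point 1 above, parameterized queries, can be demonstrated in a few lines of Python with the standard sqlite3 module (an illustrative sketch; the table, data and helper name are made up):

```python
import sqlite3

# A throwaway in-memory database with one account row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (custID TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('42', 100.0)")

def get_account(cust_id):
    """Look up an account by id using a parameterized query. The ?
    placeholder passes cust_id as data, never as SQL text, so input
    like "42' OR '1'='1" cannot change the query's meaning."""
    return conn.execute(
        "SELECT custID, balance FROM accounts WHERE custID = ?",
        (cust_id,),
    ).fetchall()
```

Had the query been built by string concatenation, the `42' OR '1'='1` input would have returned every row; with the placeholder it simply matches no account.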
There are many automation tools that help identify many of the vulnerabilities quite easily. Following are some of the tools that you can use to make your job easier.
  1. SQL Inject Me (Firefox plugin):
  2. Fuzz Testing with Burp Suite.
  3. Data extraction with SqlMap:
  4. Security scanning with Netsparker: