Tag: seo

  • schema

    Schema validator

    https://validator.schema.org/

    Google Rich Results Test

    https://search.google.com/test/rich-results

  • Fix Canonical URL using .htaccess

    When you host a web site, many web hosting providers make it accessible at multiple URLs, like yourdomain.com and www.yourdomain.com. If you have a dedicated IP address, your web site becomes available on the IP address too. Having a site available at multiple URLs is bad for SEO: if search engines index your content under different URLs, it will affect your page ranking.

    To get a better search ranking, you need to decide which URL you want search engines to index. This can be with www or without www; it is mostly a personal choice. In my case, I decided to use the version without www (https://serverok.in) to keep the URL shorter. Once you have decided which URL you want Google to index, you need to redirect all other URLs used to access your web site to it. This is called the canonical URL.

    On the Apache web server, you can do this using an .htaccess file. Create an .htaccess file with the following content:

    RewriteEngine on
    RewriteCond %{SERVER_NAME} !=www.yourdomain.com
    RewriteRule ^ https://www.yourdomain.com%{REQUEST_URI} [END,NE,R=permanent]
    

    In the above code, replace www.yourdomain.com with the canonical URL of your web site. The code checks whether the visitor is using the canonical URL to access your web site and, if not, redirects them to it. The third line redirects to https; if you don’t have SSL installed, change it to “http” instead of “https”. If you don’t use www, remove the www from the redirect code, as shown below.
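
    For example, if you pick the version without www (as I did), the same rules look like this; yourdomain.com is a placeholder for your own domain:

    RewriteEngine on
    RewriteCond %{SERVER_NAME} !=yourdomain.com
    RewriteRule ^ https://yourdomain.com%{REQUEST_URI} [END,NE,R=permanent]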

    See redirect

  • robots.txt

    When you make a copy of your site for development or testing purposes, you don’t want search engines to index it, as that would cause duplicate content.

    To disable indexing of a website by search engines, create a file named robots.txt with the following content:

    User-agent: *
    Disallow: /
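
    Once the file is in your site’s document root, you can verify it is being served; yourdomain.com is a placeholder here:

    curl https://yourdomain.com/robots.txt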

    To allow all robots, use:

    User-agent: *
    Allow: /

    To specify a sitemap:

    User-agent: *
    Allow: /
    Sitemap: https://serverok.in/sitemap.xml

    Crawl-delay

    User-agent: *
    Allow: /
    Sitemap: https://serverok.in/sitemap.xml
    Crawl-delay: 10

    Crawl-delay sets the number of seconds between each page request. In this case, the bot waits 10 seconds before crawling the next page. Bing and Yahoo support it.
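
    Since only some crawlers honor it, you can also scope Crawl-delay to a specific bot; for example, to slow down only Bing’s crawler:

    User-agent: bingbot
    Crawl-delay: 10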

    To only allow search engines to index the home page and deny indexing of all other pages (the $ matches the end of the URL, so Allow: /$ applies only to the home page):

    User-agent: *
    Allow: /$
    Disallow: /

    For the Nginx web server, you can edit the location block and add:

    add_header X-Robots-Tag "noindex, follow";
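
    A minimal sketch of where this line goes, assuming a typical server block for a development copy of the site (the server_name and root values are placeholders):

    server {
        listen 80;
        server_name dev.yourdomain.com;
        root /var/www/dev;

        location / {
            # ask search engines not to index this development copy
            add_header X-Robots-Tag "noindex, follow";
            try_files $uri $uri/ =404;
        }
    }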

    Back to SEO

  • SEO

    Keyword Research Tools

    robots.txt

    https://search.google.com/test/rich-results – check if your site supports schema.

    https://learningseo.io – Free SEO Tutorials.

    https://quillbot.com – a paraphrasing tool for rewriting and enhancing sentences, paragraphs, or articles using AI.

    Backlink Checker

    Google Search Console Filters

    Go to Google Search Console > Performance.

    At the top of the page, click “+ New”. From the drop-down menu, select “Query”. In the popup window, select Custom (regex). Under “Enter regular expression”, enter the following text:

    ^(who|what|where|when|why|how|was|did|do|is|are|aren't|won't|does|if|can|could|should|would)[" "]
    

    This will show all question-based searches that bring you traffic.
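
    If you want to sanity-check the pattern before pasting it into Search Console, here is a small Python sketch; the sample queries are made up for illustration:

    import re

    # Same pattern as above: a question word at the start, followed by a quote or a space
    pattern = re.compile(r'^(who|what|where|when|why|how|was|did|do|is|are'
                         r"|aren't|won't|does|if|can|could|should|would)[\" ]")

    queries = [
        "how to fix canonical url",  # matches
        "what is robots.txt",        # matches
        "canonical url htaccess",    # does not match
    ]

    for q in queries:
        print(q, "->", bool(pattern.match(q)))

    Search Console uses RE2 regex syntax, but a simple pattern like this behaves the same in Python’s re module.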