Pages with RSS and sitemap entries
Inviting Google's crawler is an essential factor for SEO. Pages listed in the RSS feed and sitemap are crawled again and again, and each crawl refreshes their entries in the index.
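As a concrete illustration, crawl-friendly pages are usually listed in a sitemap.xml file. Below is a minimal sketch of building one with the Python standard library; the domain and page list are hypothetical examples.

```python
# Minimal sketch: build a sitemap.xml listing pages for the crawler.
# The URLs and dates below are hypothetical examples.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Return a sitemap.xml string for (loc, lastmod) pairs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

pages = [
    ("https://example.com/", "2015-01-01"),
    ("https://example.com/blog/first-post", "2015-01-04"),
]
print(build_sitemap(pages))
```

Regenerating this file whenever a page is published or updated keeps the `lastmod` dates fresh, which is one signal that invites the crawler back.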
Fetch as Google is for new pages, not for old pages
Google provides Fetch as Google in Search Console. It invites the crawler immediately and gets the page indexed soon afterwards.
With it, a page is indexed within a few minutes. However, the entry it creates seems to come from a quick, provisional indexing algorithm.
Therefore, it does not seem suitable for pages that are already indexed.
When we use it on an already indexed page, the fetched page is re-indexed very soon and its cache is updated as well, so the page appears in the results of time-ranged queries.
After installing the Content Management System and viewing the default demo site, you have to configure the system for your own site. This page shows you how to set up the CMS.
Set up your website on the CMS
To build your own site, first set up the CMS data. Click the "Article and Pages" button to open the document editor.
The following document editor is then shown.
The sitemap page in Google Webmaster Tools has changed: most pages, except the new blog posts, are now indexed.
Today is the fourth day since I started this website on a new domain.
The fastest feedback on indexed data is the sitemap page
To learn whether newly created pages have been indexed, the sitemap page gives the fastest information next to the site: command.
I am interested in how long it takes Google to analyse a website's contents, because that is what I need to know when I measure the response to SEO activities.
After you have created the main contents as blog categories, you have to write blog posts continuously. The main contents target middle keywords; by writing blog posts, you can generate search engine traffic through long-tail keywords.
Role of Contents
The website consists of two parts: main contents and blog articles.
Each part has its own strengths and weaknesses, and they compensate for each other. With this structure, we can build effective internal links and cover a lot of information with the smallest number of pages.
Main contents are blog categories
Main content pages are organized pages that also work as blog categories, but they carry a lot of original information of their own.
The Content Management System has functions to analyse Googlebot's accesses and the Webmaster Tools status. By checking them, you can assess the actual health of your SEO.
Check the crawler log
By checking the crawler log, you can assess the SEO health of the website. The crawler visits important pages; if it never comes to a page, that page has some problem.
Raw log data
Webmaster Tools shows crawling statistics, but only as a count of accesses.
In fact, there are several kinds of crawlers. By watching the raw log data in the Content Management System, you can see what is actually being crawled.
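For example, the raw log can be scanned for the different crawler User-Agent names to see which crawler fetched which page. The sketch below assumes Apache-style combined log lines (the sample lines are hypothetical); note that a User-Agent string can be spoofed, so a serious check would also verify the client with a reverse DNS lookup.

```python
# Sketch: classify crawler hits from raw access-log lines.
# Sample log lines are hypothetical; real ones come from your web server.
import re
from collections import Counter

# Order matters: check the more specific agent names first.
CRAWLER_AGENTS = ["Googlebot-Image", "Googlebot", "bingbot"]

def classify_crawler_hits(log_lines):
    """Count requests per (crawler, requested path)."""
    hits = Counter()
    for line in log_lines:
        # Combined log format: ... "GET /path HTTP/1.1" ... "User-Agent"
        m = re.search(r'"(?:GET|HEAD) (\S+) HTTP', line)
        path = m.group(1) if m else "?"
        for agent in CRAWLER_AGENTS:
            if agent in line:
                hits[(agent, path)] += 1
                break
    return hits

sample = [
    '66.249.66.1 - - [04/Jan/2015] "GET /blog/first-post HTTP/1.1" 200 1234 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.66.1 - - [04/Jan/2015] "GET /images/logo.png HTTP/1.1" 200 999 "-" "Googlebot-Image/1.0"',
]
for (agent, path), n in classify_crawler_hits(sample).items():
    print(agent, path, n)
```

Seeing which paths each crawler kind actually requests is exactly the information the aggregate Webmaster Tools count hides.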
After you write content, you have to publish it and notify the search engine as soon as possible. The content management system supports this life cycle: writing a draft, publishing it, and notifying the search engine.
Writing Draft contents
Once your website is open, you need to write draft content. Most CMSes have this function.
Preview and Publish the page
After you write a draft, you can preview the page with the current template.
The appearance in the WYSIWYG editor differs from the actual website, so you have to check the page before releasing it.
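The final "notify the search engine" step can be sketched as a sitemap ping. The endpoint below is the sitemap ping interface Google historically documented (Google has since deprecated it); the sitemap URL is a hypothetical example.

```python
# Sketch: notify the search engine after publishing by pinging the sitemap.
# The endpoint is Google's historical sitemap ping interface (now deprecated);
# the sitemap URL is a hypothetical example.
from urllib.parse import urlencode
from urllib.request import urlopen

PING_ENDPOINT = "http://www.google.com/ping"

def sitemap_ping_url(sitemap_url):
    """Build the ping URL with the sitemap location as a query parameter."""
    return PING_ENDPOINT + "?" + urlencode({"sitemap": sitemap_url})

def notify_search_engine(sitemap_url):
    # A plain GET request is all the ping interface required.
    with urlopen(sitemap_ping_url(sitemap_url)) as resp:
        return resp.status

print(sitemap_ping_url("https://example.com/sitemap.xml"))
```

A CMS can call such a hook automatically on publish, so the gap between releasing a page and inviting the crawler stays as short as possible.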