
HTML Factors

Time to first byte:

Time to first byte (TTFB) is a metric that measures the responsiveness of a web server or other network resource.
TTFB measures the time from the client issuing an HTTP request to the very first byte of the webpage being received by the client's browser.
This time is made up of the socket connection time, the time taken to send the HTTP request, and the time to receive the first byte of the page. It should not be mistaken for a measurement that starts only after DNS resolution.

The key stages of TTFB are as follows:
1) the client browser sends a request to the end server,
2) the server processes that request and generates a response,
3) the server sends the response back to the client.

Response times are generally measured in milliseconds (ms).
A good server response time is no higher than 300 ms, though this varies with the client's location. A response time greater than 300 ms, or extreme variation in response times, is a warning that there are issues with the server that need to be fixed.
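The three stages above can be timed from Python's standard library. A minimal sketch (the commented host name is just an example; real measurement tools are far more careful about this):

```python
import socket
import time

def measure_ttfb(host, port=80, path="/"):
    """Rough time to first byte: socket connect + request send +
    wait for the first response byte. DNS resolution happens inside
    create_connection(), so it is included in the figure."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=10) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n"
        )
        sock.sendall(request.encode("ascii"))
        sock.recv(1)  # blocks until the very first byte arrives
    return (time.monotonic() - start) * 1000  # milliseconds

# Example (needs network access):
# print(f"TTFB: {measure_ttfb('example.com'):.0f} ms")
```

Running this repeatedly from different locations gives a feel for how much of the variation comes from the network rather than the server.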

Avoid a Character Set In the Meta-Tag

A character set is a group of characters, each defined by a number; it is used to determine how the bytes of a web page should be decoded and displayed. Declaring the character set in a meta-tag creates a nuisance for browser rendering and for crawling search engines. A typical meta-tag looks like this: <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">. The main problem with such meta-tags is information duplication: most browsers already receive the character set in the HTTP headers the server sends automatically, so the meta-tag may simply be overridden. Most servers, such as Apache and Nginx, already declare it in their configuration or .htaccess files, e.g. an Apache .htaccess file:

AddType 'text/html; charset=UTF-8' html

Nginx config file:

http {
    include /etc/nginx/mime.types;
    charset UTF-8;
    ...
}

Avoiding charset meta-tags reduces our page-load time and skips the information duplication that the browser would otherwise have to resolve.

Webpage Redirection

URL redirection makes a web page load more slowly because each hop from one URL to another costs an extra request-response round trip, and many websites suffer speed issues because of this. If you avoid redirects and keep the original URL, you serve your content significantly faster; the total time taken to load a page grows with the number of redirects the site has. Redirects are likely the single biggest time-waster in your code, especially when you consider mobile networks: they dramatically hurt your page speed, and they affect mobile users most of all, since mobile networks are less reliable than the ones your desktop users are on.
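To see how long a redirect chain is before content is actually served, you can follow the Location headers by hand. A minimal standard-library sketch (HTTPS certificate details, cookies, and query strings are ignored here):

```python
import http.client
from urllib.parse import urlparse, urljoin

def count_redirects(url, limit=10):
    """Follow Location headers; return (hop_count, final_status)."""
    hops = 0
    while hops <= limit:
        parts = urlparse(url)
        conn_cls = (http.client.HTTPSConnection
                    if parts.scheme == "https" else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc, timeout=10)
        conn.request("GET", parts.path or "/")
        resp = conn.getresponse()
        if resp.status in (301, 302, 303, 307, 308):
            # Location may be relative, so resolve it against the current URL
            url = urljoin(url, resp.getheader("Location"))
            conn.close()
            hops += 1
            continue
        conn.close()
        return hops, resp.status
    raise RuntimeError("too many redirects")

# Example (needs network access):
# hops, status = count_redirects("http://example.com/")
```

Every hop reported here is a full round trip the visitor pays before any content arrives.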

Avoid E-Tag Status

Entity tags (ETags) are a way to determine whether a component in the browser's cache matches the one on the origin server. That component can be an image, a stylesheet, a script, and so on. Sending ETags in the header of every response means extra validation requests for the server to process before the corresponding status is sent back to the client. A sample ETag looks like this:

HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Content-Length: 12195

ETags provide a flexible validation model, but if you are not taking advantage of it, you should not send the ETag header at all and should remove it altogether.
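If you decide to drop ETags, they can be switched off in an Apache .htaccess file. A sketch, assuming mod_headers is enabled:

```apache
# Stop generating ETags for static files
FileETag None
# Strip any ETag header that is still produced
Header unset ETag
```

With ETags gone, caches fall back to Last-Modified validation, which the response above also carries.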

IPv6 Enabled

When talking about network protocols, IPv6 is better than IPv4 in terms of speed and stability; big players like Facebook and LinkedIn reported around 30% improvements in European locations after switching their protocols from v4 to v6. IPv6 addresses are 128 bits long and are written in hexadecimal digits: instead of zero through nine, each digit can be zero through nine plus 'a' through 'f' (base 16). That gives a total of about 340 undecillion (3.4 x 10^38) possible combinations, so we will not have to worry about running out of IPv6 addresses any time soon. Web Analyzer therefore recommends switching from IPv4 to IPv6 for better speed performance.
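Whether a given host publishes an IPv6 (AAAA) record can be checked from Python's standard library. A minimal sketch:

```python
import socket

def has_ipv6(hostname):
    """True if the name resolves to at least one IPv6 address."""
    try:
        return len(socket.getaddrinfo(hostname, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

# Example (needs DNS access):
# print(has_ipv6("google.com"))
```

Note that this only shows the site advertises IPv6; whether the client's own network can reach it over v6 is a separate question.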

Total Page Requests

Every time we visit a webpage, the web browser contacts the server hosting that page and asks it to send back the files containing the site's content. These files constitute the text, the images, and the videos or other multimedia that exist on that webpage.

The Hypertext Transfer Protocol (HTTP) is a protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web (WWW).

Such a request is known as an HTTP request: the web browser sends a request for a document or file, and the server transfers that particular document or file back to the browser.
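A very rough sense of how many extra HTTP requests a page will trigger can be had by counting the external resources referenced in its markup. A minimal sketch with the standard-library HTML parser (real pages also pull resources from CSS and scripts, which this ignores):

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Collect URLs of scripts, stylesheets, images, and media
    that will each cost one additional HTTP request."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img", "iframe", "video", "audio", "source") \
                and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" \
                and attrs.get("href"):
            self.resources.append(attrs["href"])

html = ('<html><head><link rel="stylesheet" href="a.css">'
        '<script src="b.js"></script></head>'
        '<body><img src="c.png"></body></html>')
counter = ResourceCounter()
counter.feed(html)
print(len(counter.resources))  # prints 3
```

Each entry in `resources` is one more round trip the browser must make on top of fetching the HTML itself.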

Page Size

The first web page went live way back on August 6, 1991. With years of fast technological development and user interfaces becoming more prominent, page sizes have increased significantly: in 2010 the average web page was 702 kB, and in 2017 it was 3422 kB. So the challenge lies in showing all the dynamic content without compromising on speed.

The overall size of a web page is made up of a number of components. Keep in mind that the size of the text itself is rarely a factor and is usually negligible. The most important components are the JavaScript, the images, the graphical content and icons, and the videos.

You can read elsewhere in this guide how to better serve JavaScript, CSS, and images on a web page. An average page size should be around 200 kB. Along with page size, the total number of HTTP requests also matters: a web page weighing only 200 kB but making over 100 HTTP requests is probably worse off than a 500 kB page with only 20 requests.

DNS Lookup

DNS, the Domain Name System, is the scheme that keeps track of which domain name maps to which Internet Protocol (IP) address. Precisely, it is the process of finding out which IP a domain or URL belongs to. The browser mainly handles the DNS lookups. Web Analyzer recommends no more than 15 unique domains per page for good speed. On average, a DNS lookup takes between 20 and 120 ms to complete.

Also, DNS records can be cached to improve network speed. The DNS cache speeds up the process by managing the name resolution of recently visited addresses before a request is sent out to the internet.
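The same idea can be mimicked in application code by memoising lookups. A minimal sketch (a real resolver cache also honours record TTLs, which lru_cache does not):

```python
import socket
from functools import lru_cache

@lru_cache(maxsize=256)
def resolve(hostname):
    """First call pays the 20-120 ms lookup; repeats come from cache."""
    return socket.gethostbyname(hostname)

# Example:
# resolve("localhost")   # real lookup
# resolve("localhost")   # instant, served from the cache
```

Operating systems and browsers maintain exactly this kind of cache for you, which is why the first visit to a new domain is the slow one.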

Page Load Time

Page Load Time is a performance metric that tells how much time it takes for a webpage to fully render. It is a direct factor in user engagement and a business's bottom line. The time from when a user requests a webpage until the entire content of that page is displayed in the requesting browser is the Page Load Time. Web Analyzer recommends a page load time of less than 3 seconds as a good metric; above that, you should think about taking immediate corrective measures. Page Load Time depends on the time to first byte, i.e. how fast the web server responds, and on all the static files, including images, CSS, and JS.

Render Blocking Sources

Render-blocking resources are the factors that delay page load time the most, so removing this extra overhead should come first. The question is how to find these resources. Web Analyzer helps by giving a waterfall of all the resources against the time each takes to load, so one can easily spot the offenders, whether CSS, JS, or a broken image. These resources can be dealt with by following these steps:

  • Properly calling the CSS and JS files - using a genuine link tag instead of @import
  • Reducing the number of CSS files in the head tag
  • Removing any <script></script> tags from the head
  • Adding the async (or defer) attribute to script tags that cannot be moved out of the head
Removing these factors will make our page load faster and result in higher user engagement and more pageviews.
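The last two points can be sketched in markup (the file names here are placeholders):

```html
<head>
  <link rel="stylesheet" href="styles.css">
  <!-- defer downloads the script in parallel and runs it only
       after the document has been parsed, so it never blocks rendering -->
  <script src="app.js" defer></script>
</head>
```

async behaves similarly but runs the script as soon as it arrives, so use it only for scripts that do not depend on the DOM or on each other.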

Use Gzip Compression

GZIP is used for file compression and decompression. It is generally enabled server-side, and it reduces the size of the HTML being transferred. One can check whether a server has GZIP enabled by looking for the response header "content-encoding: gzip": if the header is present, the server is serving compressed files. This is one of the most common ways to serve files. Web Analyzer detects it by checking the website's response and letting you know the outcome.
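The saving is easy to demonstrate with Python's gzip module, which implements the same compression scheme; repetitive markup like HTML compresses especially well:

```python
import gzip

html = b"<html><body>" + b"<p>hello world</p>" * 500 + b"</body></html>"
compressed = gzip.compress(html)

print(len(html), "bytes raw")
print(len(compressed), "bytes gzipped")
assert gzip.decompress(compressed) == html  # lossless round trip
```

Real pages are less repetitive than this toy string, but text-based formats still routinely shrink by 60-80%.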

Connection Keep-Alive

Keep-alive is a way to let the same TCP connection persist across HTTP conversations instead of opening a new connection for each request. It is also called a persistent connection. One can check for the "connection: keep-alive" header in the response to a webpage request. On a typical Apache web server, keep-alive can be enabled in the Apache config file.
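A sketch of the relevant Apache directives (the values shown are common defaults, not tuned recommendations):

```apache
# httpd.conf - reuse each TCP connection for up to 100 requests,
# closing it after 5 idle seconds
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```

Short timeouts free worker slots quickly; longer ones help pages that fetch many small resources from the same host.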

Vary Header

The Vary header tells an HTTP cache which parts of the request headers, other than the path and the Host header, to take into account when trying to find the right object. It does this by listing the names of the relevant headers, which in this case is Accept-Encoding. If more than one header influences the response, they are all listed in a single Vary header, separated by commas. A typical compressed response should look something like:

HTTP/1.1 200 OK
Content-Length: 3458
Cache-Control: max-age=86400
Content-Encoding: gzip
Vary: Accept-Encoding
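In Nginx, assuming the stock gzip module, compression together with the matching Vary header can be switched on like this:

```nginx
http {
    gzip on;
    # emit "Vary: Accept-Encoding" on responses that may be compressed
    gzip_vary on;
    # text/html is compressed by default; list other text types explicitly
    gzip_types text/css application/javascript;
}
```

Without gzip_vary, a shared cache could serve a gzipped copy to a client that never asked for compression.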