Open elective web device

The web is useful in many ways:

  • Through the web, we get to know about many different things.
  • From the web, we can do our work easily.
  • We can promote our business through the web.
  • We can build our business website on the web.
  • Through the web, we can also get information while sitting at home, from anywhere.


Difference between Chrome and Opera

Opera

Opera comes with a lot of built-in features, both major and minor. There’s a snapshot tool that lets you take screen grabs of an entire webpage, regardless of length. Also included is a news reader that lets you customize the sources, giving you easy access to all your regular news outlets in one place.


Battery-saving mode promises up to 1 extra hour of run time compared with Chrome.

There’s some basic customization you can do, including dark and light modes, wallpapers and themes. Integration with popular messaging apps, like Facebook Messenger and WhatsApp, is also included, and you can access them through the shortcut bar on the left side of the screen. This is incredibly convenient, as you don’t have to dedicate separate tabs to these applications.


Chrome

On mobile, there are even fewer advanced or unique features than on desktop; the only one of note is the reading list, which lets you save webpages so that you can access them later when you’re offline.


A huge range of Chrome extensions and themes is available on the Chrome Web Store. Chrome is an especially good choice if you're a fan of the Google ecosystem of apps (Gmail, Drive, Docs, Sheets, and others).

Chrome certainly has a larger library of extensions, but Opera is compatible with many of these in addition to its own dedicated add-ons. In effect, this neutralizes Chrome’s biggest advantage when it comes to features, as Opera contains far more advanced functionality built into the browser, such as a snapshot tool, messaging integration and a newsreader.

Comparison between four browsers (PPT slides, not included here)




Types of Search Engines

Crawlers

These types of search engines use a "spider" or a "crawler" to search the Internet. The crawler digs through individual web pages, pulls out keywords and then adds the pages to the search engine's database. Google and Yahoo are examples of crawler search engines.

The advantages of crawlers are:

  • They cover a huge number of pages.
  • Ease of use.
  • Familiarity. Most people who search the Internet are familiar with Google.

There are several disadvantages to crawlers:

  • Sometimes, it's just too much information.
  • It is easy to trick the crawler. Websites can carry hidden data that makes a page appear to be something it's not, so that search result for Descartes might actually take you to a porn site.
  • Page rank can be manipulated. While search engine companies frown on the practice, there are ways to improve where your page appears on the list of results.
 

Directories

Directories are human-powered search engines. A website is submitted to the directory and must be approved for inclusion by editorial staff. The Open Directory Project and the Internet Public Library are examples of directories.

Advantages:

  • Each page is reviewed for relevance and content before being included.  This means no more surprise porn sites.
  • Fewer results sometimes means finding what you need more quickly.

Disadvantages:

  • Unfamiliar design and format.
  • Delay between a website's creation and its inclusion in the directory.
  • May have trouble with more obscure searches.

Hybrids

Hybrids are a mix of crawlers and directories. Sometimes, when you search, you can choose whether to search the web or a directory. Other times, you may receive both human-powered results and crawler results for the same search. In this case, the human-powered results are usually listed first.

 

Meta

Meta search engines search several other search engines at once and combine the results into one list. While you get more results with meta search engines, the relevancy and quality of the results may sometimes suffer. Dogpile and Clusty are examples of meta search engines.
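As a rough sketch of that merging step, the Python below interleaves and de-duplicates results from two engines. The two engine functions are hypothetical stand-ins invented for this example; a real meta search engine would query each engine over HTTP:

def search_engine_a(query):
    # Hypothetical stand-in for a real search API.
    return ["https://a.example/1", "https://shared.example/result"]

def search_engine_b(query):
    # Hypothetical stand-in for a second search API.
    return ["https://shared.example/result", "https://b.example/2"]

def meta_search(query):
    """Interleave results from several engines, dropping duplicate URLs."""
    result_lists = [search_engine_a(query), search_engine_b(query)]
    seen, merged = set(), []
    # Take one result from each engine in turn (round-robin).
    for rank in range(max(len(r) for r in result_lists)):
        for results in result_lists:
            if rank < len(results) and results[rank] not in seen:
                seen.add(results[rank])
                merged.append(results[rank])
    return merged

print(meta_search("descartes"))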

Crawling, indexing and ranking

Crawling
Crawling is the process by which search engines discover updated content on the web, such as new sites or pages, changes to existing sites, and dead links.

To do this, a search engine uses a program that can be referred to as a ‘crawler’, ‘bot’ or ‘spider’ (each search engine has its own type) which follows an algorithmic process to determine which sites to crawl and how often.

As a search engine’s crawler moves through your site it will also detect and record any links it finds on these pages and add them to a list that will be crawled later. This is how new content is discovered.
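To make that loop concrete, here is a simplified crawl sketch using only Python's standard library. It is an illustration of the follow-the-links idea, not how any real search engine's crawler works: robots.txt handling, politeness delays and re-crawl scheduling are all omitted.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Records the href of every <a> tag found while parsing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    frontier = [seed_url]   # pages waiting to be crawled
    visited = set()         # pages already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)
        if url in visited:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue        # dead or malformed link: skip it
        visited.add(url)
        parser = LinkCollector()
        parser.feed(html)
        # Newly discovered links join the frontier to be crawled later.
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited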

Indexing
Once a search engine processes each of the pages it crawls, it compiles a massive index of all the words it sees and their location on each page. It is essentially a database of billions of web pages.

This extracted content is then stored, with the information then organised and interpreted by the search engine’s algorithm to measure its importance compared to similar pages.
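A toy version of such an index takes only a few lines of Python. The sketch below records each word and its position on each page, mirroring the "words and their location" idea above; the example pages are invented, and a real index is compressed and sharded across many servers.

from collections import defaultdict

def build_index(pages):
    """Map each word to the (url, position) pairs where it occurs."""
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

pages = {
    "https://example.com/a": "art deco furniture and lounge chairs",
    "https://example.com/b": "modern furniture for the home",
}
index = build_index(pages)
print(index["furniture"])   # [('https://example.com/a', 2), ('https://example.com/b', 1)]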

Servers based all around the world allow users to access these pages almost instantaneously. Storing and sorting this information requires significant space and both Microsoft and Google have over a million servers each.

Ranking
As SEOs, this is the area we are most concerned with and the part that allows us to show clients tangible progress.

Once a keyword is entered into a search box, search engines check their index for the pages that are the closest match; a score is assigned to these pages based on an algorithm consisting of hundreds of different ranking signals.

These pages (or images & videos) will then be displayed to the user in order of score.
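For illustration only, here is a toy version of that scoring step. The signals and weights below are invented for the example, since real engines combine hundreds of proprietary signals:

def score(page):
    """Combine a few made-up ranking signals into a single score."""
    return (2.0 * page["keyword_matches"]    # how often the query terms appear
            + 1.5 * page["inbound_links"]    # a crude popularity signal
            - 0.5 * page["load_time_secs"])  # slower pages rank lower

candidates = [
    {"url": "https://a.example", "keyword_matches": 3, "inbound_links": 10, "load_time_secs": 1.2},
    {"url": "https://b.example", "keyword_matches": 5, "inbound_links": 2, "load_time_secs": 0.4},
]
ranked = sorted(candidates, key=score, reverse=True)
print([page["url"] for page in ranked])   # best-scoring page first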

So in order for your site to rank well in search results pages, it’s important to make sure search engines can crawl and index your site correctly – otherwise they will be unable to appropriately rank your website’s content in search results.

Each major search engine follows a broadly similar methodology to the process described above.

What Is Black Hat SEO?

Black hat SEO refers to a set of practices used to increase a site's or page's rank in search engines through means that violate the search engines' terms of service.

Long-Tail Keywords

Long-tail keywords are longer and more specific keyword phrases that visitors are more likely to use when they’re closer to a point-of-purchase or when they're using voice search. They’re a little bit counter-intuitive, at first, but they can be hugely valuable if you know how to use them.

Take this example: if you’re a company that sells classic furniture, the chances are that your pages are never going to appear near the top of an organic search for “furniture” because there’s too much competition (this is particularly true if you’re a smaller company or a startup). But if you specialize in, say, contemporary art-deco furniture, then keywords like “contemporary Art Deco-influenced semi-circle lounge” are going to reliably find those consumers looking for exactly that product.

Managing long-tail keywords is simply a matter of establishing better lines of communication between your business and the customers who are already out there, actively shopping for what you provide.

Think about it: if you google the word “sofa” (a very broad keyword sometimes referred to as a “head term”) what are the chances you’re going to end up clicking through to a sale? But if you google “elm wood veneer day-bed” you know exactly what you’re looking for and you’re probably prepared to pay for it then and there.

Obviously, you’re going to draw less traffic with a long-tail keyword than you would with a more common one, but the traffic you do draw will be better: more focused, more committed, and more desirous of your services.

Keyword density

Keyword density is the percentage of times a keyword or phrase appears on a web page compared to the total number of words on the page. In the context of search engine optimization, keyword density can be used to determine whether a web page is relevant to a specified keyword or keyword phrase. 

What Is Keyword Density?

Keyword density refers to the number of times a keyword appears on a given webpage or within a piece of content as a ratio or percentage of the overall word count. This is also sometimes referred to as keyword frequency, or the frequency with which a specific keyword appears on a webpage.

Keyword density formula

Keyword density can also be calculated as a specific figure, should you need to. To determine the keyword density of a webpage, divide the number of times a given keyword appears by the total number of words on the page, then multiply by 100 to express it as a percentage – the resulting figure is the keyword density of that page.
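That formula is straightforward to express in code. The sketch below handles single-word keywords only (counting multi-word phrases would need a slightly different approach), and the sample text is invented for the example:

def keyword_density(text, keyword):
    """Keyword density as a percentage: (keyword count / total words) * 100."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words) * 100

page_text = "art deco furniture shop offering art deco lounge furniture"
print(keyword_density(page_text, "furniture"))   # 2 of 9 words -> about 22.2%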

Link juice

Link juice is the term used in the SEO world to refer to the value or equity passed from one page or site to another. This value is passed through hyperlinks. Search engines see links as votes by other websites that your page is valuable and worth promoting.
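The "votes" idea is what Google's original PageRank algorithm formalises: each page's value is divided among its outgoing links. Here is a bare-bones power-iteration sketch; the damping factor of 0.85 follows the classic formulation, and the three-page graph is invented for the example.

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively spread each page's rank across its outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)   # juice per link
            for target in outlinks:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))   # "c" ends up highest: it receives the most votes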
