Thursday, December 18, 2008

THE GREATEST SECRET TO FINANCIAL FREEDOM: I AM A LIVING WITNESS

Basics

1. How much can I earn? With one membership you can earn $65,559.00 within 4 months; please watch our Flash Presentation to learn how easy it is to earn $65,559.00 every year.

2. What are the bonuses and incentives? There are 5 ways to earn, and all bonuses and commissions (except #1) are sent automatically and instantly by our automated system:
#1 Sell the $5,000 worth of digital products included in your membership (this is optional).
#2 Direct Referral Bonus: $5 for each new member you refer.
#3 Pairing Bonus: $10 per pair, earned once a member is paired with your new member.
#4 Referred Referral Bonus: $1 per new member referred by your direct referrals; you earn this once your referrals refer others.
#5 Binary Networking, another source of earnings, explained in the next question.

3. What is #5, Binary Networking? Binary Networking is simply the structure of all your downlines. It does not limit how many people you can refer; it only positions your downlines in twos. It is another source of earnings: with the binary network you earn $0.50 every time a new downline appears in your binary structure.

4. What are the other benefits of joining the Club? The major benefit is the bonuses and commissions. Other benefits are access to the $5,000+ worth of digital products (which you may use or sell), a banner ad spot on your 22USD.com site, and a monthly reward for active, working members.

5. How do I get paid? All your bonuses and commissions are sent to your e-gold account automatically and instantly by our automated system as soon as you earn them.

6. How do I join? Click the Join link at the top of the page, complete the simple form, and pay the $22 membership fee.

7. There is a message "You have no sponsor, you will be assigned to root." What is that? The system requires a direct sponsor, and you are signing up through a website that has no sponsor record, so the system assigns you to the default sponsor, "root". If someone invited you to register through his or her website and you see this notification, you probably mistyped your sponsor's website address. Pay attention to these details before registering, because once you have registered you cannot change your sponsor.

8. Can I register 2 or more accounts? Yes, you can have as many accounts as you like; you can even use the same e-gold and email address for all of your accounts.
For more information, please follow this link: http://www.rogo.22usd.com/?option=join

LEARN HOW TO TRADE FOREX

Orders and Positions
When you want to open a position you need to place an "entry" order. If and when the entry order executes, the position becomes "open" and starts its life on the market. At some point in the future, you will place an "exit" order to "close" the position. A position can be "long" (entry order is to buy and exit order is to sell an instrument) or "short" (entry order is to sell and exit order is to buy an instrument).
At the point when you place your entry order, you need to define the price level at which you want to buy or sell a certain instrument. You also need to specify the type of the order and the quantity of the instrument you want to trade. There are three order types:

Market Order
Placing a market order means that you will buy at the current "ask" (or "offer") price, or sell at the current "bid" price, whatever that price currently is. For example, suppose you are buying a market instrument and its current market price is 129.34 / 129.38. This means a participant in the market is willing to buy the instrument from you at 129.34 and / or sell it to you at 129.38.

Stop Order
Initiating a trade with a stop order means that you will only open a position if the market moves in the direction you are anticipating. For example, if an instrument is trading at 129.34 / 129.38 and you believe it will move higher, you could place a stop order to buy at 129.48. This means that the order will only be executed if the ask price in the market moves up to 129.48. The advantage is that if you are wrong and the market moves straight down, you will not have bought (because 129.48 will never have been reached). The disadvantage is that 129.48 is clearly a less attractive rate at which to buy than 129.38. Opening a position with a stop order is usually appropriate if you wish to trade only with strong market momentum in a particular direction.

Limit Order
A limit order is an order to buy below the current price, or sell above the current price. For example, if an instrument is trading at 129.34 / 129.38 and you believe the market will rise, you could place a limit order to buy at 129.28. If executed, this will give you a long position at 129.28, which is 10 pips better than if you had just used a market order. The disadvantage of the limit order is that if the instrument moves straight up from 129.34 / 129.38 your limit at 129.28 will never be filled and you will miss out on the profit opportunity even though your view on the direction was correct. Opening a position with a limit order is usually appropriate if you believe that the market will remain in a range before moving in your anticipated direction, allowing the order to be filled first.
For both entry and exit orders you can specify the price levels at which you want them to be executed. You have to specify entry levels when you place your entry order, while most trading systems allow you to specify exit levels at any time.
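The execution conditions for the three order types can be summarized in a few lines of code. This is an illustrative sketch (the function name and structure are not from any real trading API); it assumes a buy fills against the ask price and a sell against the bid price.

```python
def order_executes(order_type, side, level, bid, ask):
    """Return True if an entry order would execute at the current quote."""
    if order_type == "market":
        return True                      # fills immediately at the bid/ask
    price = ask if side == "buy" else bid
    if order_type == "stop":             # buy above / sell below the market
        return price >= level if side == "buy" else price <= level
    if order_type == "limit":            # buy below / sell above the market
        return price <= level if side == "buy" else price >= level
    raise ValueError(order_type)

# The 129.34 / 129.38 examples from the text:
print(order_executes("stop", "buy", 129.48, 129.34, 129.38))   # False until the ask rises
print(order_executes("limit", "buy", 129.28, 129.34, 129.38))  # False until the ask falls
```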

Buying and Selling
A financial market is a mechanism that allows people to easily buy and sell (trade) market instruments at low transaction costs and at prices that reflect efficient markets. Financial markets have evolved significantly over several hundred years and are undergoing constant innovation to improve liquidity.
If you believe the value of a market instrument is going to increase, you would buy the instrument and, at some point in the future, sell it for a higher price. This is the basic motivation for trading on financial markets.

1. What do I need to do in trading?
Your goal in trading is to buy at a lower price and sell afterwards at a higher price. For example, you can buy a market instrument (quantity of 10000) for 1.2349 and sell it later for 1.2458. You will make a profit of 109 (in the currency the instrument is denominated in).
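The arithmetic in this example is just (sell price − buy price) × quantity; a minimal sketch:

```python
def trade_profit(buy_price, sell_price, quantity):
    """Profit in the currency the instrument is denominated in."""
    return round((sell_price - buy_price) * quantity, 2)

print(trade_profit(1.2349, 1.2458, 10000))  # 109.0
```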

2. When is the market open for trading?
You can trade from Sunday 17:00 to Friday 17:00 New York local time. Virtual trading is open at all times. Please visit http://www.timeanddate.com/worldclock/city.html?n=179 to check New York local time. The following approximate market schedule is based on New York local time: Japan markets open at 19:00 followed by Singapore and Hong Kong that open at 21:00. European markets open in Frankfurt at 2:00, while London opens an hour later. New York markets open at 8:00 (NYSE opens at 9:00). European markets close at 12:00 and Australian markets start again at 18:00.
3. Is there an easy way to start trading?
The easiest way to start trading is to click on a market instrument in a price window. When the [Send Order] dialog shows up, you can set the [Quantity] field to 1 or more (depending on the amount of money you have on the currently active trading desk). When you click the button, the order will go into the market. You can find a collection of introductory articles and various other resources to help you understand trading basics at http://www.marketiva.com/index.ncre?page=resources
4. What are long and short positions?
A long (buy then sell) position is one in which a trader buys a market instrument at one price and aims to sell it later at a higher price; in this scenario, the trader benefits from a rising market. A short (sell then buy) position is one in which the trader sells a market instrument in anticipation that it will depreciate; in this scenario, the trader benefits from a declining market. For more details, please check http://www.marketiva.com/index.ncre?page=re-orders-and-positions
5. What are entry limit and stop levels?
A limit order is an order to buy below the current price, or sell above the current price. For example, if a market instrument is trading at 1.2952 / 55 and you believe that price is expensive, you could place a limit order to buy at 1.2945. If executed, this will give you a long position at 1.2945, which is 10 points better than if you had just bought the instrument with a market order. A stop order is an order to buy above the current market price or sell below it, and is used if you are away from your desk and want to catch a trend. If a market instrument is trading at 1.2952 / 55, you could place a stop buy order at 1.2970. If the market moves up to that price, your order will execute and open a long position. If the market continues in the same direction (a trend), the position will bring you profit.
6. What is stop-loss and target level?
A stop-loss order ensures that a particular position is automatically liquidated at a predetermined price in order to limit potential losses should the market move against a trader's position. The exit target level is a price level at which you want to close your position once you reach a certain profit. You can set the exit target level when you open your position or at any time while the position is open.
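A hypothetical exit check for a long position, using the two levels just described (the function and level names are illustrative, not any platform's API):

```python
def check_exit(price, stop_loss, target):
    """Exit signal for a long position: stop-loss below, target above."""
    if price <= stop_loss:
        return "stop-loss"   # cut the loss at the predetermined price
    if price >= target:
        return "target"      # lock in the desired profit
    return None              # position stays open

print(check_exit(129.20, 129.25, 129.60))  # stop-loss
```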
What are GTC, GTD and IOC orders?
A GTC (Good Till Cancelled) order stays in the market until you cancel it, and is the default order duration type. A GTD (Good Till Date) order stays in the market until a date you specify, and an IOC (Immediate Or Cancel) order is executed immediately (if other order conditions are met) or cancelled.
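The three duration types can be sketched as a simple check (an illustration only; real platforms enforce this server-side, and the function name is hypothetical):

```python
from datetime import date

def order_active(duration, today, expiry=None, filled=False):
    """Whether an order remains in the market, per the three duration types."""
    if duration == "GTC":    # Good Till Cancelled: stays until cancelled
        return True
    if duration == "GTD":    # Good Till Date: stays until the given date
        return today <= expiry
    if duration == "IOC":    # Immediate Or Cancel: fills now or is cancelled
        return filled
    raise ValueError(duration)

print(order_active("GTD", date(2008, 12, 18), expiry=date(2008, 12, 31)))  # True
```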
7. What are points and point values?
A point is the smallest change in a market instrument's price. If a price changes from 1.2000 to 1.2001, or from 201.10 to 201.11, it has changed by 1 point. The point value depends on your position size. Please read more about how to calculate point values at http://www.marketiva.com/index.ncre?page=re-calculating-profit
8. How do I calculate my profit?
Your profit depends on your position size and the difference between the prices traded. If you buy a market instrument for 129.38 (quantity of 10000) and later sell it for 129.52, your profit will be (129.52 - 129.38) * 10000 = 1400. You can read more on how to calculate your profit at http://www.marketiva.com/index.ncre?page=re-calculating-profit
Calculating Profit
The objective of trading is to buy a market instrument and later sell the same instrument for a higher price. In the case of margin trading, a trader can also sell a market instrument first and later buy the same instrument back for a lower price. Either way, the trader has to close the position in order to lock in the profit.
Let us assume that you open a long position by buying a market instrument for 129.38 (quantity of 10000) and, a few hours after that, you close the position by selling it for 129.52 (the same quantity of 10000). These two trades would bring you a profit of (129.52 - 129.38) * 10000 = 1400.
We can also say that these two trades would bring you 14 "points" profit. A "point" is the smallest increment in an instrument's price. For the instrument in the above example, one point is 0.01 and for an instrument denominated with 4 decimals, one point would be 0.0001. Expressing position profits in points is often very useful for quick calculations and estimates.
One point, from the example position above, would bring you 0.01 * 10000 = 100 profit, denominated in the same currency the market instrument is denominated in.
In the case of Forex, the currency pair is denominated in the counter currency (JPY is the counter, or quote, currency in the USD/JPY pair), and you may need an additional currency conversion to get the profit calculated in the currency your trading account is denominated in.
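The point-value arithmetic above (one point = 0.01 for a 2-decimal instrument, 0.0001 for a 4-decimal one) can be written as a small helper; the names are illustrative:

```python
def point_value(quantity, decimals):
    """Profit from a one-point move, in the instrument's quote currency."""
    return quantity / 10 ** decimals

print(point_value(10000, 2))  # 100.0  (two-decimal pricing, as in the example)
print(point_value(10000, 4))  # 1.0    (four-decimal pricing)
```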

9. Are there any restrictions on quantity?
You can specify any position size in our trading system, as we don't have strict quantity specifications. For example, you can specify: 1, 3, 7, 23, 154, 837, 3497, 10000, 100000 or any other quantity when you send an entry order.
Where can I see spread sizes?
Spreads between bid and offer (ask) prices are variable. For detailed information, please visit http://www.marketiva.com/index.ncre?page=market-conditions. Price spreads often change unexpectedly and increase greatly on weekends, in after-hours trading, and during market-related announcements or market turmoil.
10. How do I manage risk in trading?
The limit order and the stop-loss order are the most common risk management tools in trading. A limit order places restriction on the maximum price to be paid or the minimum price to be received. A stop-loss order ensures a particular position is automatically liquidated at a predetermined price in order to limit potential losses should the market move against a trader's position. For more details, please check http://www.marketiva.com/index.ncre?page=re-controlling-risk page.
11. What kind of strategy should I use?
Traders make decisions using business reports, economic fundamentals, technical factors and other relevant information. Technical traders use charts, trend lines, support and resistance levels, and numerous patterns and mathematical analyses to identify trading opportunities, whereas fundamentalists predict price movements by interpreting a wide variety of economic information, including news, business reports, government-issued indicators and reports, and even rumors. The most dramatic price movements, however, occur when unexpected events happen. The event can range from a central bank raising domestic interest rates to the outcome of a political election or even an act of war. Nonetheless, more often it is the expectation of an event that drives the market rather than the event itself.
12. How long are positions maintained?
As a general rule, a position is kept open until one of the following occurs: 1) realization of sufficient profits from a position; 2) the specified stop-loss is triggered; 3) another position that has a better potential appears and you need these funds.
General Trading Guidelines
Plan your trade and trade your plan: You must have a trading plan to succeed. A trading plan should specify the position, the reason for entering, the stop-loss point, the profit-taking level, and a sound money-management strategy. A good plan will remove all the emotion from your trades.
The trend is your friend: Do not buck the trend. When the market is bullish, go long. Conversely, if the market is bearish, go short. Never go against the trend.
Focus on capital preservation: This is the most important step you must take when you deal with your trading capital. Your main goal is to preserve your capital. Do not trade more than 10% of your deposit in a single trade. For example, if your total deposit is $10,000, every trade should be limited to $1,000. If you don't do this, you'll be out of the market very soon.
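The 10% rule above is simple arithmetic; a sketch, with the function name being hypothetical:

```python
def max_trade_size(deposit, cap=0.10):
    """Largest single-trade commitment under the 10% capital-preservation rule."""
    return deposit * cap

print(max_trade_size(10000))  # 1000.0
```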
Know when to cut losses: If a trade goes against you, sell it and let it go. Do not hold on to a bad trade hoping that the price will go up; most likely, you will end up losing more money. Before you enter a trade, decide your stop-loss price: the price at which you must sell if the trade turns sour. How wide you set the stop loss depends on your risk profile.
Take profit when the trade is good: Before entering a trade decide how much profit you are willing to take. When a trade turns out to be good, take the profit. You can take profit all at one go, or take profit in stages. When you've recovered your trading cost, you have nothing to lose. Sit tight and watch the profit run.
Be emotionless: The two biggest emotions in trading are greed and fear. Do not let them influence your trades. Trading is a mechanical process, and it is not for the emotional. As Dr. Alexander Elder said in his book "Trading for a Living", if you sit next to a successful trader and observe him or her, you might not be able to tell whether he or she is making or losing money. That is how emotionally stable a successful trader is.
Do not trade based on tips from other people: Trade only when you have done your own research. Be an informed trader.
Keep a trading journal: When you buy a market instrument, write down the reasons why you buy, and your feelings at that time. You do the same when you sell. Analyze and write down the mistakes you've made, as well as things that you've done right. By referring to your trading journal, you learn from your past mistakes. Improve on your mistakes, keep learning and keep improving.
When in doubt, stay out: When you are in doubt and not sure where the market is going, stay on the sidelines. Sometimes doing nothing is the best thing to do.
Do not overtrade: Ideally you should have 3-5 positions at a time. No more than that. If you have too many positions, you tend to be out of control and make emotional decisions when there is a change in market. Do not trade for the sake of trading.
Technical Analysis
Technical analysis differs from fundamental analysis in that technical analysis is applied only to the price action of the market, ignoring fundamental factors. As fundamental data can often provide only a long-term or "delayed" forecast of market price movements, technical analysis has become the primary tool with which to successfully trade shorter-term price movements, and to set stop loss and profit targets.
Technical analysis consists primarily of a variety of technical studies, each of which can be interpreted to generate buy and sell decisions or to predict market direction.
Support and Resistance Levels
One use of technical analysis, apart from technical studies, is in deriving "support" and "resistance" levels. The concept here is that the market will tend to trade above its support levels and trade below its resistance levels. If a support or resistance level is broken, the market is then expected to follow through in that direction. These levels are determined by analyzing the chart and assessing where the market has encountered unbroken support or resistance in the past.
Popular Technical Analysis Tools
Moving Averages (MA): Indicators used to smooth price fluctuations and identify trends. The most basic type of moving average, the simple moving average, is the average of the past x bars ending with the current bar;
Moving Average Convergence Divergence (MACD): Indicator that utilizes moving averages to identify possible trends and an oscillator to determine when a trend is overbought or oversold;
Bollinger Bands: Bands placed x standard deviations above and below a simple MA line;
Fibonacci Retracement Levels: Indicator used to identify potential levels of support and resistance;
Directional Movement Index (DMI): A positive line (+DI) measuring buying and a negative line (-DI) measuring selling pressure;
Relative Strength Index (RSI): Momentum oscillator that is plotted on a vertical scale from 0 to 100;
Stochastics: Momentum oscillator that measures momentum by comparing the recent close to the absolute price range (high of the range minus the low of the range) over a period of x bars;
Trendlines: Straight line on a chart that connects consecutive tops or consecutive bottoms of prices and is utilized to identify levels of support and resistance;
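Of the tools listed, the simple moving average is the easiest to compute directly; a minimal sketch (not any charting package's implementation):

```python
def sma(prices, window):
    """Simple moving average: mean of the last `window` prices at each bar."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

print(sma([1, 2, 3, 4, 5], 3))  # [2.0, 3.0, 4.0]
```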
Fundamental Analysis
Fundamental analysis is the evaluation of non-visual information to assess trading activity and make trading decisions. Whereas technical analysts utilize charts and mathematical indicators to quantify price activity, fundamental analysts utilize market news and market forecasts to qualify price activity.
There are numerous market events that move financial markets every week. Some affect every market instrument while others affect specific instruments. If the outcome of a market event has been fully discounted by the market, traders will not notice any discernible impact on their charts. If the outcome of a market event has not been fully discounted by the market, the result is either price appreciation or price depreciation and traders will see this activity on their charts.
Every week, there are fundamentally-important market events that are scheduled in every country at specific times. Similarly, there are fundamentally-important market events that may not be scheduled for specific times. Some countries (Germany, for instance) often do not schedule market events for specific times. The outcome of market events is sometimes leaked in advance in certain countries (Germany, for instance) for different reasons.
Market events include the release of economic data, speeches and testimony by government officials, interest rate decisions, and others.
Controlling Risk
Controlling risk is one of the most important ingredients of successful trading. While it is emotionally more appealing to focus on the upside of trading, every trader should know precisely how much he or she is willing to lose on each trade before cutting losses, and how much he or she is willing to lose in trading account before ceasing trading and re-evaluating.
Risk will essentially be controlled in two ways: by exiting losing trades before losses exceed your pre-determined maximum tolerance (or "cutting losses"), and by limiting the "leverage" or position size you trade for a given account size.
Cutting Losses
Too often, the beginning trader is overly concerned about incurring losing trades. The trader therefore lets losses mount, with the "hope" that the market will turn around and the loss will turn into a gain.
Almost all successful trading strategies include a disciplined procedure for cutting losses. When a trader is down on a position, many emotions often come into play, making it difficult to cut losses at the right level. The best practice is to decide where losses will be cut before a trade is even initiated. This will assure the trader of the maximum amount he or she can expect to lose on the trade.
The other key element of risk control is overall account risk. In other words, a trader should know, before starting to trade, how much of the trading account he or she is willing to lose before ceasing trading and re-evaluating the strategy. If you open an account with $2,000, are you willing to lose all $2,000? $1,000? As with risk control on individual trades, the most important discipline is to decide on a level and stick with it. Further information on the mechanics of limiting risk can be found in trading literature.
Trading Terminology
Traders often chat with one another about a variety of topics related to financial markets, giving their perspectives and discussing trading ideas and current moves on the markets. While communicating with each other they often use slang to express their thoughts in a shorter form. Some of the most popular slang is listed below.
Asset Allocation: Dividing instrument funds among markets to achieve diversification or maximum return.
Bearish: A market view that anticipates lower prices.
Bullish: A market view that anticipates higher prices.
Chartist: An individual who studies graphs and charts of historic data to find trends and predict trend reversals.
Counterparty: The other organization or party with whom trading is being transacted.
Day Trader: Speculator who takes positions in instruments which are liquidated prior to the close of the same trading day.
Economic Indicator: A statistic that indicates economic growth rates and trends, such as retail sales and employment.
Exotic: A less broadly traded market instrument.
Fast Market: Rapid movement in a market caused by strong interest by buyers and / or sellers.
Fed: The U.S. Federal Reserve. FDIC membership is compulsory for Federal Reserve members.
GDP: Total value of a country's output, income or expenditure produced within the country's physical borders.
Liquidity: The ability of a market to accept large transactions.
Resistance Level: A price which is likely to result in a rebound but if broken may result in a significant price movement.
Spread: The difference between the bid and ask price of a market instrument.
Support Level: A price level at which, analysis suggests, a depreciating price is likely to rebound.
Thin Market: A market in which trading volume is low and in which consequently spread is wide and the liquidity is low.
Volatility: A measure of the amount by which an asset price is expected to fluctuate over a given period.
Margin Requirements
Margin requirement is only applicable to margin trading. It allows you to hold a position much larger than your actual account value. Margin requirement or deposit is not a down payment on a purchase. Rather, the margin is a performance bond, or good faith deposit, to ensure against trading losses. Trading platforms often perform automatic pre-trade checks for margin availability and will execute the trade only if you have sufficient margin funds in your account.
In the event that funds in your account fall below the margin requirement, most trading systems will automatically close one or more open positions. This prevents your account from losing more than its available equity, even in a highly volatile, fast-moving market.
For example, you may be required to have only $1,000 in your account in order to trade position that would normally require $20,000. The $1,000 (5%) is referred to as "margin". This amount is essentially collateral to cover any losses that you might incur. Margin should reflect some rational assessment of potential risk in a position. For example, if a market instrument is very volatile, a higher margin requirement would normally be justified.
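The 5% example above, written as a hypothetical pre-trade margin check (the function names are illustrative, not any platform's API):

```python
def required_margin(position_value, margin_rate=0.05):
    """Good-faith deposit needed to hold a position at the given margin rate."""
    return position_value * margin_rate

def can_open(account_equity, position_value, margin_rate=0.05):
    """Pre-trade check: is there enough equity to cover the margin?"""
    return account_equity >= required_margin(position_value, margin_rate)

print(required_margin(20000))  # 1000.0 (the 5% example above)
print(can_open(1000, 20000))   # True
```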
Overnight Interest
Overnight interest is only applicable to margin trading. Trading on margin means that a trader borrows money to buy or sell a market instrument, using the actual account value as collateral. Traders generally use margin to increase their purchasing power so that they can hold more market instruments without fully paying for them.
Considering that trading on margin involves borrowing money, the trader has to pay interest on the loan. That interest is referred to as overnight interest and is generally charged based on the number of days a position on margin was held. Most trading systems charge the daily interest portion at the end of each trading session, and charge three times as much on Monday or on another preset weekday (if the market is closed on weekends).
In the case of Forex, overnight interest is calculated as the differential between the interest rates of the two currencies that make up the pair being traded. For example, if a trader wants to sell USD/JPY on margin, he or she will have to pay 4.0% of the amount borrowed per year to hold the position (the U.S. interest rate of 5.0% minus the Japanese interest rate of 1.0% gives the interest rate differential).
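The interest-rate-differential arithmetic from the USD/JPY example can be sketched as follows (broker markups and day-count conventions are ignored; the function name is illustrative):

```python
def overnight_rate_paid(base_rate, quote_rate, side):
    """Annualized rate paid (positive) or earned (negative) to hold a position.

    Selling the pair borrows the base currency (pay its rate) and holds the
    quote currency (earn its rate); buying is the reverse.
    """
    differential = base_rate - quote_rate
    return differential if side == "sell" else -differential

# Selling USD/JPY with U.S. rates at 5.0% and Japanese rates at 1.0%:
print(round(overnight_rate_paid(0.05, 0.01, "sell"), 4))  # 0.04, i.e. pay ~4% a year
```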
Before trading on margin, it is highly recommended to get information on the exact interest rates charged for borrowing money and on how they will affect the total return on your investments.

WRITING A BUSINESS PLAN

Creating a written plan for success in your own business can mean the difference between success and failure. Yes, start-up companies are often fluid and fast moving, but you should still have a good grasp of where you want your company to be in yearly increments, what milestones you want to meet, and how you are going to get there. A good business plan should also quantify the size of your market and the buying discriminators of your target clients. In short, it is an operating tool and a good reality check; it should clearly communicate your ideas to others and be the base upon which you build financing proposals. A business plan consists of three parts: concept, markets, and financing.

Address these questions in the concept portion of your plan:
● What do you want your business to do? Are you going to sell a product or a service?
● Why are you uniquely qualified to operate this business? What makes you different from the competition?
● Where are you going to be located? Do you want to be nationwide? If so, how are you going to get there?
● What are your staff capabilities going to be?
● How are you going to build support functions such as human resources, accounting and payroll, and sales?

Address these questions in the markets portion of your plan:
● Who are your customers? Where are they physically located?
● How big is the overall market? Is the market segmented? If so, how, and what is the value of each segment? How many customers in each segment do you need?
● What do you offer your potential customers that no one else does? Why should they buy from you?
● What is your plan for reaching potential buyers? How will they know about you?
● Who is the competition? What are their strategic advantages? How can you overcome them?

Address these questions in the financing portion of your plan:
● How much capital do you need to start your business? Quarterly? Annually?
● What is the burn rate of your capital?
● What are your projected sales targets? When do you expect to make a profit?
● How can you control your finances?
● How can you maintain cash flow and liquidity?

BECOME AN INTERNET ENTREPRENEUR

How to Piggyback your way to internet success
I have discovered that success on the Internet depends heavily on ideas, strategies and hard work. Have you been online for some time now, trying to create an alternative income for yourself without success? Do you want to discover a way to start your Internet business on a low budget? Piggyback marketing is the answer.

The term piggyback means riding on the back of something else; to piggyback is to ride on someone's back or shoulders. To piggyback on the Internet means you don't have to invent new products. All you need to do is locate successful systems, people, products, websites or strategies and find a way to leverage and profit from their effort.

Piggybacking is predicated on a simple idea: profit from someone else's success and popularity. Why do you think an upcoming author (no matter how creative he may be) always wants the foreword of his book to be written by a renowned bestselling author, or why an upcoming singer always wants to collaborate with a successful award-winning musician, even if it is just to feature the famous musician on his album?

Most companies also piggyback in order to promote their products: they use musicians, artistes or football stars for their promotions. You can do the same on the Internet. Below are three ways to piggyback yourself to instant Internet success.

Piggyback successful Internet gurus
You can piggyback successful experts and professionals on the Internet by leveraging their success. All you need to do is piggyback on their knowledge to create your own digital information product, and you are almost guaranteed Internet sales because their followers will buy your book.

Piggyback successful websites
Research successful and fast-rising websites on the Internet, such as social networking sites. You can leverage the success of MySpace and Facebook to generate constant traffic and sales for your website.

Piggyback hot-selling products
Research hot-selling products on the Internet and piggyback on any of the successful ones. Visit sites like ClickBank and Amazon and locate successful products. As long as a product is in e-book format, you can buy the book, follow the guide described in it, and use the popularity of the book and its resale rights to make money.
Anthony D’Angelo once said, “Don’t reinvent the wheel, just realign it.” That should help you become successful on the Internet.

BECOME AN INTERNET ENTREPRENEUR

How to Piggyback your way to internet success

I have discovered that success on the Internet is very dependent on ideas, strategies and hard work. Have you been online for some time now trying to create alternative income for yourself without success? Do you want to discover a way to start your Internet business on low budget? Piggyback marketing is the answer.


The term piggyback describes something riding on the back of something else; to piggyback is to ride on someone's back or shoulders. To piggyback on the Internet means you don't have to invent new products. All you need to do is locate successful systems, people, products, websites or strategies and find a way to leverage and profit from their effort.


Piggybacking is predicated on a simple idea: profit from someone else's success and popularity. Why do you think an up-and-coming author (no matter how creative he may be) always wants the foreword of his book written by a renowned bestselling author, or why an up-and-coming singer always wants to collaborate with a successful, award-winning musician, even if it is just to feature the famous musician on his album?


Most companies also piggyback to promote their products: they use musicians, artistes or football stars in their promotions. You can do the same on the Internet. Below are three ways to piggyback yourself to Internet success.


Piggyback successful Internet gurus
You can piggyback successful experts and professionals on the Internet by leveraging their success. All you need to do is build on their knowledge to create your own digital information product, and you stand an excellent chance of sales because their followers are already looking for that kind of material.
Piggyback successful websites
Research successful, fast-rising websites such as social networking sites. You can leverage the success of MySpace and Facebook to generate constant traffic and sales for your website.
Piggyback hot-selling products
Research hot-selling products on the Internet and piggyback on any successful one. Visit sites like ClickBank and Amazon and locate successful products. As long as a product is in e-book format, you can buy it, follow the guide it contains, and use the book's popularity and its resale rights to make money.
Anthony D'Angelo once said, "Don't reinvent the wheel, just realign it." That advice should help you become successful on the Internet.

Start Your Own Business

Start Your Own Business Now!

Are you thinking of starting your own business but do not have a specific one in mind? Then the following ideas might be useful to you:

- Start a virtual assistant business offering administrative, accounting, marketing and graphic design services to clients.
- Offer freelance advertising: marketing services for companies looking to use email marketing as the cornerstone of their marketing program.
- Manage saltwater aquariums for clients. Help every step of the way, from setting up the aquarium to cleaning it, maintaining it and feeding the fish.
- Start a home-based service specializing in the packaging, shipping and installation of valuable artwork, including paintings, sculpture, mobiles and ceramics, working for both commercial and residential clients.
- Start an upscale old-world gentleman's barber shop offering premium grooming services and products.
- Start a spa and beauty salon specializing in hair replacement services, as well as hair styling, massage and beauty products.
- Open a used bookstore offering a wide range of book, magazine and music selections.
- Offer a residential home inspection service for real estate agents, buyers, sellers and home owners.
- Start a gourmet coffee bar that boasts a fun, relaxed atmosphere for its customers.
- Provide call centre services: a start-up business supplying clients with top-quality call center services 24 hours a day.
- Offer catering services featuring creative, colorful and unusual kosher and traditional foods.
- Offer upscale child care services for kids aged 4 months to 5 years.
- Start a cleaning service offering extra care and attentive cleaning for upper-class homes.
- Open a coffee cafe selling specialty coffee drinks, food, religious books and music.
- Offer business solutions consulting: full-cycle, business-to-business planning consulting.
- Specialize in the installation, replacement and removal of septic tanks.
- Run an upscale convenience store with a small 20-seat cafe.
- Start a computer-based matchmaking service.
- Start an electronic engineering firm providing specialized components to the high-tech manufacturing market.
- Try a new concept in food preparation for busy families in Texas, run by a party planner and a personal chef.

Copied from Nairaland

Tuesday, December 09, 2008

HISTORICAL DEVELOPMENT OF THE INTERNET

What Is The Internet (And What Makes It Work) - December 1999
By Robert E. Kahn and Vinton G. Cerf
INTRODUCTION
THE EVOLUTION OF THE INTERNET
THE INTERNET ARCHITECTURE
GOVERNMENT’S HISTORICAL ROLE
A DEFINITION FOR THE INTERNET
WHO RUNS THE INTERNET
WHERE DO WE GO FROM HERE?
This paper was prepared by the authors at the request of the Internet Policy Institute (IPI), a non-profit organization based in Washington, D.C., for inclusion in their upcoming series of Internet related papers. It is a condensation of a longer paper in preparation by the authors on the same subject. Many topics of potential interest were not included in this condensed version because of size and subject matter constraints. Nevertheless, the reader should get a basic idea of the Internet, how it came to be, and perhaps even how to begin thinking about it from an architectural perspective. This will be especially important to policy makers who need to distinguish the Internet as a global information system apart from its underlying communications infrastructure.
INTRODUCTION
As we approach a new millennium, the Internet is revolutionizing our society, our economy and our technological systems. No one knows for certain how far, or in what direction, the Internet will evolve. But no one should underestimate its importance.
Over the past century and a half, important technological developments have created a global environment that is drawing the people of the world closer and closer together. During the industrial revolution, we learned to put motors to work to magnify human and animal muscle power. In the new Information Age, we are learning to magnify brainpower by putting the power of computation wherever we need it, and to provide information services on a global basis. Computer resources are infinitely flexible tools; networked together, they allow us to generate, exchange, share and manipulate information in an uncountable number of ways. The Internet, as an integrating force, has melded the technology of communications and computing to provide instant connectivity and global information services to all its users at very low cost.
Ten years ago, most of the world knew little or nothing about the Internet. It was the private enclave of computer scientists and researchers who used it to interact with colleagues in their respective disciplines. Today, the Internet’s magnitude is thousands of times what it was only a decade ago. It is estimated that about 60 million host computers on the Internet today serve about 200 million users in over 200 countries and territories. Today’s telephone system is still much larger: about 3 billion people around the world now talk on almost 950 million telephone lines (about 250 million of which are actually radio-based cell phones). But by the end of the year 2000, the authors estimate there will be at least 300 million Internet users. Also, the total numbers of host computers and users have been growing at about 33% every six months since 1988 – or roughly 80% per year. The telephone service, in comparison, grows an average of about 5-10% per year. That means if the Internet keeps growing steadily the way it has been growing over the past few years, it will be nearly as big as today’s telephone system by about 2006.
THE EVOLUTION OF THE INTERNET
The underpinnings of the Internet are formed by the global interconnection of hundreds of thousands of otherwise independent computers, communications entities and information systems. What makes this interconnection possible is the use of a set of communication standards, procedures and formats in common among the networks and the various devices and computational facilities connected to them. The procedures by which computers communicate with each other are called "protocols." While this infrastructure is steadily evolving to include new capabilities, the protocols initially used by the Internet are called the "TCP/IP" protocols, named after the two protocols that formed the principal basis for Internet operation.
On top of this infrastructure is an emerging set of architectural concepts and data structures for heterogeneous information systems that renders the Internet a truly global information system. In essence, the Internet is an architecture, although many people confuse it with its implementation. When the Internet is looked at as an architecture, it manifests two different abstractions. One abstraction deals with communications connectivity, packet delivery and a variety of end-end communication services. The other abstraction deals with the Internet as an information system, independent of its underlying communications infrastructure, which allows creation, storage and access to a wide range of information resources, including digital objects and related services at various levels of abstraction.
Interconnecting computers is an inherently digital problem. Computers process and exchange digital information, meaning that they use a discrete mathematical “binary” or “two-valued” language of 1s and 0s. For communication purposes, such information is mapped into continuous electrical or optical waveforms. The use of digital signaling allows accurate regeneration and reliable recovery of the underlying bits. We use the terms “computer,” “computer resources” and “computation” to mean not only traditional computers, but also devices that can be controlled digitally over a network, information resources such as mobile programs and other computational capabilities.
The telephone network started out with operators who manually connected telephones to each other through “patch panels” that accepted patch cords from each telephone line and electrically connected them to one another through the panel, which operated, in effect, like a switch. The result was called circuit switching, since at its conclusion, an electrical circuit was made between the calling telephone and the called telephone. Conventional circuit switching, which was developed to handle telephone calls, is inappropriate for connecting computers because it makes limited use of the telecommunication facilities and takes too long to set up connections. Although reliable enough for voice communication, the circuit-switched voice network had difficulty delivering digital information without errors.
For digital communications, packet switching is a better choice, because it is far better suited to the typically "burst" communication style of computers. Computers that communicate typically send out brief but intense bursts of data, then remain silent for a while before sending out the next burst. These bursts are communicated as packets, which are very much like electronic postcards. The postcards, in reality packets, are relayed from computer to computer until they reach their destination. The special computers that perform this forwarding function are called variously "packet switches" or "routers" and form the equivalent of many bucket brigades spanning continents and oceans, moving buckets of electronic postcards from one computer to another. Together these routers and the communication links between them form the underpinnings of the Internet.
Without packet switching, the Internet would not exist as we now know it. Going back to the postcard analogy, postcards can get lost. They can be delivered out of order, and they can be delayed by varying amounts. The same is true of Internet packets, which, on the Internet, can even be duplicated. The Internet Protocol is the postcard layer of the Internet. The next higher layer of protocol, TCP, takes care of re-sending the “postcards” to recover packets that might have been lost, and putting packets back in order if they have become disordered in transit.
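The reordering and de-duplication that TCP performs on these "postcards" can be sketched in a few lines. This is an illustrative toy, not real TCP: the byte offsets used as sequence numbers, the 8-byte fragments and the simulated duplicate are all invented for the example.

```python
import random

def reassemble(packets):
    """Rebuild the original byte stream from packets that may arrive
    out of order or duplicated, using TCP-style sequence numbers."""
    seen = {}
    for seq, data in packets:
        seen[seq] = data          # a duplicate simply overwrites identical data
    return b"".join(seen[seq] for seq in sorted(seen))

# Split a message into numbered "postcards", then scramble their order.
message = b"packets can arrive in any order"
packets = [(i, message[i:i + 8]) for i in range(0, len(message), 8)]
packets.append(packets[0])        # simulate a duplicated packet
random.shuffle(packets)           # simulate out-of-order delivery

assert reassemble(packets) == message
```

The key point the sketch captures is that the network is free to deliver packets in any order; the sequence numbers carried in each packet let the receiving host restore the original stream.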
Of course, packet switching is about a billion times faster than the postal service or a bucket brigade would be. It also has to operate over many different communications systems, or substrata. The authors designed the basic architecture to be so simple and undemanding that it could work with most communication services. Many organizations, including commercial ones, carried out research using the TCP/IP protocols in the 1970s. Email was steadily used over the nascent Internet during that time and to the present. It was not until 1994 that the general public began to be aware of the Internet by way of the World Wide Web application, particularly after Netscape Communications was formed and released its browser and associated server software.
Thus, the evolution of the Internet was based on two technologies and a research dream. The technologies were packet switching and computer technology, which, in turn, drew upon the underlying technologies of digital communications and semiconductors. The research dream was to share information and computational resources. But that is simply the technical side of the story. Equally important in many ways were the other dimensions that enabled the Internet to come into existence and flourish. This aspect of the story starts with cooperation and far-sightedness in the U.S. Government, which is often derided for lack of foresight but is a real hero in this story.
It leads on to the enthusiasm of private sector interests to build upon the government funded developments to expand the Internet and make it available to the general public. Perhaps most important, it is fueled by the development of the personal computer industry and significant changes in the telecommunications industry in the 1980s, not the least of which was the decision to open the long distance market to competition. The role of workstations, the Unix operating system and local area networking (especially the Ethernet) are themes contributing to the spread of Internet technology in the 1980s into the research and academic community from which the Internet industry eventually emerged.
Many individuals have been involved in the development and evolution of the Internet covering a span of almost four decades if one goes back to the early writings on the subject of computer networking by Kleinrock [i], Licklider [ii], Baran [iii], Roberts [iv], and Davies [v]. The ARPANET, described below, was the first wide-area computer network. The NSFNET, which followed more than a decade later under the leadership of Erich Bloch, Gordon Bell, Bill Wulf and Steve Wolff, brought computer networking into the mainstream of the research and education communities. It is not our intent here to attempt to attribute credit to all those whose contributions were central to this story, although we mention a few of the key players. A readable summary on the history of the Internet, written by many of the key players, may be found at www.isoc.org/internet/history. [vi]
From One Network to Many: The role of DARPA
Modern computer networking technologies emerged in the early 1970s. In 1969, The U.S. Defense Advanced Research Projects Agency (variously called ARPA and DARPA), an agency within the Department of Defense, commissioned a wide-area computer network called the ARPANET. This network made use of the new packet switching concepts for interconnecting computers and initially linked computers at universities and other research institutions in the United States and in selected NATO countries. At that time, the ARPANET was essentially the only realistic wide-area computer network in existence, with a base of several dozen organizations, perhaps twice that number of computers and numerous researchers at those sites. The program was led at DARPA by Larry Roberts. The packet switches were built by Bolt Beranek and Newman (BBN), a DARPA contractor. Others directly involved in the ARPANET activity included the authors, Len Kleinrock, Frank Heart, Howard Frank, Steve Crocker, Jon Postel and many many others in the ARPA research community.
Back then, the methods of internetworking (that is interconnecting computer networks) were primitive or non-existent. Two organizations could interwork technically by agreeing to use common equipment, but not every organization was interested in this approach. Absent that, there was jury-rigging, special case development and not much else. Each of these networks stood on its own with essentially no interaction between them – a far cry from today’s Internet.
In the early 1970s, ARPA began to explore two alternative applications of packet switching technology based on the use of synchronous satellites (SATNET) and ground-based packet radio (PRNET). The decision by Kahn to link these two networks and the ARPANET as separate and independent networks resulted in the creation of the Internet program and the subsequent collaboration with Cerf. These two systems differed in significant ways from the ARPANET so as to take advantage of the broadcast and wireless aspects of radio communications. The strategy that had been adopted for SATNET originally was to embed the SATNET software into an ARPANET packet switch, and interwork the two networks through memory-to-memory transfers within the packet switch. This approach, in place at the time, was to make SATNET an “embedded” network within the ARPANET; users of the network would not even need to know of its existence. The technical team at Bolt Beranek and Newman (BBN), having built the ARPANET switches and now building the SATNET software, could easily produce the necessary patches to glue the programs together in the same machine. Indeed, this is what they were under contract with DARPA to provide. By embedding each new network into the ARPANET, a seamless internetworked capability was possible, but with no realistic possibility of unleashing the entrepreneurial networking spirit that has manifest itself in modern day Internet developments. A new approach was in order.
The Packet Radio (PRNET) program had not yet gotten underway so there was ample opportunity to change the approach there. In addition, up until then, the SATNET program was only an equipment development activity. No commitments had been obtained for the use of actual satellites or ground stations to access them. Indeed, since there was no domestic satellite industry in the U.S. then, the only two viable alternatives were the use of Intelsat or U.S. military satellites. The time for a change in strategy, if it was to be made, was then.
THE INTERNET ARCHITECTURE
The authors created an architecture for interconnecting independent networks that could then be federated into a seamless whole without changing any of the underlying networks. This was the genesis of the Internet as we know it today.
In order to work properly, the architecture required a global addressing mechanism (or Internet address) to enable computers on any network to reference and communicate with computers on any other network in the federation. Internet addresses fill essentially the same role as telephone numbers do in telephone networks. The design of the Internet assumed first that the individual networks could not be changed to accommodate new architectural requirements; but this was largely a pragmatic assumption to facilitate progress. The networks also had varying degrees of reliability and speed. Host computers would have to be able to put disordered packets back into the correct order and discard duplicate packets that had been generated along the way. This was a major change from the virtual circuit-like service provided by ARPANET and by then contemporary commercial data networking services such as Tymnet and Telenet. In these networks, the underlying network took responsibility for keeping all information in order and for re-sending any data that might have been lost. The Internet design made the computers responsible for tending to these network problems.
A key architectural construct was the introduction of gateways (now called routers) between the networks to handle the disparities such as different data rates, packet sizes, error conditions, and interface specifications. The gateways would also check the destination Internet addresses of each packet to determine the gateway to which it should be forwarded. These functions would be combined with certain end-end functions to produce the reliable communication from source to destination. A draft paper by the authors describing this approach was given at a meeting of the International Network Working Group in 1973 in Sussex, England and the final paper was subsequently published by the Institute for Electrical and Electronics Engineers, the leading professional society for the electrical engineering profession, in its Transactions on Communications in May, 1974 [vii]. The paper described the TCP/IP protocol.
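The forwarding decision a gateway makes, checking each packet's destination Internet address against its table, can be sketched with Python's standard ipaddress module. The network prefixes and next-hop names below are invented for illustration; real routers hold far larger tables and use specialized longest-prefix lookup structures.

```python
import ipaddress

# A toy forwarding table: destination network -> next-hop gateway.
# All entries here are illustrative, not real Internet routes.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "gateway-A",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-B",   # more specific route
    ipaddress.ip_network("192.168.0.0/16"): "gateway-C",
}

def next_hop(dest):
    """Pick the most specific (longest-prefix) route matching dest."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in forwarding_table if addr in net]
    if not matches:
        return "default-gateway"
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

assert next_hop("10.1.2.3") == "gateway-B"    # the /16 beats the /8
assert next_hop("10.9.9.9") == "gateway-A"
assert next_hop("8.8.8.8") == "default-gateway"
```

Each gateway repeats this lookup independently, which is what lets packets hop across networks with very different data rates and packet sizes.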
DARPA contracted with Cerf's group at Stanford to carry out the initial detailed design of the TCP software and, shortly thereafter, with BBN and University College London to build independent implementations of the TCP protocol (as it was then called – it was later split into TCP and IP) for different machines. BBN also had a contract to build a prototype version of the gateway. These three sites collaborated in the development and testing of the initial protocols on different machines. Cerf, then a professor at Stanford, provided the day-to-day leadership in the initial TCP software design and testing. BBN deployed the gateways between the ARPANET and the PRNET and also with SATNET. During this period, under Kahn's overall leadership at DARPA, the initial feasibility of the Internet Architecture was demonstrated.
The TCP/IP protocol suite was developed and refined over a period of four more years and, in 1980, it was adopted as a standard by the U.S. Department of Defense. On January 1, 1983 the ARPANET converted to TCP/IP as its standard host protocol. Gateways (or routers) were used to pass packets to and from host computers on “local area networks.” Refinement and extension of these protocols and many others associated with them continues to this day by way of the Internet Engineering Task Force [viii].
GOVERNMENT’S HISTORICAL ROLE
Other political and social dimensions that enabled the Internet to come into existence and flourish are just as important as the technology upon which it is based. The federal government played a large role in creating the Internet, as did the private sector interests that made it available to the general public. The development of the personal computer industry and significant changes in the telecommunications industry also contributed to the Internet’s growth in the 1980s. In particular, the development of workstations, the Unix operating system, and local area networking (especially the Ethernet) contributed to the spread of the Internet within the research community from which the Internet industry eventually emerged.
The National Science Foundation and others
In the late 1970s, the National Science Foundation (NSF) became interested in the impact of the ARPANET on computer science and engineering. NSF funded the Computer Science Network (CSNET), which was a logical design for interconnecting universities that were already on the ARPANET and those that were not. Telenet was used for sites not connected directly to the ARPANET and a gateway was provided to link the two. Independent of NSF, another initiative called BITNET ("Because it's there" Net) [ix] provided campus computers with email connections to the growing ARPANET. Finally, AT&T Bell Laboratories development of the Unix operating system led to the creation of a grass-roots network called USENET [x], which rapidly became home to thousands of “newsgroups” where Internet users discussed everything from aerobics to politics and zoology.
In the mid 1980s, NSF decided to build a network called NSFNET to provide better computer connections for the science and education communities. The NSFNET made possible the involvement of a large segment of the education and research community in the use of high speed networks. A consortium consisting of MERIT (a University of Michigan non-profit network services organization), IBM and MCI Communications won a 1987 competition for the contract to handle the network’s construction. Within two years, the newly expanded NSFNET had become the primary backbone component of the Internet, augmenting the ARPANET until it was decommissioned in 1990. At about the same time, other parts of the U.S. government had moved ahead to build and deploy networks of their own, including NASA and the Department of Energy. While these groups originally adopted independent approaches for their networks, they eventually decided to support the use of TCP/IP.
The developers of the NSFNET, led by Steve Wolff who had the direct responsibility for the NSFNET program, also decided to create intermediate level networks to serve research and education institutions and, more importantly, to allow networks that were not commissioned by the U.S. government to connect to the NSFNET. This strategy reduced the overall load on the backbone network operators and spawned a new industry: Internet Service Provision. Nearly a dozen intermediate level networks were created, most with NSF support, [xi] some, such as UUNET, with Defense support, and some without any government support. The NSF contribution to the evolution of the Internet was essential in two respects. It opened the Internet to many new users and, drawing on the properties of TCP/IP, structured it so as to allow many more network service providers to participate.
For a long time, the federal government did not allow organizations to connect to the Internet to carry out commercial activities. By 1988, it was becoming apparent, however, that the Internet's growth and use in the business sector might be seriously inhibited by this restriction. That year, CNRI requested permission from the Federal Networking Council to interconnect the commercial MCI Mail electronic mail system to the Internet as part of a general electronic mail interconnection experiment. Permission was given and the interconnection was completed by CNRI, under Cerf’s direction, in the summer of 1989. Shortly thereafter, two of the then non-profit Internet Service Providers (UUNET [xii] and NYSERNET) produced new for-profit companies (UUNET and PSINET [xiii] respectively). In 1991, they were interconnected with each other and CERFNET [xiv]. Commercial pressure to alleviate restrictions on interconnections with the NSFNET began to mount.
In response, Congress passed legislation allowing NSF to open the NSFNET to commercial usage. Shortly thereafter, NSF determined that its support for NSFNET might not be required in the longer term and, in April 1995, NSF ceased its support for the NSFNET. By that time, many commercial networks were in operation and provided alternatives to NSFNET for national level network services. Today, approximately 10,000 Internet Service Providers (ISPs) are in operation. Roughly half the world's ISPs currently are based in North America and the rest are distributed throughout the world.
A DEFINITION FOR THE INTERNET
The authors feel strongly that efforts should be made at top policy levels to define the Internet. It is tempting to view it merely as a collection of networks and computers. However, as indicated earlier, the authors designed the Internet as an architecture that provided for both communications capabilities and information services. Governments are passing legislation pertaining to the Internet without ever specifying to what the law applies and to what it does not apply. In U.S. telecommunications law, distinctions are made between cable, satellite broadcast and common carrier services. These and many other distinctions all blur in the backdrop of the Internet. Should broadcast stations be viewed as Internet Service Providers when their programming is made available in the Internet environment? Is use of cellular telephones considered part of the Internet and if so under what conditions? This area is badly in need of clarification.
The authors believe the best definition currently in existence is that approved by the Federal Networking Council in 1995, http://www.fnc.gov/ and which is reproduced in the footnote below [xv] for ready reference. Of particular note is that it defines the Internet as a global information system, and included in the definition, is not only the underlying communications technology, but also higher-level protocols and end-user applications, the associated data structures and the means by which the information may be processed, manifested, or otherwise used. In many ways, this definition supports the characterization of the Internet as an “information superhighway.” Like the federal highway system, whose underpinnings include not only concrete lanes and on/off ramps, but also a supporting infrastructure both physical and informational, including signs, maps, regulations, and such related services and products as filling stations and gasoline, the Internet has its own layers of ingress and egress, and its own multi-tiered levels of service.
The FNC definition makes it clear that the Internet is a dynamic organism that can be looked at in myriad ways. It is a framework for numerous services and a medium for creativity and innovation. Most importantly, it can be expected to evolve.
WHO RUNS THE INTERNET
The Domain Name System
The Internet evolved as an experimental system during the 1970s and early 1980s. It then flourished after the TCP/IP protocols were made mandatory on the ARPANET and other networks in January 1983; these protocols thus became the standard for many other networks as well. Indeed, the Internet grew so rapidly that the existing mechanisms for associating the names of host computers (e.g. UCLA, USC-ISI) to Internet addresses (known as IP addresses) were about to be stretched beyond acceptable engineering limits. Most of the applications in the Internet referred to the target computers by name. These names had to be translated into Internet addresses before the lower level protocols could be activated to support the application. For a time, a group at SRI International in Menlo Park, CA, called the Network Information Center (NIC), maintained a simple, machine-readable list of names and associated Internet addresses which was made available on the net. Hosts on the Internet would simply copy this list, usually daily, so as to maintain a local copy of the table. This list was called the "host.txt" file (since it was simply a text file). The list served the function in the Internet that directory services (e.g. 411 or 703-555-1212) do in the US telephone system - the translation of a name into an address.
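The flat-table scheme the NIC maintained can be emulated in a few lines: one shared text file mapping names to addresses, parsed by every host into a local table. The host names echo the paper's examples, but the addresses below are invented.

```python
# A tiny emulation of the NIC's host-table approach: one flat file,
# copied by every host, mapping names to addresses (addresses invented).
hosts_txt = """
UCLA      10.0.0.1
USC-ISI   10.0.0.2
BBN       10.0.0.3
"""

table = {}
for line in hosts_txt.strip().splitlines():
    name, address = line.split()
    table[name.upper()] = address     # host names were case-insensitive

def resolve(name):
    return table.get(name.upper())    # None models an unknown host

assert resolve("ucla") == "10.0.0.1"
assert resolve("NOWHERE") is None
```

The sketch also makes the scaling problem obvious: every host must re-copy the entire table whenever any entry anywhere changes, which is exactly why the scheme broke down as the Internet grew.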
As the Internet grew, it became harder and harder for the NIC to keep the list current. Anticipating that this problem would only get worse as the network expanded, researchers at USC Information Sciences Institute launched an effort to design a more distributed way of providing this same information. The end result was the Domain Name System (DNS) [xvi] which allowed hundreds of thousands of "name servers" to maintain small portions of a global database of information associating IP addresses with the names of computers on the Internet.
The naming structure was hierarchical in character. For example, all host computers associated with educational institutions would have names like "stanford.edu" or "ucla.edu". Specific hosts would have names like "cs.ucla.edu" to refer to a computer in the computer science department of UCLA, for example. A special set of computers called "root servers" maintained information about the names and addresses of other servers that contained more detailed name/address associations. The designers of the DNS also developed seven generic "top level" domains, as follows:
Education - EDU
Government - GOV
Military - MIL
International - INT
Network - NET
(non-profit) Organization - ORG
Commercial - COM
Under this system, for example, the host name "UCLA" became "UCLA.EDU" because it was operated by an educational institution, while the host computer for "BBN" became "BBN.COM" because it was a commercial organization. Top-level domain names also were created for every country: United Kingdom names would end in “.UK,” while the ending “.FR” was created for names in France.
The Domain Name System (DNS) was and continues to be a major element of the Internet architecture, which contributes to its scalability. It also contributes to controversy over trademarks and general rules for the creation and use of domain names, creation of new top-level domains and the like. At the same time, other resolution schemes exist as well. One of the authors (Kahn) has been involved in the development of a different kind of standard identification and resolution scheme [xvii] that, for example, is being used as the base technology by book publishers to identify books on the Internet by adapting various identification schemes for use in the Internet environment. For example, International Standard Book Numbers (ISBNs) can be used as part of the identifiers. The identifiers then resolve to state information about the referenced books, such as location information (e.g. multiple sites) on the Internet that is used to access the books or to order them. These developments are taking place in parallel with the more traditional means of managing Internet resources. They offer an alternative to the existing Domain Name System with enhanced functionality.
The growth of Web servers and users of the Web has been remarkable, but some people are confused about the relationship between the World Wide Web and the Internet. The Internet is the global information system that includes communication capabilities and many high level applications. The Web is one such application. The existing connectivity of the Internet made it possible for users and servers all over the world to participate in this activity. Electronic mail is another important application. As of today, over 60 million computers take part in the Internet, and about 3.6 million web sites are estimated to be accessible on the net. Virtually every user of the net has access to electronic mail and web browsing capability. Email remains a critically important application for most users of the Internet, and these two functions largely dominate the use of the Internet for most users.
The Internet Standards Process
Internet standards were once the output of research activity sponsored by DARPA. The principal investigators on the internetting research effort essentially determined what technical features of the TCP/IP protocols would become common. The initial work in this area started with the joint effort of the two authors, continued in Cerf's group at Stanford, and soon thereafter was joined by engineers and scientists at BBN and University College London. This informal arrangement has changed with time and details can be found elsewhere [xviii]. At present, standards efforts for the Internet are carried out primarily under the auspices of the Internet Society (ISOC). The Internet Engineering Task Force (IETF) operates under the leadership of its Internet Engineering Steering Group (IESG), which is populated by appointees approved by the Internet Architecture Board (IAB) which is, itself, now part of the Internet Society.
The IETF comprises over one hundred working groups categorized and managed by Area Directors specializing in specific categories.
There are other bodies with considerable interest in Internet standards or in standards that must interwork with the Internet. Examples include the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), the Institute of Electrical and Electronics Engineers (IEEE) local area network standards group (IEEE 802), the International Organization for Standardization (ISO), the American National Standards Institute (ANSI), the World Wide Web Consortium (W3C), and many others.
As Internet access and services are provided by existing media such as telephone, cable and broadcast, interactions with standards bodies and legal structures formed to deal with these media will become an increasingly complex matter. The intertwining of interests is simultaneously fascinating and complicated, and has increased the need for thoughtful cooperation among many interested parties.
Managing the Internet
Perhaps the least understood aspect of the Internet is its management. In recent years, this has become a subject of intense commercial and international interest, involving multiple governments and commercial organizations, and recently congressional hearings. At issue is how the Internet will be managed in the future, and, in the process, what oversight mechanisms will insure that the public interest is adequately served.
In the 1970s, managing the Internet was easy. Since few people knew about the Internet, decisions about almost everything of real policy concern were made in the offices of DARPA. It became clear in the late 1970s, however, that more community involvement in the decision-making processes was essential. In 1979, DARPA formed the Internet Configuration Control Board (ICCB) to insure that knowledgeable members of the technical community discussed critical issues, educated people outside of DARPA about the issues, and helped others to implement the TCP/IP protocols and gateway functions. At the time, there were no companies that offered turnkey solutions to getting on the Internet. It would be another five years or so before companies like Cisco Systems were formed, and while there were no PCs yet, the only workstations available were specially built and their software was not generally configured for use with external networks; they were certainly considered expensive at the time.
In 1983, the small group of roughly twelve ICCB members was reconstituted (with some substitutions) as the Internet Activities Board (IAB), and about ten “Task Forces” were established under it to address issues in specific technical areas. The attendees at Internet Working Group meetings were invited to become members of as many of the task forces as they wished.
The management of the Domain Name System offers a kind of microcosm of issues now frequently associated with overall management of the Internet's operation and evolution. Someone had to take responsibility for overseeing the system's general operation. In particular, top-level domain names had to be selected, along with persons or organizations to manage each of them. Rules for the allocation of Internet addresses had to be established. DARPA had previously asked the late Jon Postel of the USC Information Sciences Institute to take on numerous functions related to administration of names, addresses and protocol related matters. With time, Postel assumed further responsibilities in this general area on his own, and DARPA, which was supporting the effort, gave its tacit approval. This activity was generally referred to as the Internet Assigned Numbers Authority (IANA) [xix]. In time, Postel became the arbitrator of all controversial matters concerning names and addresses until his untimely death in October 1998.
It is helpful to consider separately the problem of managing the domain name space and the Internet address space. These two vital elements of the Internet architecture have rather different characteristics that color the management problems they generate. Domain names have semantics that numbers may not imply; and thus a means of determining who can use what names is needed. As a result, speculators on Internet names often claim large numbers of them without intent to use them other than to resell them later. Alternate resolution mechanisms [xx], if widely adopted, could significantly change the landscape here.
The rapid growth of the Internet has triggered the design of a new and larger address space (the so-called IP version 6 address space); today's Internet uses IP version 4 [xxi]. However, little momentum has yet developed to deploy IPv6 widely. Despite concerns to the contrary, the IPv4 address space will not be depleted for some time. Further, the use of Dynamic Host Configuration Protocol (DHCP) to dynamically assign IP addresses has also cut down on demand for dedicated IP addresses. Nevertheless, there is growing recognition in the Internet technical community that expansion of the address space is needed, as is the development of transition schemes that allow interoperation between IPv4 and IPv6 while migrating to IPv6.
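The difference in scale between the two address spaces is easy to see in a short sketch (Python's standard ipaddress module, used here purely as an illustration; both addresses below are from the reserved documentation ranges):

```python
import ipaddress

# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
v4 = ipaddress.ip_address("192.0.2.1")    # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")  # documentation-range IPv6 address

print(v4.version, 2 ** 32)   # IPv4: roughly 4.3 billion possible addresses
print(v6.version, 2 ** 128)  # IPv6: roughly 3.4 x 10^38 possible addresses
```

The 2^128 figure is why the IPv6 space is considered effectively inexhaustible, even allowing for the sparse, hierarchical allocation that routing requires.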
In 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) was formed as a private sector, non-profit, organization to oversee the orderly progression in use of Internet names and numbers, as well as certain protocol related matters that required oversight. The birth of this organization, which was selected by the Department of Commerce for this function, has been difficult, embodying as it does many of the inherent conflicts in resolving discrepancies in this arena. However, there is a clear need for an oversight mechanism for Internet domain names and numbers, separate from their day-to-day management.
Many questions about Internet management remain. They may also prove difficult to resolve quickly. Of specific concern is what role the U.S. government and indeed governments around the world need to play in its continuing operation and evolution. This is clearly a subject for another time.
WHERE DO WE GO FROM HERE?
As we struggle to envision what may be commonplace on the Internet in a decade, we are confronted with the challenge of imagining new ways of doing old things, as well as trying to think of new things that will be enabled by the Internet, and by the technologies of the future.
In the next ten years, the Internet is expected to be enormously bigger than it is today. It will be more pervasive than the older technologies and penetrate more homes than television and radio programming. Computer chips are now being built that implement the TCP/IP protocols and recently a university announced a two-chip web server. Chips like this are extremely small and cost very little. And they can be put into anything. Many of the devices connected to the Internet will be Internet-enabled appliances (cell phones, fax machines, household appliances, hand-held organizers, digital cameras, etc.) as well as traditional laptop and desktop computers. Information access will be directed to digital objects of all kinds and services that help to create them or make use of them [xxii].
Very high-speed networking has also been developing at a steady pace. From the original 50,000 bit-per-second ARPANET, to the 155 million bit-per-second NSFNET, to today’s 2.4 – 9.6 billion bit-per-second commercial networks, we routinely see commercial offerings providing Internet access at increasing speeds. Experimentation with optical technology using wavelength division multiplexing is underway in many quarters; and testbeds operating at speeds of terabits per second (that is trillions of bits-per-second) are being constructed.
Some of these ultra-high speed systems may one day carry data from very far away places, like Mars. Already, design of the interplanetary Internet as a logical extension of the current Internet is part of the NASA Mars mission program now underway at the Jet Propulsion Laboratory in Pasadena, California [xxiii]. By 2008, we should have a well-functioning Earth-Mars network that serves as a nascent backbone of the interplanetary Internet.
Wireless communication has exploded in recent years with the rapid growth of cellular telephony. Increasingly, however, Internet access is becoming available over these networks. Alternate forms for wireless communication, including both ground radio and satellite are in development and use now, and the prospects for increasing data rates look promising. Recent developments in high data rate systems appear likely to offer ubiquitous wireless data services in the 1-2 Mbps range. It is even possible that wireless Internet access may one day be the primary way most people get access to the Internet.
A developing trend that seems likely to continue in the future is an information centric view of the Internet that can live in parallel with the current communications centric view. Many of the concerns about intellectual property protection are difficult to deal with, not because of fundamental limits in the law, but rather by technological and perhaps management limitations in knowing how best to deal with these issues. A digital object infrastructure that makes information objects “first-class citizens” in the packetized “primordial soup” of the Internet is one step in that direction. In this scheme, the digital object is the conceptual elemental unit in the information view; it is interpretable (in principle) by all participating information systems. The digital object is thus an abstraction that may be implemented in various ways by different systems. It is a critical building block for interoperable and heterogeneous information systems. Each digital object has a unique and, if desired, persistent identifier that will allow it to be managed over time. This approach is highly relevant to the development of third-party value added information services in the Internet environment.
Of special concern to the authors is the need to understand and manage the downside potential for network disruptions, as well as cybercrime and terrorism. The ability to deal with problems in this diverse arena is at the forefront of maintaining a viable global information infrastructure. “IOPS.org” [xxiv] – a private-sector group dedicated to improving coordination among ISPs – deals with issues of ISP outages, disruptions, other trouble conditions, as well as related matters, by discussion, interaction and coordination between and among the principal players. Business, the academic community and government all need as much assurance as possible that they can conduct their activities on the Internet with high confidence that security and reliability will be present. The participation of many organizations around the world, including especially governments and the relevant service providers, will be essential here.
The success of the Internet in society as a whole will depend less on technology than on the larger economic and social concerns that are at the heart of every major advance. The Internet is no exception, except that its potential and reach are perhaps as broad as any that have come before.
[i] Leonard Kleinrock's dissertation thesis at MIT was written during 1961: "Information Flow in Large Communication Nets", RLE Quarterly Progress Report, July 1961 and published as a book "Communication Nets: Stochastic Message Flow and Delay", New York: McGraw Hill, 1964. This was one of the earliest mathematical analyses of what we now call packet switching networks.
[ii] J.C.R. Licklider & W. Clark, "On-Line Man Computer Communication", August 1962. Licklider made tongue-in-cheek references to an "inter-galactic network" but in truth, his vision of what might be possible was prophetic.
[iii] [BARAN 64] Baran, P., et al, "On Distributed Communications", Volumes I-XI, RAND Corporation Research Documents, August 1964. Paul Baran explored the use of digital "message block" switching to support highly resilient, survivable voice communications for military command and control. This work was undertaken at RAND Corporation for the US Air Force beginning in 1962.
[iv] L. Roberts & T. Merrill, "Toward a Cooperative Network of Time-Shared Computers", Fall AFIPS Conf., Oct. 1966.
[v] Davies, D.W., K.A. Bartlett, R.A. Scantlebury, and P. T. Wilkinson. 1967. "A Digital Communication Network for Computers Giving Rapid Response at Remote Terminals," Proceedings of the ACM Symposium on Operating System Principles. Association for Computing Machinery, New York, 1967. Donald W. Davies and his colleagues coined the term "packet" and built one node of a packet switching network at the National Physical Laboratory in the UK.
[vi] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, Stephen Wolff, "A Brief History of the Internet," www.isoc.org/internet/history/brief.html and see below for timeline
[vii] Vinton G. Cerf and Robert E. Kahn, "A Protocol for Packet Network Intercommunication," IEEE Transactions on Communications, Vol. COM-22, May 1974.
[viii] The Internet Engineering Task Force (IETF) is an activity taking place under the auspices of the Internet Society (www.isoc.org). See http://www.ietf.org/
[ix] From the BITNET charter:
BITNET, which originated in 1981 with a link between CUNY and Yale, grew rapidly during the next few years, with management and systems services provided on a volunteer basis largely from CUNY and Yale. In 1984, the BITNET Directors established an Executive Committee to provide policy guidance.
(see http://www.geocities.com/SiliconValley/2260/bitchart.html)
[x] Usenet came into being in late 1979, shortly after the release of V7 Unix with UUCP. Two Duke University grad students in North Carolina, Tom Truscott and Jim Ellis, thought of hooking computers together to exchange information with the Unix community. Steve Bellovin, a grad student at the University of North Carolina, put together the first version of the news software using shell scripts and installed it on the first two sites: "unc" and "duke." At the beginning of 1980 the network consisted of those two sites and "phs" (another machine at Duke), and was described at the January Usenix conference. Steve Bellovin later rewrote the scripts into C programs, but they were never released beyond "unc" and "duke." Shortly thereafter, Steve Daniel did another implementation in C for public distribution. Tom Truscott made further modifications, and this became the "A" news release.
(see http://www.ou.edu/research/electron/internet/use-soft.htm)
[xi] A few examples include the New York State Education and Research Network (NYSERNET), New England Academic and Research Network (NEARNET), the California Education and Research Foundation Network (CERFNET), Northwest Net (NWNET), Southern Universities Research and Academic Net (SURANET) and so on. UUNET was formed as a non-profit by a grant from the UNIX Users Group (USENIX).
[xii] UUNET called its Internet service ALTERNET. UUNET was acquired by Metropolitan Fiber Networks (MFS) in 1995 which was itself acquired by Worldcom in 1996. Worldcom later merged with MCI to form MCI WorldCom in 1998. In that same year, Worldcom also acquired the ANS backbone network from AOL, which had purchased it from the non-profit ANS earlier.
[xiii] PSINET was a for-profit spun out of the NYSERNET in 1990.
[xiv] CERFNET was started by General Atomics as one of the NSF-sponsored intermediate level networks. It was coincidental that the network was called "CERF"Net - originally they had planned to call themselves SURFNET, since General Atomics was located in San Diego, California, but this name was already taken by a Dutch Research organization called SURF, so the General Atomics founders settled for California Education and Research Foundation Network. Cerf participated in the launch of the network in July 1989 by breaking a fake bottle of champagne filled with glitter over a Cisco Systems router.
[xv] October 24, 1995, Resolution of the U.S. Federal Networking Council
RESOLUTION:
"The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet".
"Internet" refers to the global information system that --(i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons;(ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and(iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein."
[xvi] The Domain Name System was designed by Paul Mockapetris and initially documented in November 1983. Mockapetris, P., "Domain names - Concepts and Facilities", RFC 882, USC/Information Sciences Institute, November 1983 and Mockapetris, P.,"Domain names - Implementation and Specification", RFC 883, USC/Information Sciences Institute, November 1983. (see also http://soa.granitecanyon.com/faq.shtml)
[xvii] The Handle System - see http://www.handle.net/
[xviii] See Leiner, et al, "A Brief History…", www.isoc.org/internet/history/brief.html
[xix] See http://www.iana.org/ for more details. See also http://www.icann.org/.
[xx] see http://www.doi.org/
[xxi] Version 5 of the Internet Protocol was an experiment which has since been terminated
[xxii] see A Framework for Distributed Digital Object Services, Robert E Kahn and Robert Wilensky at www.cnri.reston.va.us/cstr/arch/k-w.html
[xxiii] The interplanetary Internet effort is funded in part by DARPA and has support from NASA. For more information, see http://www.ipnsig.org/
[xxiv] See http://www.iops.org/ for more information on this group dedicated to improving operational coordination among Internet Service Providers.

HOW TO UPLOAD WEBSITE

Uploading Your Website
After you create your website, uploading (or publishing) is the process of moving the pages and images that make up your website to web space on a server that is connected to the internet. The process is very simple. You can use the FTP tool located on a console provided by your web host, or you can make use of the simple free FTP tool that we provide on our website.
The process will be much faster using our tool, since it doesn't require going to your web space, logging on, and transferring pages via the console. With our FTP tool, logging on is automated once it is set up.
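As a rough sketch of what any FTP upload tool does under the hood (Python's standard ftplib; the host name, credentials, and folder below are placeholders, not real values, and the sketch assumes the remote directory structure already exists):

```python
import ftplib
import pathlib

def collect_files(root):
    """List every file under the local site folder, relative to it."""
    root = pathlib.Path(root)
    return sorted(p.relative_to(root) for p in root.rglob("*") if p.is_file())

def upload_site(host, user, password, root):
    """Log in to the web host and transfer each page and image.
    A fuller tool would also create remote subdirectories as needed."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user, password)
        for rel in collect_files(root):
            with open(pathlib.Path(root) / rel, "rb") as f:
                ftp.storbinary(f"STOR {rel.as_posix()}", f)

# Placeholder values - substitute the details your web host gives you:
# upload_site("ftp.example.com", "username", "password", "my_site")
```

A dedicated FTP tool simply remembers these login details for you, which is why the process feels faster once it is set up.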

Finding a Web Host
If this is your first website you may want to read our web hosting FAQ. You'll find information about how much web space you need and what features to look for.
After you study the FAQ you can visit our Web Hosting Comparison Chart where you'll find some of the best prices for web hosting on the internet.
To download our free FTP tool visit the tutorial which shows how to set it up and begin uploading your pages.

Registering a Domain Name
The decision of whether or not to purchase a registered domain name is a matter of debate.
Some authorities say it can improve your search engine rankings if the domain name includes keywords. Other experts will tell you this is a myth. Check the search engine results pages (SERPs) and see what you think.
One thing a registered domain name will do is give your website a more professional appearance.
Most domain name sellers provide a service called Free Parking, which allows you to temporarily use their default DNS information until you acquire your web space.
If the web hosting package you choose comes with a free domain name, make sure you have the option of purchasing the domain name if you should leave their service.

QUALITIES OF A GOOD WEBSITE

Websites today have become an essential medium for companies, however small or big, to showcase their products and services and increase their brand visibility. A good website not only appeals to visitors but also shapes what they think of your organization.

Sometimes people decide to create websites all by themselves. But creating websites, or having them developed by people with limited technological expertise and experience, has its own risks. An economical, cost-effective website may save money, but it may prove unproductive in the long run. Such a website may be low in quality and fail to appeal to the target audience.

So whenever you plan to get a website for yourself, consult a professional website designer or design company. Ensure that your website has the following features to enhance its productivity:

1) Template - A customized template that matches your organization's colors and values. Don't use a free template, because these templates are repeated on many websites and fail to give your company the required unique identity.

2) Content - Content is the lifeblood of any website, so take your time to give this section the best attention. Ensure the content relates to the theme of the website, and provide fresh content for each and every page. It will help attract new customers.

3) Meta tags & title - Each and every page should have a title and be supported by appropriate meta tags. This will improve your ranking in search engines like Google, Yahoo & MSN.

4) Internal navigation - Ensure that your website is logically linked. Try to keep internal pages a maximum of three clicks away from the index page.

5) Broken links - Cross-check all the active links on the website. Broken links have a negative impact on your website: a broken link can take visitors away from your site and prevent them from seeing the core content that could convert into sales.
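Cross-checking links can be partly automated. This sketch (Python's standard html.parser module; the sample page is invented for illustration) pulls every link out of a page so each one can then be tested by hand or by a follow-up request:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag so the links can be checked."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A made-up page with one internal link and one external link.
page = '<p><a href="index.html">Home</a> <a href="http://example.com/">Out</a></p>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['index.html', 'http://example.com/']
```

Feeding each of your site's pages through an extractor like this gives a complete list of links to verify before you publish.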

Wednesday, December 03, 2008

HOW TO DESIGN A WEBSITE


Website designing is all about presenting graphics and text in a unique environment called the internet.

There are many applications one can use for website designing; from these, we are going to use one of the most popular, found on almost every computer system, for today's training.

Approach:
There are just two approaches to website designing, which are:
1. WYSIWYG: what you see is what you get.
2. HTML coding: Hypertext Markup Language, where you write markup directly and the browser translates it into a viewable, readable format.

Planning: have an idea of what you want to put on your web pages and the types of information you wish to have on your site.

Structure: you need to have a detailed map of your site.

Domain name: the unique name that identifies an internet site.

Web hosting: providing space on internet servers for the storage of World Wide Web sites which can be accessed by others through the network. The World Wide Web is a massive collection of web sites, all hosted on computers (called web servers) all over the world. The web server (computer) where your web site's html files, graphics, etc. reside is known as the web host.
A web hosting service is a type of internet hosting service that allows individuals and organizations to make their own websites accessible via the World Wide Web. Web hosts are companies that provide space on a server they own for use by their clients, as well as providing internet connectivity, typically in a data center. Web hosts can also provide data center space and internet connectivity for servers they do not own, located in their data center; this is called colocation.

Link: the process of connecting one page to another. There are two kinds of links:
1. Internal links: links from one page of your site to another page on the same site.
2. External links: links that take the visitor to another website.
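Putting the two kinds of links together, here is a minimal sketch of a two-page site (Python writing plain HTML files; the page names and folder are invented for illustration):

```python
import pathlib

# An internal link points to another page on the same site;
# an external link points to a different website entirely.
index = """<html><body>
<h1>My Site</h1>
<a href="about.html">About us</a>          <!-- internal link -->
<a href="http://example.com/">A friend</a> <!-- external link -->
</body></html>"""

about = """<html><body>
<h1>About</h1>
<a href="index.html">Home</a>              <!-- internal link back -->
</body></html>"""

# Write both pages into a local folder, ready to upload to your web host.
site = pathlib.Path("my_site")
site.mkdir(exist_ok=True)
(site / "index.html").write_text(index)
(site / "about.html").write_text(about)
```

Once the folder is uploaded to your web space, the internal links keep visitors moving between your own pages, while the external link sends them elsewhere.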


FOR FURTHER INFORMATION PLEASE CONTACT US THROUGH OUR EMAIL
infowealthinvestment@gmail.com or onyewenu@gmail.com, +2348055157644

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++