I don't quite know where to begin. There are many reasons for this search engine, but the main one is the search for objective knowledge about anything on the Internet. Kdich attempts to answer objective questions about anything on the Internet; it cannot deal with the subjective judgments that are also important to the Internet. Another reason is that too much talent in this world goes to waste. Let me give you an example. If I search for ratios, or for how to solve a ratio like 30:58:21, there is no proper explanation. The concept of a ratio has still never been explained properly to me. I would say a good mathematician is one who can solve a problem in more than one way, possibly using more than one concept. My mathematics is not good, and even when a highly educated teacher explains it, the concept does not get through to me. At this point I am fed up. There has to be a way to achieve better search results.

We all know that Google is the mother of all search engines. I feel its crawler can crawl the full text of any page, or all pages, of the Internet. If I am correct, the World Wide Web consists of roughly 60+ billion web pages. If someone asked me whether we can present search results that exactly match what the user is searching for, I would say yes, but you have to change the way people search. This cannot be done forcefully; it has to happen smoothly. We have to show that we have a type of search engine that caters to objective searches. If we don't, users will keep coding websites for the current type of search engine. My idea will help with this.

How many times have you bought something and then seen, heard, or realized later that there was something better out there, at a better price or quality, with only a minor difference in price or functionality? Or have you experienced a situation where, if you had known "that", you would have done "that"? How many times have you been taught a concept (mathematics, in this case) that you already knew, but explained in a different way that you could not understand, so that you began to doubt your own version? This miscommunication is a barrier that can be resolved. Every human mind is different and hard to make compatible with another. Therefore we need search to be either more concise or more elaborate.

Creating another search engine like Google would not be an everlasting idea. I present to you a talent trading index. There is always something that someone has that others need. This exchange of data is vital for the growth of information, as long as it is not information that puts a person or company at risk (company secrets or other data whose exposure could have a catastrophic effect on companies or individuals). This search engine will exchange data between the text, images, files, etc. of the URLs crawled, deducing many iterations of different outcomes, to produce a more customized, detailed, and specific set of results for a user searching for specific information. I am certain this is a game changer. But how do you get everyone in the world to start trading talent? Why would anyone start trading their own talent, or anybody else's, in the first place?
First of all, some people are good at something that others aren't, because they have a natural affinity towards their talent that gives them the highest rate of success in it. This concept is going to revolutionize the way information is traded and viewed. Originally I asked why anyone would trade their own talent; trading other people's talent is something you can fold into the concept of trading your own. So we need to set up a talent trading index for different industries or services. How do we trade talent on a global scale, or even a small one? Let me give you a vague idea of what a talent-trading-index search engine is capable of. Suppose I purchase a sofa. Later I realize that, due to some unexpected or unheard-of circumstance, the sofa has gone bad. Had I been told this earlier, I might not have purchased it; another person, with a different mindset, might have purchased it anyway. With a talent trading index, this concept will do wonders. Another big concern is how we connect the talent trading index to the existing layout of the Internet. We use the concept of multiple dimensions: each node on the Internet will be connected to multiple dimensions. You will have the regular Google-style search results, but at the back end there will be a talent-traded search result. Even if we incorporate the talent trading index only for mobile search, it will do wonders.
Consider this diagram:
Now this may look like a normal Internet structure, but it is not. All incoming requests come from different dimensions. The results are broken into smaller bits of information, and we expect the user to provide small bits of information that join, like a linked list, to the node in the middle of the diagram. That node can in turn be part of a bigger chain, in the same or different dimensions, depending on how talent is or was traded at that junction.

Now, why would anyone trade talent? We have all heard stories of website creators stealing an idea that made them millions. No one will want to trade their own talent, let alone someone else's. You may have a website (or websites) containing talent equivalent to three or four people, yet some people may not be able to understand the entire page; everyone's brain is not the same. I propose a talent-trading-indexed search engine. This will be a new way to innovate search on mobile handsets. It may also be a security issue if launched on a global scale.

Consider a talent trading index merged with a life-cycle search engine in the back end (we will come back to the life-cycle search engine later). What would the future look like with a talent trading index combined with a life-cycle search engine running behind the search engine? First, only advertisements that pertain to the search results. Second, you will be linked to everyone's talent around you, maybe even the world's talent, at your fingertips. Say the search engine begins with "What's on your mind?" and I say "chair". My results will be part of a key chain spanning different dimensions. The next best link to the word "chair" will appear depending on which dimension you choose: it could be a dining table chair, or the chain could (going backwards) contain the wood type used for this chair. At that point another link node from another dimension intersects the node that contains the wood used. Results processed this way are called life-cycle search results. The better and more precise your search phrase is, the more the key chains from each dimension will differ. Let us take an example: you enter "sockets", and the chain looks pictorially like this:
Consider this diagram:
The dimensions and nodes will be auto-created by the talent trading index and the life-cycle engine. There may be some redundancy as well; we have not yet discussed how that will be resolved. Each key chain is a never-ending life-cycle, like a fission reaction: each molecule or atom breaks into sub-parts, which then break into more sub-parts. We hope the creation of the talent trading index will automatically produce this new Internet, whose search methodology is applied and adopted by everyone.

How do we implement this type of search engine? First we create a normal, Google-type search engine. Then we start the talent trading index like a fission reaction. That is one method. In this method you start the trading index somewhat like a stock market (not fully like a stock market): just as money is initially added to a stock market, you add the crawled nodes of Kdich's search engine to the market, and then begin a single global talent trading index. Eventually, search on mobile should have a separate programming language, a separate operating system, and its own view, to create this craze of talent-trading search.

Now, why would anyone want to trade talent? Well, if they are trading talent they are getting something: better decision making, which will generate more ideas and add more fission-reaction nodes to a key-chain dimension. I agree there may be some extent to which talent is not shared, but there is no harm in trying. Back to the life-cycle search engine: what would the revenue model be, and what would the future benefits be? 1) New, innovative ideas will spring up as more in-depth information is provided. For example, consider a search chain for integrated circuits:
Consider the diagram:
At a particular point in the search chain, a user can apply his knowledge and break the chain at any junction to search through a new chain of thought. For example, the user has a eureka moment and realizes that with the information of the first and second node, or even the first and third node, he can deviate and create something else. You might wonder why a eureka user with a brilliant idea would parse or create a new chain (by uploading information about the new concept to his website). The advantage of creating a new chain is that more users can build upon it, which then solidifies the original eureka user's concept. There may be situations where users deliberately add false or wrong information to a chain; this will be reflected in the trading index. And if the eureka user's concept is already solidified and you think he or she will not create a new chain, our advertising campaign will still push the user to make the new, deviated chain of thought.

Now let us concentrate on how to create a money-making model for this type of search engine. You can treat this search engine like a normal stock exchange. When a particular talent index is low, you will most probably have more buyers than sellers, buyers who buy up large talent (the big players). We will also require a mechanism to control redundancy. Remember that a node is connected to many dimension portals and can accommodate millions and billions of chains. Two or more chains may be similar, but they will differ in how the thoughts are presented, so that any type of user can be catered to: you can have a knowledgeable user with a large vocabulary on one chain and a novice user on another, while both chains (or, to be precise, both nodes) provide the same information.

How do you even begin to deploy such a search engine? The same way you deploy a stock exchange, only this time there will be a single exchange worldwide. Imagine a user in India trading talent with a user in China. What actually goes on during the talent trading process? Does one user in India use the talent of the user in China, and do we define that as trading? Or is it the exchange of talent that we define as trading? What if the same user in India can acquire the same talent trade at another chain or node? Why then should these users extend the talent chain at all, rather than stopping after acquiring the talent and creating something on their own? This search engine cannot be based on the rules of Wikipedia or Linux forums; we need to create a money-making system that induces Goliath-scale trading continuously.

I am an Internet user in India. I have the option to buy furniture from Flipkart, Snapdeal, Amazon, Pepperfry, eBay, etc. There are many choices and much hidden information about each piece of furniture, but I will make a decision based on my best judgment and purchase one. With talent trading I can gather enough background information first and then decide. With this example you may wonder whether this generates a monopoly on a talent chain, with everyone buying the same product. Because each person's frame of mind is different, it is unlikely that more than 78% of Internet users will buy the same product. Here is why: this talent trading system will force users, buyers, and sellers to innovate more on their products. Look at all the advancements in technology and other industries; advancements will always continue, whether at a steady rate or a fast one.
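To make the "break the chain at a junction" idea above a little more concrete, here is a minimal Python sketch of a eureka branch. It only shows the shape of the operation; the function name, the example chain, and the idea of a chain as a plain list are my own assumptions for illustration, not part of any existing system.

```python
# Sketch of a "eureka" branch: a user keeps the first few nodes of an existing
# chain and deviates into a new chain of thought at a chosen junction.
# The chain contents and the function name are illustrative assumptions.

def branch_chain(original_chain, junction_index, new_tail):
    """Return a new chain sharing nodes up to junction_index, then deviating."""
    return original_chain[:junction_index + 1] + new_tail

ic_chain = ["what is an integrated circuit",
            "types of integrated circuits",
            "fabrication of integrated circuits",
            "IC packaging"]

# The user has a eureka moment after the second node and deviates.
new_chain = branch_chain(ic_chain, 1, ["using ICs in a home-made synthesizer"])
print(" -> ".join(new_chain))
```

The original chain stays intact, and other users can keep extending the new chain, which is what would solidify the eureka user's concept.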
A problem may arise like the economic crisis of 2008, where a majority of Internet users have enough money or talent earned/acquired to shift the talent trading market towards their own needs. This will have to be monitored. Suppose I am a coder. I can grab a piece of code and apply it to my program. What if, in a talent trading search engine, I could see blocks of code being applied in every possible scenario? I could learn and change my code accordingly. This is one of the many advantages of talent trading. Think of a market that constantly innovates (although this is the present scenario everywhere) due to the constant change in talent that results from a talent trading search engine. There will be plenty of information available. Consider that I look up a particular chain of nodes.
Consider the diagram:
I can change the node chain after the ICs and innovate a new TV myself. Let's talk about the cost and how to deploy such a search engine. What do we have? What are our raw materials? 1) servers, 2) crawlers, 3) databases, 4) an inference engine to compile useful information and present it to the user, 5) a basic operating system to house everything from the crawler to the databases, 6) compression software. First we need to create a trading system. Then we need to create a stock market of talent trading. Then we need to create special software or an operating system for mobiles (2G), smartphones, and computers that can be constantly connected to this trading market. The investment will not be the same as that of setting up a current search engine model; we will already have the links and search terms crawled. We may want to start this search engine as a file directory where users initially input data and our search engine then crawls the data entered. We need to be able to build on top of the current structure of the Internet. This will save money and time, because we can then deploy this search engine in phases, where at each phase or stage we can decide to shut down the project or continue building the talent trading search engine. In the future we may decide to deploy the talent trading index country by country, like the NYSE or the BSE.

This search engine should not discourage or prevent in any manner the entrepreneurial spirit of any individual. Not all companies can be as big as IBM; small and medium-sized companies should all be able to continue business-as-usual operations with no loss due to the talent trading search engine. This search engine should accommodate all businesses. A customer of a business may go to another company, but another new customer should be able to come to the original business. Not all customers (potential and existing) have the same frame of mind, so they will not all make the same decisions. We will also be incorporating location-based search results. Suppose one shop is selling a watch and another shop is also selling a watch. We need to create the same demand for both shops, just as the demand for public transport is constant. The search engine will be successful if there is an equal probability of each of the shops (or multiple shops) receiving the customer. Certain problems may arise where users post false information. Such nodes can be excavated from the chain, since no talent trading would take place over them; this will have to be monitored.

Before you continue reading this search engine write-up, please keep one thing in mind. If you purchase one pack of bread (approximately 10-13 slices), there is a higher chance that you will eat and finish all the slices than if you purchase two packs (20-24 slices in total). With fewer slices you tend to finish them quickly so as to buy another pack; with more slices you tend to be put off by the amount and end up not even finishing one pack's worth. The moral is: eat more, but in small portions. The search engine is still at the idea stage. Let me first use an example to explain the concept better. Assume you have just graduated from elementary school. You have a vague knowledge of, let's say, everything: sports, maths, physics, chemistry, etc. You are looking for information on the Internet, but you are not aware of the concepts, terminology, or vocabulary that most websites use. How, then, will you be able to search?
On top of this, each user may have a different way of grasping concepts. The content on the same website could be understood by one user and not by another. Another issue is that a website may not contain complete information about a topic, so an Internet user may have to look up more websites and do more research. It is also possible that a user can understand one website but not another. The user is then left with only one option: painstakingly rely on the current search engine and do lots of research, visiting many websites in the hope of gaining the knowledge required to complete the task. People also have different tastes in any product or service; certain minute details catch a customer's eye. It is imperative that search engines distinguish, separate, and cater to these minute differences in products and services.

Search engines today have to be unbiased. That means a search engine will not try to guess the context your search pertains to; it will try to provide the most accurate results for your query without caring whether you will be able to understand them. Hence the user enters a search query that he or she thinks will yield what he or she is actually looking for. How can we overcome this hurdle? An excellent search engine is like a mathematician: it should be able to solve queries using more than one concept. We can make an AI search engine. I am sure that all the big companies such as Google, Yahoo, and Bing are already trying to perfect their predictive search engines. So the big question today is what we can do at this stage of technology. We can make something better than today's search engine technology.

Let me explain the concept in detail. At first it may not make sense, but if you understand the example I provided earlier, all the puzzle pieces will come together. Let us first consider what a search result consists of: a hyperlink, text, images, and much more. Now consider a situation where a user searches specifically for integrated circuits, say the IC 8085 microprocessor. The search results will yield images of the 8085, circuit diagrams, and some more specific information on the 8085. There will also be a "similar pages" icon under each search result. The user may or may not have PhD-level knowledge of the 8085 microprocessor. Let us assume the user still seeks knowledge of the 8085. He has two options: continue looking or skimming through more results, or type a new search query. The point is that the user will keep on searching and may still not find exactly what he or she is looking for, or even the right context of results.

This brings us to our target audience: those who do not know how to get the information they are looking for from search engines like Google, Bing, or Yahoo. These users may continue to type search queries to obtain information but will not be able to understand the concept of any topic, and they may lack the knowledge to type the correct search query to obtain the desired information. Now let us approach a solution to this problem by looking at all websites from the perspective of energy.
Let us find a method to process crawled web search results and arrange them in a life-cycle (before displaying them to the end user), and then display the results the way Google does. Consider all websites and URLs to be living entities; therefore URLs will have mass (m). In a fiber-optic network, data packets travel with speed (c). We know from the law of conservation of energy that energy can neither be created nor destroyed; it can only transfer from one form to another, i.e. E² = (mc²)² + (pc)², where p represents the momentum of the user's search pattern. The search engine will logically break up search results into constituents and then eventually break them up based on the keyword(s) entered.

Now let each and every URL/website crawled be related to every other crawled URL/website by the recurrence sequence n(n−1) · (n−2)/2! · (n−3)/3! · … · [n − n(n−1)]/n!, where n is the number of nodes in the Internet structure (assuming the Internet has an infinite number of nodes). This algorithm will reach the epicenter of the Internet (the Internet backbone) and then keep on crawling from there; once the starting point is located, the crawlers will arrive back at their origin (the location from which they were launched). The point of this relation sequence is to generate an instance/arrangement of the stored (crawled) URL database and display it on the search results page. All URLs/websites will be stored in a kind of 'periodic table' of URLs/websites, based on the attributes and specifications of each website/URL.

Each search result crawled and processed will be a life-cycle of pre-result nodes. For example, when searching for the 8085 microprocessor, one processed pre-result will yield: what is a microprocessor -> types of microprocessors -> anatomy of a microprocessor -> 8085 microprocessor. What if a user searches for an image of a light fitting, or text together with the image? The results for one life-cycle could be: glass used -> bulbs that fit -> and so on. Then, say, another life-cycle connects at the second junction (i.e. "bulbs that fit") and shows: … -> bulbs that fit -> paintings of bulbs -> …. There will be millions of such abstract life-cycles. At each junction (i.e. each "->") the scalar value will be either 0 or 1.

Let us assume each vector plane to be one page, and let each node reside within a vector plane. Consider a space in which each vector plane can move. A point (hyperlink) on the plane can be represented by a vector matrix/function, which will be generated. Each processed life-cycle consists of vector planes, and each node is part of a line equation. Since the points constitute vector planes, all vector planes that come in contact with the line, or that lie between two or more points from two or more planes, constitute the node's constituents. The lines that form when N-dimensional planes intersect form the life-cycle. Vectors depict the direction(s) in which data packets and crawler bots are and will be traveling on their own in a fiber-optic network, between nodes within the Internet structure. This search engine will truly be an AI search engine. Each node on a processed life-cycle is a stock which is traded on a global index stock exchange; each time a node in a life-cycle is parsed, a stock is traded. Since a node in a life-cycle can also be part of multiple life-cycles, or part of a node of another life-cycle, we will first consider a life-cycle to be a dimension (in other words, we will consider a node to be part of a dimension).
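As a minimal illustration of the pre-result chains ("life-cycles") described above, here is a Python sketch. The lookup table is hand-written and stands in for whatever the crawler and life-cycle processing would actually produce; the names are assumptions, not a real implementation.

```python
# Sketch of life-cycle processing: before display, a raw query expands into an
# ordered chain of pre-result nodes. LIFE_CYCLES is a hand-written stand-in for
# the processed, crawled index.

LIFE_CYCLES = {
    "8085 microprocessor": [
        "what is a microprocessor",
        "types of microprocessors",
        "anatomy of a microprocessor",
        "8085 microprocessor",
    ],
    "light fitting": [
        "glass used",
        "bulbs that fit",
        "fittings and holders",
    ],
}

def life_cycle_results(query):
    """Return the chain of pre-result nodes for a query, ending at the query itself."""
    return LIFE_CYCLES.get(query, [query])

print(" -> ".join(life_cycle_results("8085 microprocessor")))
```

Two life-cycles that share a node (for example "bulbs that fit") would simply be two chains containing the same entry, which is where the dimension switching described below comes in.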
We will associate a number of dimensions with a node. A node can be part of more than one life-cycle, but at the twist of the moment, how does a user switch to another life-cycle (dimension), and how do we display the final result to the end user? Also, how is stock value traded as a user transitions between nodes within one life-cycle, or between nodes of other life-cycles? A node can come from any part of the world, so a node can be traded locally and/or internationally. Its quantitative stock value changes based on the number of hits (within any life-cycle); this affects its rank, and therefore its position, in real time within any life-cycle. For the search engine it is a numbers game: the more nodes in a life-cycle that are parsed to get to the destination the user is seeking, the more the search engine learns and adapts, bringing the user more precisely to his or her destination, until finally a processed life-cycle may not even be required (just a list of single search results, as with Google/Bing/Yahoo!). In other words, it is a numbers game until each node is associated with a single dimension. On a smartphone, results will first be processed in the form of a single node that is part of a life-cycle, and then displayed like Google search results.

Why would a user want to switch between life-cycles? Depending on how complex the user's search query is, multiple dimensions may have to be set in place to display a result that solves the complex query. This process is called measuring and calculating the momentum (p) of the user's keyword input. From the moment the user inputs the keyword, the square root of its energy (i.e. E) is transferred throughout the search database ticketing system, and related keywords and URLs are extracted from each website to produce a result of pre-life-cycle nodes. Until you have seen everything (all possibilities, permutations, and combinations) about a particular topic or keyword, how do you know what you want, or what you really want? We can then measure the standard deviation of the keyword searched against the results received. The standard deviation curve enables the score of a node in a life-cycle to be rounded up or down, depending on the other life-cycle nodes connected to the node for which we are generating the standard deviation. Each node in a life-cycle will have a preassigned value, from which a standard deviation curve can be generated, so that each node's score can be rounded up or down with respect to the preassigned values of the other nodes in any life-cycle. Once the value has been rounded up or down, the corresponding life-cycle nodes, and hence life-cycles, will be yielded in the results.

What is the advantage of such a setup? A monitoring tool will be implemented to check and monitor URL rank and AdSense/AdWords accounts. On a smartphone or tablet this will promote collaboration and innovation among the users of the search engine. This search engine is designed for smartphones in terms of its user interface, how collaborative the user environment is, and how uniquely the user can obtain results on a smartphone. How do we deploy such a search engine? We will deploy it in five phases to minimize losses.
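Before moving on to the deployment phases, here is a small Python sketch of the score-rounding step described above: a node's score is rounded up or down depending on how far it sits from the preassigned values of the life-cycle nodes connected to it. The one-standard-deviation threshold and the example values are my own assumptions; the text does not fix them.

```python
# Sketch of standard-deviation based rounding of a node's score.
# The threshold and example values are assumptions for illustration only.

import math
import statistics

def adjusted_score(node_score, connected_values):
    """Round a node's score up or down relative to its connected nodes' preassigned values."""
    mean = statistics.mean(connected_values)
    sd = statistics.stdev(connected_values)
    # Within one standard deviation of the mean: round down; otherwise round up.
    if abs(node_score - mean) <= sd:
        return math.floor(node_score)
    return math.ceil(node_score)

print(adjusted_score(7.4, [5, 6, 8, 9, 7]))  # -> 7
```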
The first phase of development will be to create a directory services website. This website can behave like a B2B hub such as Alibaba. The directory services website will be a verified service, where every user is verified in a manner similar to the verified profiles on Twitter and Facebook (the profiles with the blue check mark). Verification will be done by importing data from the mobile subscriber database of each country; if the user only has a landline, that data will be acquired from the respective databases. The user will also have the option to enter his or her data manually on the directory services website. We will start deploying the directory services website in developing nations first, like India and China, because these countries are the backbone (given our requirements) for the directory services website.

The results page of the directory services website will contain a creative description of the user's business offerings/talent along with a photo of the business logo, the business location, or the person, followed by a vertical line as a boundary for separation. On the right-hand side of the vertical line will be basic details such as the website address with favicon, telephone number, business address, and link icons for Facebook, Twitter, Pinterest, and the GitHub or SourceForge projects the user is working on (unless the user does not want to disclose his or her particulars and projects), as well as apps installed on his or her smartphone and apps running in the background (the user can decide whether to enable the feature that publicizes installed and background apps in his or her profile). We need to ensure that if the profile goes public, we keep a tab on the followers who follow the user. It will look something like xx|yy, where xx is the creative description, yy holds the icons linking to well-known sites such as a Twitter or Tumblr profile, and "|" represents the vertical line separating the two. This vertical line can run between xx and yy down the entire page as a single line, or it may run separately between the values of each xx and yy on the page.

The homepage will be accompanied by the "marketplace". The marketplace will contain links to all the markets in the city/state/country/region. This is because people in developing nations don't just go to a Starbucks on a nearby Madison Street; they go to a famous marketplace first, and then think about what to buy or do there. There are many famous marketplaces that people visit in developing nations; that is how marketplaces were set up and developed in countries like India and China. Stores are not scattered as they are in developed nations like the United States. In the marketplace section, each market in the state/city will be listed along with the businesses/stores in each market, including the street food stalls. Each and every marketplace will also have a special map associated with it. It is not a regular map such as Google Maps; it is a special map to facilitate finding the business's location along with neighboring building complexes and the stores/shops within those complexes. Now, what if Kdich wants to capitalize on this exchange of information (trading of talent)? Kdich will then deploy the 'virtual market' concept to its users. A virtual market is, as the name suggests, a market created and hosted by the users, for the users.
It is a concept for users to build their Internet talent empire by selling their business or services. The factors that go into creating the virtual market are the scores or stock values that each node/website has. Picture this: a user, or a group of high-scoring users, collaborates and creates a soon-to-be-famous virtual market, "Nehru Place IT Hub", on a virtual road (part of the 'special map' or virtual marketplace map mentioned above) in a virtual city/location. This map will be similar to a game map, where you build your empire and see all the famous locations of the virtual markets. Kdich is the ecosystem that reserves the right to allow a user or users to create a virtual location to showcase their talent, based on the stock value of their node or their 'talent trading score' (the talent trading score will be calculated by Kdich's search engine and kept in the back end, never displayed to the end user). These users will affect the homepage layout of Kdich. Do not mistake this homepage for a Yahoo homepage, where customized widgets are displayed once you log in. Kdich's homepage will reflect everything that pertains to the talent trading scores or stock values (both hidden) of the nodes in a life-cycle. The homepage will be generated based on the interests of users, where their discussions are going, and where the bulk of the talent trading value is being cornered or is clashing.

This service from Kdich will be beneficial to children and roadside stalls. For street food stalls, the virtual marketplace will be a blessing: while a street food stall may be located in a less famous or less popular physical marketplace, it could be located in a more famous virtual market and hence attract more customers. The same goes for almost all businesses and users. However, strict supervision will be needed to prevent illegal virtual markets from emerging and selling illegal products or services.

The second phase will be to incorporate a method of measuring and extracting relevant data on hits per day, similar to what is given on www.distrowatch.com, and to provide a means of easy communication between users (a Google Wave equivalent will have to be created by Kdich). At this stage we can also use something similar to Bing Rewards, with a referral bonus for those who refer users to join the directory services website. The third phase will be to infuse a life-cycle search engine into the existing directory services website. The fourth phase will be to infuse a talent trading index (similar to a stock exchange) into the existing directory services website and search engine. The fifth phase will be to deploy the search engine as a Google-style search engine where no stock value or trading value is shown to the user, just results with hyperlinks, like Google's current search engine structure. Kdich's motto, "So many people would trade places (talent) with you", is the foundation stone for the Kdich Guide search engine's talent trading model. It is about trading places domestically/locally (within a country, i.e. inter-region) and also internationally/globally (inter-country). No task will go unfinished and no search query will go unresolved.
You may be wondering: if all the places of mediocre talent get traded away by better-talented people, then the mediocre talent will begin to trade places with lower-level talent, and so on, so what happens to the weakest link (the people whose places got traded)? Why would the stronger link want to trade places with the weaker link? What would he or she gain, and what would both parties gain? How will both benefit? There is a possibility that weaker links will learn over time and become masters of their "trade". The time the weaker link needs to become a master, along with other factors such as the cost of training and capabilities, will have to be calculated statistically so that Kdich can better manage talent and human resources. Why would anyone want to trade places and bear the risk involved with taking over the "project" (or skill/talent)? The project may have to be redone from scratch, wasting more time. All these factors would have to be calculated. Some users will not want to trade places with others. As this information is traded, the stock value of each node in the life-cycle will vary, and hence its ranking will vary. Another issue is that a user may think he is better qualified to do the job (trade places / trade talent) but realize after trading places that he or she is not. As a result we need "levels" of trading: sub-trading levels of first hand, second hand, third hand, and so on will have to be established. A user may sublease a place or talent to another user. The number of subleases (the number of times a sublease has occurred for a particular node) will be calculated and will affect the node's stock value.

A user should open Kdich and say "Oh my gosh, I can do that with Kdich" or "Kdich can do that". We want to "Kdich" that experience onto the user. The next phase is to have two search bars on Kdich's main homepage: one bar for the talent pool and the other for the talent. This two-search-bar feature will be the "advanced search" option. In other words, one bar is for the user who knows what to do and the other is for the user who knows how to do it. And that is what we want: to connect the people and websites that know what to do with the people and websites that know how to do it, so that we can bring about innovation. People don't realize how special they are, or that they can accomplish tasks they never thought of. So how do we connect the people who know what to do with the people who know how to do it, and how does this connection create a capability index (measured in the back end) so we can determine the probability of a particular job being completed by connecting these two types of people? By connecting them, we get the best at each node in the life-cycle, and people get better at what they do best by being more creative. We can measure the likelihood that a user will complete a particular task 100 percent successfully. Have you ever wanted to connect the people who know what to do with the people who know how to do it and see what results they produce, what the outcome becomes, and how that outcome becomes even more innovative, producing larger or better results?
The future is connecting the people who know what to do with the people who know how to do it, somehow, and making them work collaboratively to produce better results, administered in some form so that both parties are equally benefited and both want to work with each other. Have you ever thought about how we could harvest this wealth of knowledge? The only issue is why the people who know what to do would want to interact with the people who know how to do it if neither party is getting anything in return. How do we measure their capability even before a task is assigned, to check whether these users are able to complete it? The aim of Kdich Guide search is to extract more useful information than our competitors so that we can give users more useful results. For example, if Kdich Guide search extracts such vital information in the back end that it can determine the capability percentage of a task being completed by the website owner, then we have successfully made a much better search engine than our competitors.

Most people have both qualities: they know what to do and they know how to do it. But there will be points where a person who knows what to do has difficulty with execution, or a person who knows how to execute tasks does not know where to begin or how to use his own capabilities. At this point you may say, "Duh, that's what Google, Bing, and Yahoo are for." But today's search engines are not organized, in terms of information, in a manner that is understandable to most Internet users. Users are forced to use whatever information is provided by the search engine results in the first couple of pages. Users can find more details about the subject matter by doing extensive research, viewing all possible results and checking videos online, but this can be very time consuming.

On the ability of a search engine to extract useful information via effective data-mining techniques: search engines do not know what the user's intentions are, what they are searching for, why they are searching, or what level of prior knowledge or education the user has of the subject matter. Therefore it is difficult for search engines to give users better results than they were expecting. In short, we want a search engine that brings out the creativity in users. We want search engines to distinguish between users who know what to do and users who know how to do it; such information is very valuable for employers as well as corporations. How do we achieve what we want out of a future search engine with the least amount of R&D and corporate expense? With the current search engine foundation of reverse-index search, we cannot achieve this "future search engine vision".

What we want the future of search to be: there is something in my mind that I want; let's say my hidden desire is to create a mobile device. If I go to Google and type "how to make a mobile phone", I doubt I'll get an accurate result. I would have to change my search words many times and still would not be able to make a mobile phone. Case II: suppose I am thinking about something, say "shoe laces". It may be weird to think about, but suppose I am. I log on to Google and type "shoelaces". Most probably I will get a list of shopping results for shoelaces, followed by other results that are irrelevant to shoelaces, some abstract results.
What if there were a way to make something else with shoelaces that I did not know about? To support this, we would have to tweak the current structure to obtain a structure we call TQM, or total quality management. In this structure a particular search result is either divided into processes, similar to a linked list, or is part of a process belonging to multiple linked lists. This is why we have coined the term TQM. The obstacle that gets in the way here is the curse of dimensionality. To overcome it we use a simple solution: we allow n dimensions but start out with 3. With the help of offsets we calculate the next process or search result using simple trigonometry and solid geometry.

Consider the following diagram of the cloud (assume a cross-section of a Rubik's cube inside a cloud); we use the Rubik's cube as a simple way to understand the storage structure of a crawled index within the cloud. Let us assume each square is of unit 1, and hence each cube represents a link that is part of at least one linked list; a collection of cubes represents a linked list. Now assume we have entered the search "shoe laces". There are many possibilities we might want, so "shoe laces" is a term with a vector (x1, y1, z1) on the Rubik's cube (cloud storage). The node of linked list A, which is part of the unit cube, can be part of three different linked lists (3 dimensions only). As a result, if the second node is at point (x2, y2, z2) away from the first node, there is no direct method for calculating the displacement or linking the node without compromising space/storage and performance. Therefore we have to traverse through different linked lists in order to reach our destination of (x2, y2, z2); in a 3-D vector space we use the concept of offsets to reach the destination. Consider a 2-D diagram (a 2-D view of the Rubik's cube). There are many paths we can take to get from point A to point B, provided we can only travel in two dimensions, i.e. laterally in x-y. If we assume each square to be a node (result) that is part of a linked list, then using the Pythagorean theorem we can calculate the shortest path to the next node. Thus our linked list is formed.

Imagine that each user in the directory services website has a score, so we can try to pair him or her with other users such that the overall percentage of the task that can be completed is higher. For example, we want to determine, before it even happens, that a task can be completed with a certain percentage of accuracy by a given group of people. Kdich it on! Just Kdich it! If people are not ready to share or disclose their talent, that is fine, because Kdich will at least be ready when they do decide to disclose it. By this we mean that users may not adapt to Kdich's new search engine culture. We may expect a gradual shift over the years as users adopt Kdich's new search options. Even if users do not adapt to this change in search engine culture, the new search engine itself does not change: in essence it is still the same search engine as Google, but much SMARTER and prepared to deliver.
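Going back to the Rubik's cube and offset idea above, here is a small Python sketch: each node (search result) sits at integer coordinates in the 3-D cube of cloud storage, and the straight-line distance between two nodes comes straight from the Pythagorean theorem. The coordinates are invented for illustration.

```python
# Sketch of the 3-D offset calculation between two nodes in the cube storage.

import math

def distance(a, b):
    """Euclidean (Pythagorean) distance between two nodes at 3-D coordinates a and b."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

node_a = (0, 0, 0)   # first node of the linked list (e.g. "shoe laces")
node_b = (2, 3, 6)   # the node we want to reach

print(distance(node_a, node_b))  # -> 7.0, the shortest straight-line offset
```

The actual traversal would still step through neighbouring cubes along the linked lists; the distance only tells us which candidate next node is closest.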
There are certain issues to look out for when designing the directory services website. The first is "fake users": users who do not have the skills, talent, or creative business but claim that they do. Another issue is a single user holding multiple accounts, as an individual, as a business owner, or both. Another issue arises when two or more profiles conflict because of redundant data such as a telephone number or website. In that case we escalate the issue to the concerned department within Kdich and verify whether there is a relation between the redundant profiles; for example, one user could be the subordinate of another (which is why their work telephone numbers could be the same). We need to visually establish this relation, if any, between the users on the website by some means.

There are many advantages to this search engine, the best being better management of resources. Kdich Guide Search will track resource management and allocation to better serve and save the environment; examples would be tracking waste management and rainwater harvesting. Also, when search users are not ready to be a part of this new search engine convention, this search engine will serve as a regular search engine like Google, Bing, and Yahoo. The parser and life-cycle search engine, along with the talent trading index and talent scores discussed above, are to be sealed within the search engine's model; users only see the search engine as is, a regular search engine similar to Google. The setup described above is the ideal state of how the search engine will operate under optimal, standard conditions. Owing to the algorithms used in coding the search engine, it is unlikely we will achieve optimal operation, but we will strive for it.

There is a BIG issue that needs to be resolved: how will the search engine answer complex queries? The search engine can crawl and match keywords with websites, but how will it answer a query such as "sturdy furniture"? The search engine won't know the user's perception of sturdy. This means the search engine will match cases in one direction and not the reverse; by the reverse direction we mean the search engine will not be able to compute the different meanings the user may intend by the keyword "sturdy", beyond its definition, synonyms, antonyms, slang uses, and so on. One could say this depends on the content of the Internet. We can use the concept of number sequences to tackle this problem. By carefully measuring and assigning a number to each URL, we can deduce a relation with the set of number/pattern sequences that can be formed from the digits assigned to each URL. For example, suppose we assign the code C222018 to a URL. A non-AI search engine like Google might take the sequence 222 and match every URL containing the sequence 222 with the keyword searched. Kdich will follow a different rule: it will find all possible rules/patterns that the sequence of digits shares with other number sequences and deduce relations, as with functions. Kdich will not limit the number of digits to 3 or 2 or 1; instead it will consider n digits and compare them with n−r digits from other crawled URLs. Ideally, the end result would be a search engine that can decipher a block of code and apply it within another code block from another website to produce some kind of output.
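Here is a rough Python sketch of the digit-pattern idea above: each URL gets a numeric code, and instead of matching a single fixed run of digits such as 222, we collect every contiguous digit pattern that two codes share. The codes and the brute-force approach are assumptions made purely to illustrate the idea.

```python
# Sketch of digit-pattern matching between URL codes (e.g. "C222018").

def digit_patterns(code):
    """All contiguous digit substrings of a code, longest first."""
    digits = "".join(ch for ch in code if ch.isdigit())
    pats = {digits[i:j] for i in range(len(digits))
            for j in range(i + 1, len(digits) + 1)}
    return sorted(pats, key=len, reverse=True)

def shared_patterns(code_a, code_b):
    """Digit patterns that two URL codes have in common, longest first."""
    b_pats = set(digit_patterns(code_b))
    return [p for p in digit_patterns(code_a) if p in b_pats]

print(shared_patterns("C222018", "D220185")[:3])  # longest shared digit runs first
```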
Let us now look deeper into the working of the web crawler. A spider falls, or forgets, and builds its web again. Every time a user uses the web (the website) and its database/contents, the web is broken and the spider (crawler) has to build it again (the spider disappears when the web, i.e. the database, is broken). The crawler will make a 'web' (a temporary mini-Internet of the entire database, i.e. the file system) every time it gets a chance, whenever the user finishes a search and the spider has the opportunity to spin a temporary web again. Each 'web' design is different, so the spider learns, and it rebuilds the web all over again. A bug enters the web and the entire web vibrates and wakes up (a bug could be a literal bug, or a user trying to search for something), and the spider knows the bug's exact location before reaching it; the spider knows something about the bug (the user's search criteria, location, what the bug is looking for, its size and type). The spider builds a temporary web surrounding the keyword searched. Within a second the crawler (spider) will parse the entire database back and forth, searching for updated websites (URLs, hyperlinks, images, files, etc.). While the spider is back (and not disappeared; this is what the spider always aims for, i.e. to be back and not have its web broken) after the search has been made, it continuously combines websites with each other, forming combinations and comparisons to see how each website relates to all the others. Then the spider disappears until more requests arrive from another dimension (but the web remains partly torn). The spider builds a temporary web, when not in use, around the keyword searched, and it will disappear unless another user searches for something connected to that temporary web (the current temporary web for the initial user and that user's search).

Server space is saved and will always be under-utilized, because the servers will only be used to store the web pages and their data (hyperlinks, images, files, etc.). The cache will store the temporary Internet spider web. As the cache adds websites to its memory from the crawler, it parses all websites in each web iteration (the cache stores all the different web formations, i.e. iterations, since the release of the first iteration). The more times the spider's web is broken, and the more times a new web is built, the deeper the spider finds a way to crawl into the Internet and build its database. The end result is that for every node in the Internet structure (and its sub-nodes, hyperlinks, files, images, etc.) there exists a multitude of webs; because each web is designed differently, more and different routes within the Internet structure are tracked.
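The temporary-web behaviour described above can be sketched very roughly in Python: after a search, the spider spins a small web of pages around the keyword and keeps it in a cache, reusing it only if the next search touches the same keyword. The tiny corpus and the idea that a page simply "mentions" a keyword are simplifying assumptions.

```python
# Sketch of the spider's temporary web: built around a keyword, cached, torn
# down and rebuilt when an unrelated search arrives. The corpus is made up.

CRAWLED = {
    "https://example.org/chairs": "chair wood dining furniture",
    "https://example.org/bulbs":  "bulb glass light fitting",
    "https://example.org/wood":   "wood oak teak chair",
}

cache = {}  # keyword -> temporary web (set of URLs)

def build_temporary_web(keyword):
    """Spin a mini web of pages whose text mentions the keyword."""
    return {url for url, text in CRAWLED.items() if keyword in text}

def search(keyword):
    if keyword not in cache:   # the old web is "broken": tear it down and rebuild
        cache.clear()
        cache[keyword] = build_temporary_web(keyword)
    return cache[keyword]

print(search("chair"))   # spider builds a temporary web around "chair"
print(search("chair"))   # the same temporary web is reused
```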
We will now look deeper into the intelligent crawler to be implemented. Let the crawler begin with a non-deterministic polynomial (NP) problem. The crawler resolves it as follows: it begins with the first URL and tries to understand the URL and its contents, but by the nature of a non-deterministic polynomial problem it will fail. It then moves to the next URL (with the aim of 'solving' the first URL; by solving we mean understanding) and plugs the values of the second URL (images, text, files, hyperlinks, extensions, etc.) into the first URL. If the crawler is still unable to solve the first URL, it moves on to the next, and so on, until it reaches the last URL (which will never happen, because the last URL is effectively an infinite nth term). The NP crawler is then left with no choice but to start creating its own dictionary of URLs, text, images, files, and so on. As the NP crawler creates its own dictionary, it begins to solve URLs and their contents. Once the first URL is solved, the crawler moves to the next, until it has solved all (infinite minus deterministic) URLs. Once the crawler has solved each URL, it begins to match the URLs and their contents with its dictionary. (We can manually alter these settings, i.e. whether the crawler should solve each URL one at a time or all at once, all or none, similar to an interpreter versus a compiler.)

Let us now delve into the sorting of the data within the database. As mentioned earlier, there will be two sets of results (from two different search bars): one for text searched unambiguously and one for text searched ambiguously. Also as mentioned earlier, the search engine will learn and evolve as the number of searches (ambiguous or unambiguous) increases. Initially the unambiguous searches will yield more results; as the search engine learns, the ambiguous results will start increasing. Both sets of results will be well rounded to the search terms, as Google's search results always are. The results with the highest percentile will have the highest rank and be displayed on the first results page; the other results will follow on subsequent pages, similar to how Google's results are shown today. We will implement a matrix table to determine the position of each node: we will assign a matrix vector to each node in the Internet structure, which will lead to subsequent matrices for the hyperlinks and their contents. The matrix will act as a routing table for the crawler, helping it make its way throughout the Internet structure.

Innovation will spread with the implementation of this search engine idea. Let's take one last example to see its beauty. Suppose I search for a Sony stereo receiver. A particular result should appear as: what is a stereo > materials used in a stereo > types of stereos > brands of stereo > advantages of different brands > Sony stereo receiver > what connects with a Sony stereo receiver > prices of connectors > using your stereo receiver abroad > etc. The search engine will grow and mature over time based on its intake (the types of URLs and their contents). Sometimes, to see the problem with today's search engines, you have to think the way people think, and people make decisions based on a life-cycle of information. If we can present search results in that same manner, we have made a better search engine.
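Going back to the dictionary-building NP crawler described at the start of this passage, here is a minimal Python sketch of that behaviour: the crawler tries to "solve" each URL with the tokens it already knows, and when it is stuck it absorbs a page's tokens into its own dictionary. The tiny corpus and the definition of "solved" are simplifying assumptions.

```python
# Sketch of the dictionary-building crawler: a page is "solved" when all of its
# tokens are already in the crawler's dictionary; stuck pages are absorbed.

PAGES = {
    "url1": ["8085", "microprocessor", "registers"],
    "url2": ["microprocessor", "history", "intel"],
    "url3": ["intel", "8085", "architecture"],
}

def crawl(pages):
    dictionary = set()
    unsolved = dict(pages)
    while unsolved:
        progress = False
        for url, tokens in list(unsolved.items()):
            if set(tokens) <= dictionary:   # every token already understood
                del unsolved[url]
                progress = True
        if not progress:
            # Nothing could be solved: absorb one page's tokens into the dictionary.
            url, tokens = next(iter(unsolved.items()))
            dictionary.update(tokens)
            del unsolved[url]
    return dictionary

print(sorted(crawl(PAGES)))
```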
We have to create an algorithm for the spider crawler by implementing the following rule: the cause of the spider's downfall (when a search is made) will be the website (and its contents) that knows the spider's deepest secret. In this case the deepest secret is the keyword entered in the search bar, because that is all the spider knows, and hence it is the spider's deepest secret. This one algorithm will be the base of the Kdich search engine. The next part of the algorithm is that the spider loves only one result; that one result will be the key answer to the keywords searched. It is difficult to visualize, but this is the truth for the spider. We need to express these two rules mathematically as algorithms, and then compute the inputs.

Now let us look at another aspect of these two algorithms, specifically the first. We know that the spider's deepest secret is the keywords searched. We also know that the user knows the spider's secret, because the user has typed the words. The spider now understands that the user knows its downfall, and it acts accordingly by dispersing the keywords into its webs, trying to outwit the user by providing results that the spider assumes the user will need next in order to complete his or her search. As this ongoing battle between the user and the spider continues (the user wants more information and so searches more terms), more users of the search engine get involved, and the spider's AI brain evolves. A never-ending battle between the spider and the user runs until the spider's web is built strongly enough to withstand all users' requests. This is how you create the spider's brain, and this is how you code the spider's brain and principles. The rest of the work is covered by the end user: when a user begins to search, he or she will continue until reaching the destination (what he or she is looking for). When the user wants to idle around with Kdich, the user will search in the form of a life-cycle (i.e. puzzle piece by puzzle piece) until a solution is found for what he or she is searching.

Let us now look at how the search engine is going to store data. We call it the universal tangram. In the universal tangram, each hyperlink and its parsed contents form an equilateral triangle that is part of a tangram. We assume that, using the Pythagorean theorem and the sequence n(n−1) · [(n−2)/2!] · … · (n−r)/r!, where r tends to infinity and n is the number of links parsed, triangles will be formed; as n tends to infinity, the tangram grows. Each triangle, and the tangram itself, is composed of dots which represent the links parsed by the crawler. Each dot that makes up a triangle within the tangram can be accessed from anywhere (i.e. the link can be accessed from any other dot within the tangram). A point within the tangram can be found by first assuming the tangram is equilateral and equiangular: the lines bisecting each corner of the tangram meet at a point, so a line can be drawn from a starting dot to another dot, not necessarily on the border of the tangram. Imagine a person searches for one dot on or within the tangram. That dot meets the next dot by forming an imaginary line (vector) connecting the two dots (x1, y1, z1) and (x2, y2, z2), which then connects to a third dot (x3, y3, z3), and so on. The actual path taken will be along the triangle border lines on or within the tangram.

I would like to end this idea presentation by stating something. Somebody told me you should find one good reason to do something instead of finding so many reasons not to. My search engine works the opposite way: it will find zero reasons not to do something before providing the best result for the end user's search. My search engine has one mantra: those who matter most to you say the most about you.
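To picture the tangram traversal described above, here is a tiny Python sketch: each parsed link is a dot with coordinates, and a search walks from dot to dot along imaginary connecting vectors. The coordinates are invented purely for illustration.

```python
# Sketch of walking the universal tangram: vectors between consecutive dots.

def connecting_vectors(dots):
    """Vectors between each consecutive pair of dots on the path."""
    return [tuple(b - a for a, b in zip(p, q)) for p, q in zip(dots, dots[1:])]

path = [(0, 0, 0), (1, 2, 2), (4, 2, 6)]   # three dots the search passes through
print(connecting_vectors(path))            # -> [(1, 2, 2), (3, 0, 4)]
```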
How do we humans easily memorize number sequences? What if we could break number sequences into parts and later join the parts back together, making sure the result matches the original sequence? The crawler will scan a number sequence after assigning it to a hyperlink (a URL and its associated text, images, files and so on). How many digits of the sequence are scanned is determined by the number of life-cycles associated with the link. The belief system is a table that contains a list of number sequences. These number sequences are matched against the number sequences the crawler assigns to the URLs (and their contents). The crawler's ability to manipulate the belief system so as to achieve an accurate set of life-cycle results is achieved through truth tables. These truth tables work like switchboards (similar to the switchboards initially implemented in the AT&T telephone network) or like the routing tables in routers. The same concept can be applied to letters and words: by combining and comparing different sequences of letters, the search engine can learn and perform the same way as Google's search engine. However, the added spice of the relation algorithm will bring meaning to the sequence of characters matched. I can only show you the door; you have to walk through it. In most cases users have to dig through multiple websites to find exactly what they are looking for. This is a waste of time and energy. Kdich aims to alleviate this hassle and bring users straight to their desired destination, or even to a destination they weren't expecting but welcome.

We will now dive into the details of how the crawler will perform the daunting task of parsing all URLs in a second. We will name this crawler a nomadic crawler. As the name suggests, we will deploy a community of crawlers where each member of the community resides in a different location on the Internet, moving from one cluster to another. Let us now summarize the key points of this search engine to get an overall overview of how it is going to operate. Have you ever wanted an entire collection of X-Men cards or baseball cards? Have you ever wanted a never-ending collection of music within a fingertip's reach? This is what kdich is all about: the entire Internet at your fingertips. Consider each search result to be a card. Each card is part of an unlimited deck, and a card can be part of one or more decks. Consider each deck to be like a set of dominoes. If you've seen dominoes falling, you know that more than one series of dominoes can connect to each other. Think of kdich in the same way, where millions of series of different decks are connected at some point with each other. There can be millions of such junctions throughout the decks. Once you can stack dominoes in a three-dimensional space, you will be able to virtually stack decks in multiple dimensions. Now we will look at this structure virtually. As mentioned earlier, each domino (card) has to be related to the card after it or the card before it, within the same series or at a junction. It's like a relay where you pass the baton from one card to the other (they all represent the same team), or in this case the same relation to the keyword searched. The location of a card may change, but its relation to the next and previous card will remain. Think of this as adding coordinates to a card or search result. Remember, once the talent trading index value has changed, the card's position in any given life-cycle will change as well.
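As a minimal sketch of the belief system described above, the table below is modelled as a routing-table-style lookup: the crawler assigns a digit sequence to a URL and the belief system routes it to a life-cycle by longest-prefix match. The specific sequences, labels and the prefix rule are all illustrative assumptions.

```ruby
# Minimal sketch of the belief system as a routing-table-style lookup.
class BeliefSystem
  def initialize
    # sequence prefix => life-cycle it is believed to belong to
    @table = {
      "101" => "electronics life-cycle",
      "110" => "travel life-cycle",
      "111" => "cooking life-cycle"
    }
  end

  # Longest-prefix match, like a router choosing a route.
  def route(sequence)
    best = @table.keys.select { |prefix| sequence.start_with?(prefix) }.max_by(&:length)
    best ? @table[best] : "unclassified"
  end
end

beliefs = BeliefSystem.new
# The crawler has assigned this sequence to a URL and its contents.
puts beliefs.route("1101001")   # => travel life-cycle
puts beliefs.route("0011")      # => unclassified
```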
This search engine will rank results based on which result(s) to read first; the results appear in a chronological sequence. If a URL in a result is not trading high, its relative position within the talent trading index will change. When you do a Kdich search you aren't actually searching kdich's index of the web; you're searching the Internet web itself. Let us now look deeper into the concept of the talent trading index. I stated that when you parse one node (URL card) to the next node, you are trading. So how do we compute and measure this? We will use the concept of time. We will have nine components which work congruently to measure the trading quotient (value) of each and every node in any life-cycle. Keep in mind that a node can be part of any number of life-cycles at any given point in time. We will call these components collectively by a single name, "INFO". The components are as follows: Name, From, To, Pause, Average, Status, Profile, Task and Note. These are the base properties of each node that will build the periodic table of Kdich's database (file system).

Let's dig deeper into the working of the crawler. We know the diagram of the square inside the circle, which is the base of Archimedes' principle. Consider a void square (a square behind a square at an angle, and a square behind that, and so on) inside a circle. Since each square inside a void square is at an angle, it will trace out a circle and return to its original position, if we hypothesize this. With this hypothesis the crawler will reach the depths of the Internet web and arrive back at its original location. This is because, from the tangram concept mentioned above, each subsequent triangle formed between the square edges and the circle (the square's corners touching the circle) will form the circumference of the circle. And just like the game of snakes and ladders, a user can navigate up the tallest ladder in the life-cycle web to reach his or her destination. The space-time dimension comes into play here when giving attributes to each node in the life-cycle using "INFO". We will be using the concept of place value: the talent trading numeric value can change if the place value of its digits changes, thereby changing the location of the node within a particular life-cycle. Think of this as an elevator taking the node to another floor in a building (the building being the Internet), and further taking the node to another dimension. Other derived properties will also be added to the periodic table of nodes (URLs/webpages).

Let us now look at the steps we will follow in order to deploy this search engine. The first step will be to create a directory services website similar to Yahoo. We will use colored themes similar to the Nokia Lumia line; these color themes will distinguish the types of results in the directory (business, restaurants, etc.). We will use an xx|yy framework for the directory website: the xx part will be a snippet and the yy part will be particulars such as the phone number and timings of the business. We can consider adding more particulars, such as profiles and YouTube links, as long as performance is not affected. The snippet will allow the website owner to write a text pertaining to his website, instead of text from the website being used below the link as in today's search engines. If we get users, we will move to the next phase: we will buy Rocketmail from Yahoo.
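To illustrate the nine "INFO" properties, here is a minimal Ruby sketch of a node carrying them, together with one possible way to derive a trading quotient from the time-related fields. Only the nine field names come from the write-up; the quotient formula and the sample values are my own assumptions.

```ruby
# Minimal sketch of the "INFO" properties attached to each node and one
# illustrative trading quotient derived from them.
require "time"

InfoNode = Struct.new(:name, :from, :to, :pause, :average,
                      :status, :profile, :task, :note) do
  # Illustrative trading quotient: time actually spent on the node
  # (total dwell minus pauses), scaled by its running average.
  def trading_quotient
    dwell = to - from - pause
    dwell * average
  end
end

node = InfoNode.new(
  "http://example.com/sofa-review",
  Time.parse("2024-01-01 10:00:00"),
  Time.parse("2024-01-01 10:05:00"),
  60,            # seconds paused
  0.8,           # running average relevance
  "active",
  "furniture",
  "compare sofas",
  "user-shared snippet"
)
puts node.trading_quotient   # => 192.0
```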
With this acquisition we aim to penetrate developing-nation markets, because we will pay every attention to detail by over-engineering the directory services website to include any possible business, from a street food stall to an entire market like Nehru Place and all its stores. Phase three will be a new version of Maps depicting each storey in a building and its layout, using new technology to make it faster and easier to use, because markets like Nehru Place have lots of businesses within buildings. Maps will include planning of daily trips and the quickest routes to reach a destination. The next phase will be to create a social network for families (a family network) to share content between family members. This is especially important in developing nations like India, where family and extended family are a special relationship and bond. The next phase will be the shopping guide: restaurants, and anything such as apparel, will be broken into types; for example, a new fashion giant, Kdich apparel search, will be formed. The next phase will be research, where we will create an OS for servers that uses minimum resources with maximum memory without going past the resource threshold. The next phase will be to create the search engine with the built-in talent trading index mentioned above. We will create a special (proprietary) database that will host the talent trading index. The last phase will be to create an OS for smartphones and dumb phones which will host the talent trading search engine using LTE speeds. Again, this setup will be launched first for developing nations. This marks the end of my search engine ideas.

Let's discuss a new feature in Kdich Guide Search called 'myQueue', also known as myQ. MyQ lets users save linked lists of results out of a plethora of life-cycles. These saved linked lists will be molded together with regular search results (Google-type results) to help users find what they are searching for. Users will also have the option to add searches in between life-cycle nodes to better confine the search results to exactly what they are looking for. Let the crawler catch the URLs and their contents in its web, and let the user believe he is working his magic through the search engine's web of results, while all this time the spider is creating a special place and fitting the crawled content with numeric values to extract what the user is searching for and beat the user at how he or she thinks. A user searches; the results trigger a special belief system which beats the user to the punch, limits what is not required, and creates a spark which ignites the results database to provide exceptional results and find a way to infinity. Once users start using kdich guide search, we will allow two or more users who are on each other's friends lists to share life-cycle search results with each other and give opinions by literally sharing a snippet of life-cycle nodes, which other users can add to their existing life-cycle search results in real time. Each node in the snippet can be from a different life-cycle, and the user can form their own life-cycle snippet. Each snippet will have a number sequence associated with it that will merge with the number sequence of the life-cycle it is placed in. Similar to puzzle pieces, each snippet will carry part of the digits, and the remaining part, which comes from the life-cycle's number, will merge with the snippet.
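Here is a minimal Ruby sketch of myQ under my own assumptions: a saved life-cycle is a simple linked list of result nodes, and the user can insert an extra search between an existing node and its successor, as described above. The class and method names are illustrative.

```ruby
# Minimal sketch of the myQ (myQueue) saved life-cycle as a linked list.
class LifeCycleNode
  attr_accessor :query, :url, :next_node
  def initialize(query, url)
    @query, @url = query, url
  end
end

class MyQueue
  attr_reader :head
  def initialize
    @head = nil
  end

  def append(query, url)
    node = LifeCycleNode.new(query, url)
    if @head.nil?
      @head = node
    else
      cur = @head
      cur = cur.next_node while cur.next_node
      cur.next_node = node
    end
    node
  end

  # Insert a new search between an existing node and its successor.
  def insert_after(existing, query, url)
    node = LifeCycleNode.new(query, url)
    node.next_node = existing.next_node
    existing.next_node = node
    node
  end

  def to_a
    out, cur = [], @head
    while cur
      out << "#{cur.query} -> #{cur.url}"
      cur = cur.next_node
    end
    out
  end
end

q = MyQueue.new
first = q.append("what is a stereo", "http://example.com/stereo-basics")
q.append("Sony stereo receiver", "http://example.com/sony-receiver")
q.insert_after(first, "types of stereos", "http://example.com/stereo-types")
puts q.to_a
```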
So if the snippet has the sequence 123… and the life-cycle node being merged with the snippet has the number 456789, then the number on the life-cycle node will make way for the number from the snippet to form 123789, and this will simultaneously affect the digits of the entire life-cycle, of all the life-cycles in the search engine database, and of their corresponding nodes. Depending on the length (number) of the digits that concatenate or round up or down, Kdich can alter the sequence numbers on the nodes through its algorithm. The important thing at this point is to utilize only the digits 1 and 0 within the numbers associated with each node. By shifting the digits 1 and 0 (whether adding them or using the 1s and 0s already present in the number associated with the node) within the number of the life-cycle, the number changes, similar to an on/off switch. We would use these 1s and 0s along with operators such as "/" and other special characters to alter the value of the number associated with each node, and at the same time provide a direction to the next node in the respective life-cycle that the previous node points to. During the testing phase of this search engine we would crawl the web, associate a number with each element of a website that is parsed, and then test, after creating and placing the websites within the global life-cycle talent trading index, how it ranks unambiguous results against each other and against the ambiguous results for the search terms entered by the user. At this point you may be wondering what happens if there is a permutation or combination of 0s and 1s together, either at the beginning of the number, towards the end, or in the middle. This would act as a multiplexer towards the next node in the linked list. At the same time we have to maintain actual numbers such as 40 or 900, or even numbers like 101100 or 1111100111. Then the key question is how we utilize both concepts, ordinary numbers and 1s and 0s, without compromising the integrity and scalability of the search engine. We assign or define for the digits (something called) a face value and a place value.

Users and advertisers can bid or bet on linked-list snippets (a portion of the linked list) that search engine users would follow, either as a whole (by parsing a set or collection of nodes, a.k.a. a node snippet) or individually; advertisers and users would still go through each individual node of the snippet. They can then decide at which node junction to place their ad (we have to decide where exactly the ad is going to be placed) and predict through statistics at which junction the search engine user will click on the ads. The next evolution is investing ads in futures options. In this scenario, futures options are what the next node could be in a linked list and how the ad(s) would affect the node's path in the linked list. We are now going to discuss the advertising strategy: how is Kdich going to make money? Our motto is to sell miles and tickets (the way airline companies earn through frequent-flyer miles from credit cards) rather than only selling tickets (PPC/PPA). Kdich would earn money through the parsing of nodes in the search engine as well as through PPC (selling tickets to view linked lists). There is much more, such as earning coupons while shopping through kdich.
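The worked example above can be captured in a few lines of Ruby. This is only a sketch of the digit merge itself, under the assumption that the snippet's known digits take the leading positions of the node's number; how the change then ripples through the other life-cycles is not modelled here.

```ruby
# Minimal sketch of merging a snippet's digits into a life-cycle node's
# number, following the worked example in the text: snippet "123" merged
# into node number "456789" yields "123789".
def merge_snippet_number(snippet_digits, node_number)
  node = node_number.to_s
  snippet = snippet_digits.to_s
  # The snippet's digits take the leading positions; the node keeps the rest.
  snippet + node[snippet.length..-1].to_s
end

puts merge_snippet_number("123", "456789")   # => "123789"
puts merge_snippet_number("10", "456789")    # => "106789" (illustrative)
```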
Kdich would use statistics to measure the gap in time it takes to get from the starting node to the destination node. This gap in time is noted, computed and preserved as a unit of measurement, to be used to provide the end user with exact results. We are still working on the design and the overall user experience. I would reiterate that most users know what they are searching for. Take an example such as how to manage contractors on GitHub. If a user knows only about GitHub, he or she will only search with respect to GitHub. However, there may be many options for platforms that manage contractors which are unrelated to GitHub's structure and platform. A user on Google could type "GitHub vs …" and wait for Google to fill in the rest in order to find options other than GitHub. But do you see the irony here? Google would only fill in the blank with platforms related to GitHub. There could be other options, such as WiFi-encryption-enabled software with internet login. There would be issues with this type of search engine. The biggest issue is if multiple users follow the same linked list, have the same Eureka moment, and create or arrive at the same conclusion and hence the same idea. The results of each node on a linked list are limited by the number of digits associated with each node. In the future, once this search engine is built, we can place futures options (shares to buy) on a node depending on the growth of each node. We would have to look into more detail as to how shares of a node would affect the node's "growth".

What is the mathematics behind the algorithm used to allow the user to traverse from one node to the next in our global linked lists? We would use mathematical proofs: proof algorithms that match and give proof to the digits in the sets of numbers associated with each node on a real-time basis (as the numbers change in real time). As digits are only 0–9, algorithms can be used to compute the proof of the digit combinations (different permutations and combinations) for each node. Let us now discuss how we are going to create our crawler. Contrary to its name, this crawler would not crawl the web. Instead the crawler would use the "gap in time" concept. This is our patented algorithm to literally reach the deep web. At any given time, signals are being transmitted between all nodes on the Internet. Kdich would send a signal such that all other signals on the Internet would follow this one signal. Kdich would then home in on the signals that follow and retrieve information by crawling. The Kdich crawler would set up what is known as a toll for each signal transmitted on the Internet by leading the "follower" signals to the deep web. This is achieved through an A* pathfinding algorithm. Upon reaching the deep web, kdich's crawlers would begin to crawl the Internet. Think of this search engine as a protein-folding problem and its solution. We will now talk about WOM (word of mouth) and how this slogan is going to revolutionize Kdich. Imagine each search result containing all social and non-social feeds, in real time, next to the search result. The talent trading ranking would be implemented here to provide a rank for the feeds and their respective search results. We will now mention the working of the talent trading ranking with linked-list nodes. We use the concept of sound waves with 0s and 1s. Consider tapping your fingernails on a surface.
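As a minimal sketch of the gap-in-time measurement, the Ruby below records when a user enters a life-cycle at the starting node and when they reach the destination node, and keeps a running average of the gaps. The storage layout and the choice of a simple mean are my own illustrative assumptions.

```ruby
# Minimal sketch of the "gap in time" measurement between starting node
# and destination node.
class GapInTime
  def initialize
    @starts = {}   # user_id => start timestamp
    @gaps   = []   # completed gaps, in seconds
  end

  def start(user_id, at: Time.now)
    @starts[user_id] = at
  end

  def reached_destination(user_id, at: Time.now)
    started = @starts.delete(user_id)
    return unless started
    @gaps << (at - started)
  end

  def average_gap
    return 0.0 if @gaps.empty?
    @gaps.sum / @gaps.size
  end
end

gap = GapInTime.new
gap.start("user-1", at: Time.at(0))
gap.reached_destination("user-1", at: Time.at(95))
puts gap.average_gap   # => 95.0
```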
To a third person listening to the taps, the tapping would not make sense as any particular piece of music. Only the person who is thinking of the music while tapping would be able to sync the tapping with the music. There could also be multiple pieces of music that fit the same tapping. Hence we would have to narrow down, by eliminating number sequences, which tapping relates to which music, and this is how our linked-list nodes would operate. We will now discuss the ads and how they would appear on kdich. Ads would be viewed on kdich the way property ads and regular ads appear in the classifieds section of a newspaper. The Kdich crawler would clone the structure of the Internet with each passing node (that is, with each node parsed, the kdich crawler would map out the structure of the Internet). We are going to begin Kdich by using the existing Ruby code from the Saush search engine, https://github.com/sausheong/saushengine and https://github.com/sausheong/saushengine.v1, and then build on from there to create the front end, applying the formula for 'n' nodes at the bottom of page 6 of this write-up. The biggest advantage of Kdich is the capability of collaboration through different mediums under the Kdich platform. At each search result node a user can be added to a session, similar to Google Hangouts. In case it is not mentioned earlier in this write-up, Kdich would be utilizing abstract advertisements. These ads may not have a direct relation with the query term searched, but would have an indirect relation. An example of how to search on Kdich: suppose you want a toothbrush that has a picture of you printed on it. In Google you could search "toothbrush with picture of yourself" and you may not get the results you desire. With Kdich, every linked list that has the nodes toothbrush and picture would display, and you might see an ad such as "get your picture on your toothbrush", which you might not see among Google's ads.

We mentioned how the numbers associated with each node would affect the position of the node. If we dig deeper into this we can see a relationship emerge, using the concept of "as the number tends to …". This would assist in determining the next node in the life-cycle. We would also be using the source code at https://github.com/dalibor/saushengine which came from this post: https://blog.saush.com/2009/03/17/write-an-internet-search-engine-with-200-lines-of-ruby-code/ We are going to start this search engine as a reverse-engineered idea search engine called Kdich. It begins as "Kdich Ideas". A user first searches for a query on the Kdich search engine and finds a search result. He can add an idea to the result in return for posting an ad to his webpage (promoting his product or service). As this process progresses, users can search for ideas, which lead to the webpages associated with the idea query. A high-valued idea would have a high talent trading index value. As ideas relate to each other and to other search results, a flow chart of search results is formed (a life-cycle), and results for ideas begin to appear in the form of a flow chart. Search results could also be opinions, and it is important that Kdich is able to accommodate these opinions in the search results. As search queries increase, Kdich would organize its search index accordingly. The aim of this search engine is for someone with no knowledge at all about programming to be able to create, say, an Internet web search engine in less than 24 hours.
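As a minimal sketch of the tapping analogy applied to linked-list nodes: treat a tap pattern as a string of 1s (tap) and 0s (rest), give each candidate life-cycle its own 0/1 sequence, and eliminate the candidates whose sequences do not contain the pattern. The substring-containment rule and the sample sequences are my own illustrative assumptions.

```ruby
# Minimal sketch of narrowing candidate life-cycles by eliminating
# number sequences that do not match a tap pattern.
def narrow_by_tapping(tap_pattern, candidates)
  candidates.select { |name, sequence| sequence.include?(tap_pattern) }
end

candidates = {
  "song-life-cycle-a"  => "1101001011",
  "song-life-cycle-b"  => "0000111100",
  "stereo-life-cycle"  => "1011010010"
}
p narrow_by_tapping("10010", candidates)
# => {"song-life-cycle-a"=>"1101001011", "stereo-life-cycle"=>"1011010010"}
```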
We are going to introduce a couple of new advertising programs and techniques, called shadowing and branding. In shadowing, a user who wants to learn Kdich's existing advertising program(s) can virtually sit next to a mentor or advanced user (follow in their footsteps) and learn how to advertise on Kdich's platform. In branding, a user can become an advertiser by putting their brand on an existing product and selling or promoting it. Think of Kdich as a "you are here" type of search engine, similar to how Google Maps feels. The next part of Kdich is how to bring the user experience to the end user, i.e. how we are going to present the results on the results page. First we need to know how the search engine is going to store the results. We are adding a component to the search index known as the results index. Consider the picture below and rotate it 90 degrees anticlockwise. We see four Venn-diagram circles touching each other but not overlapping in the usual manner. Each circle has a core and nodes surrounding it; four circles overlap a particular circle at any time, but the core is never overlapped. The trick to understanding the diagram is to consider the four circles not as overlapping a particular circle but as stacked above it, literally stacked above the circle(s). The core represents the dimension. The points surrounding the core are nodes, and the circles covering them (you can imagine them as overlapping, but in reality they are stacked above) carry the nodes that form the linked lists which can be depicted in the search results. Now, how are the results going to appear? They are going to appear the same as regular Google search results, with a mathematical matrix next to them showing the location of each result within the linked lists and dimensions. This would give the end user options on how to traverse from that point to the user's desired search result.
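Here is a minimal Ruby sketch of the results index as described: a circle with a core (its dimension), nodes around the core, up to four circles stacked above it, and a small location matrix of [dimension, linked-list id, position] to show next to a result. All of the field and method names are illustrative assumptions.

```ruby
# Minimal sketch of the results index: core = dimension, surrounding nodes,
# and up to four circles stacked above.
class ResultCircle
  attr_reader :dimension, :nodes, :stacked
  def initialize(dimension)
    @dimension = dimension    # the core
    @nodes = []               # [linked_list_id, position, url]
    @stacked = []             # circles stacked above this one (max 4)
  end

  def add_node(linked_list_id, position, url)
    @nodes << [linked_list_id, position, url]
  end

  def stack(circle)
    raise "only four circles stack above a core" if @stacked.size >= 4
    @stacked << circle
  end

  # Location matrix shown next to a search result.
  def location_matrix(url)
    @nodes.select { |_, _, u| u == url }
          .map { |list_id, pos, _| [@dimension, list_id, pos] }
  end
end

circle = ResultCircle.new(1)
circle.add_node("stereo-life-cycle", 3, "http://example.com/sony-receiver")
p circle.location_matrix("http://example.com/sony-receiver")   # => [[1, "stereo-life-cycle", 3]]
```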
Eventually, users of Kdich would be able to pay per future option, where users bid on "futures": search keywords that will be entered by users in the future, based on the life-cycle trend(s). These trends would be provided statistically, and white papers would be available on Kdich showcasing these trends in real time. We are adding a feature, Kdich Shopping, where shopping can be made easy by selecting options and trims to perfect one's style. We can use the algorithm to count how many steps you take and the goal percentage (from the steps timer that you set for how many steps you want to reach). Remember, the aim of this search engine is to solve and provide accurate results for complex queries. The more complex the query the user searches, the narrower the life-cycle chain result the user reaches. In summary, there is a gap in the market: feel the difference between what people wanted and what they are getting.

Take a look at the above picture. What if our perception of the Internet structure is incorrect? What if, based on the picture, each small circle within the big circles represents one particular Internet structure that is now considered a dead structure with dead links? Imagine how these circles came to be: each tiny circle was once an Internet structure like the one we know today. Let's assume that within each tiny circle existed an Internet structure with nodes which all became dead nodes, which eventually led that entire Internet structure to become a dead Internet structure. This led to the expansion of more Internet structures. Now the problem, which is why we are here, is this: if a search engine parses a link from a node which goes dead (how a node goes dead we will review later), then the link becomes a dead link, because either the node is dead or the user switched servers; so how does the crawler find the link? The crawler would have to work another route to reach that link. If an entire Internet structure can go dead (become a dead Internet node-structure link) and a crawler is still able to locate the URL, then the URL is residing on another Internet structure. We don't know how these Internet structures are connected, or, when they become dead node links, how we extract information from them. We are going to devise an algorithm to ensure that these links and nodes of each Internet structure don't become dead. We are going to use Archimedes' principle. Consider the smallest closed two-dimensional figure, a triangle; once we solve this issue in 2D we can solve it for the nth dimension tending to infinity. Consider each node (URL) a three-sided triangle. It starts out as a triangle before becoming an n-sided polygon tending to infinitely many sides and eventually a circle. How does a node become an n-sided polygon? Every node would contain the URLs of every link on the Internet. Every time a URL is forwarded to the next node in the linked list, a side is added to the triangle; imagine it as folding a corner of the polygon. Once all the URLs are utilized, the polygon tends to infinity but does not become a circle yet. It would still contain the routing table protocol of all the other nodes/URLs on the Internet. Once that routing table information is forwarded as well, the node becomes a circle, or in other words a dead node. The dead node can only route information in one direction, i.e. back to its sender, signaling to the sender that the receiver node is a dead node and that the sender should find another route.
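A minimal Ruby sketch of that polygon life of a node, under my own assumptions about method names and thresholds: the node starts as a triangle, gains a side whenever it forwards a URL, and is marked dead once all known URLs have been forwarded and the routing table itself is passed on; after that it can only answer its sender.

```ruby
# Minimal sketch of a node growing from triangle to dead node.
class PolygonNode
  attr_reader :sides, :dead

  def initialize(known_urls)
    @known_urls = known_urls.dup
    @sides = 3          # every node starts as a triangle
    @dead = false
  end

  def forward_url(url, to_node)
    return :dead_node if @dead
    @known_urls.delete(url)
    @sides += 1         # folding one more corner of the polygon
    to_node
  end

  # Once all URLs are used up, forwarding the routing table closes the circle.
  def forward_routing_table
    return :dead_node if @dead
    @dead = true if @known_urls.empty?
    @dead ? :now_dead : :still_alive
  end
end

node = PolygonNode.new(["http://example.com/a", "http://example.com/b"])
node.forward_url("http://example.com/a", "neighbour-1")
node.forward_url("http://example.com/b", "neighbour-2")
puts node.sides                                                # => 5
puts node.forward_routing_table                                # => now_dead
puts node.forward_url("http://example.com/a", "neighbour-1")   # => dead_node
```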
Routing table information would automatically be updated at all nodes throughout the Internet structure. We know that an N-dimensional representation of the above picture is possible. As new Internet structures form, new dimensions form to try to connect these Internet structures. Once the new dimensions begin forming, the circle (dead) nodes transition between representing a single node within an Internet structure and relaying information to the remaining nodes, behaving as a node does within a single Internet structure. We need to find a way that would allow a search result to yield the nodes from all Internet structures. This can only happen when connections to all nodes from all Internet structures are established; think of this as a handshake, in networking terms. Once the handshake is established, the life-cycle search results can acquire the nodes' properties and display the search results to the user. We are also incorporating the concept of deal or no deal in this search engine. Basically, you do something for me and I do something for you, indirectly. Allow me to explain: I want a property but have no money. I ask user A to help me sell my idea (assets), and with that I buy the property that user A is selling. We apply the same sort of "exchange with money" concept to all searches. The algorithm we create for this type of exchange would further enhance the search engine's search, ranking, indexing and showcasing capabilities (when providing the results on the results page). You do something for me; I give you this. I do something for you; you give me this. You need something done from me and I need something done from you.
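To illustrate the deal-or-no-deal exchange, here is a minimal Ruby sketch that pairs up two listings whose offers and needs close a loop, like the property example above. The Offer fields and the two-party matching rule are my own illustrative assumptions; a longer chain of users would need a proper cycle search.

```ruby
# Minimal sketch of matching two listings into a deal-or-no-deal exchange.
Offer = Struct.new(:user, :offers, :needs)

def find_deal(listings)
  listings.combination(2).find do |a, b|
    a.needs == b.offers && b.needs == a.offers
  end
end

listings = [
  Offer.new("me",      "my idea",    "a property"),
  Offer.new("user A",  "a property", "my idea"),
  Offer.new("user B",  "a sofa",     "a stereo")
]

deal = find_deal(listings)
puts deal ? "deal: #{deal[0].user} <-> #{deal[1].user}" : "no deal"
# => deal: me <-> user A
```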