AI and Google

The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

—Peter Drucker, 1993

Imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly question in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.1

Some say free software doesn’t work in theory, but it does work in practice. In truth, it “works” in proportion to the number of people who are working together and their collective efficiency. In early drafts of this book, I had positioned this chapter after the one explaining economic and legal issues around free software. However, I now believe it is important to discuss artificial intelligence separately and first, because AI is the holy grail of computing, and the reason we haven’t solved AI is that no free software codebase has gained critical mass. More than enough people are out there, but they are usually working in teams of one or two, or on proprietary codebases.


Deep Blue has been Deep-Sixed

Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.

—Alan Kay, computer scientist

The source code for IBM’s Deep Blue, the first chess machine to beat then-reigning World Champion Garry Kasparov, was built by a team of about five people. That code has been languishing in a vault at IBM ever since because it was not created under a license that would enable further use by anyone, even though IBM is not making money from the code or using it for anything.

The second best chess engine in the world, Deep Junior, is also not free, and is therefore being worked on by a very small team. If we have only small teams of people attacking AI, or writing code and then locking it away, we are not going to make progress any time soon towards truly smart software.

Today’s chess computers have no true AI in them; they simply play moves, and then use human-created analysis to measure the result. If you were to go tweak the computer’s value for how much a queen is worth compared to a pawn, the machine would start losing and wouldn’t even understand why. It comes off as intelligent only because it has very smart chess experts programming the computer precisely how to analyze moves, and to rate the relative importance of pieces and their locations, etc.
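To make that concrete, here is a minimal sketch of the kind of hand-tuned evaluation function such engines rely on. The piece values below are the traditional textbook ones, not Deep Blue’s actual (unpublished) parameters:

```python
# A minimal sketch of a hand-tuned chess evaluation function: all of the
# "intelligence" is human-authored numbers. The values are the traditional
# textbook ones, not Deep Blue's actual parameters.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(board):
    """Score a position from White's point of view: positive favors White.

    `board` is assumed to be a dict mapping square -> (color, piece) pairs.
    """
    score = 0
    for square, (color, piece) in board.items():
        value = PIECE_VALUES.get(piece, 0)  # the king is handled elsewhere
        score += value if color == "white" else -value
    return score

# The engine does not "understand" chess: set the queen's value to 1 and it
# will happily trade its queen for a pawn, losing without knowing why.
```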

Deep Blue could analyze two hundred million positions per second, compared to grandmasters, who can analyze only three positions per second. Who is to say where that code might be today if chess AI aficionados around the world had been hacking on it for the last 10 years?


DARPA Grand Challenge

Proprietary software developers have the advantages money provides; free software developers need to make advantages for each other. I hope some day we will have a large collection of free libraries that have no parallel available to proprietary software, providing useful modules to serve as building blocks in new free software, and adding up to a major advantage for further free software development. What does society need? It needs information that is truly available to its citizens—for example, programs that people can read, fix, adapt, and improve, not just operate. But what software owners typically deliver is a black box that we can’t study or change.

—Richard Stallman

The hardest computing challenges we face are man-made: language, roads, and spam. Take, for instance, robot-driven cars. We could build them without a vision system by modifying every road on the planet, adding driving rails or other guides, but it is much cheaper and safer to build software that lets cars travel on roads as they exist today — a chaotic mess.

At the annual American Association for the Advancement of Science (AAAS) conference in February 2007, the “consensus” among the scientists was that we will have driverless cars by 2030. This prediction is meaningless because those working on the problem are not working together, just as those working on the best chess software are not working together. Furthermore, as American cancer researcher Sidney Farber has said, “Any man who predicts a date for discovery is no longer a scientist.”

Today, Lexus has a car that can parallel park itself, but its vision system needs only a very vague idea of the obstacles around it to accomplish this task. The challenge of building a robot-driven car rests in creating a vision system that makes sense of painted lines, freeway signs, and the other obstacles on the road, including dirtbags not following “the rules”.

The Defense Advanced Research Projects Agency (DARPA), which, unlike Al Gore, really did invent the Internet, has sponsored several contests to build robot-driven vehicles:

Stanley, Stanford University’s winning entry for the 2005 challenge. It might not run over a Stop sign, but it wouldn’t know to stop.

Like the parallel parking scenario, the DARPA Grand Challenge of 2004 required only a simple vision system. Competing cars traveled over a mostly empty dirt road and were given a detailed series of map points. Even so, many of the cars didn’t finish, or didn’t perform confidently. There is an expression in engineering, “garbage in, garbage out”: if a car sees “poorly”, it is helpless.

What was disappointing about the first challenge was that an enormous amount of software was written to operate these vehicles, yet none of it (especially the vision system) has been released for others to review, comment on, improve, etc. I visited Stanford’s Stanley website and could find no link to the source code, or even information such as the programming language it was written in.

Some might wonder why people should work together in a contest, but if all the cars used rubber tires, Intel processors and the Linux kernel, would you say they were not competing? It is a race, with the fastest hardware and driving style winning in the end. By working together on some of the software, engineers can focus more on the hardware, which is the fun stuff.

The following is a description of the computer vision pipeline required to successfully operate a driverless car; a stubbed-out sketch of how the stages fit together follows the list. Whereas Stanley’s entire software team comprised only 12 part-time people, the vision software alone is a problem so complicated that building it will take an effort comparable in complexity to the Linux kernel:

Image acquisition: Converting sensor inputs from 2 or more cameras, radar, heat, etc. into a 3-dimensional image sequence

Pre-processing: Noise reduction, contrast enhancement

Feature extraction: Lines, edges, shapes, motion

Detection/Segmentation: Find portions of the images that need further analysis (highway signs)

High-level processing: Data verification, text recognition, object analysis and categorization

The 5 stages of an image recognition pipeline.
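Here is a stubbed-out sketch of how those five stages chain together. Every function below stands in for what would be an enormous subsystem in a real car; the names and data shapes are illustrative assumptions, not any team’s actual design:

```python
# A stubbed-out sketch of the five-stage vision pipeline described above.

def acquire(sensor_frames):
    # Stage 1: fuse cameras, radar, heat, etc. into a 3-D image sequence.
    return list(sensor_frames)

def preprocess(frames):
    # Stage 2: noise reduction and contrast enhancement (stubbed as a no-op).
    return frames

def extract_features(frames):
    # Stage 3: find lines, edges, shapes, and motion between frames.
    return [{"edges": [], "motion": None, "frame": f} for f in frames]

def detect(features):
    # Stage 4: segment out regions that need further analysis (e.g. signs).
    return [f for f in features if f["edges"]]

def interpret(regions):
    # Stage 5: data verification, text recognition, object categorization.
    return [{"kind": "unknown", "region": r} for r in regions]

def vision_pipeline(sensor_frames):
    # Each stage dramatically reduces the data volume handed to the next.
    return interpret(detect(extract_features(preprocess(acquire(sensor_frames)))))

print(vision_pipeline([]))  # [] -- the skeleton runs; the hard part is filling it in
```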

A lot of software needs to be written in support of such a system:

The vision pipeline is the hardest part of creating a robot-driven car, but even such diagnostic software is non-trivial.

In 2007, there was a new DARPA Urban Challenge. This is a sample of the information given to the contestants:

It is easier and safer to program a car to recognize a Stop sign than it is to point out the location of all of them.

Constructing a vision pipeline that can drive in an urban environment presents a much harder software problem. However, if you look at the vision requirements needed to solve the Urban Challenge, it is clear that recognizing shapes and motion is all that is required, and those are the same requirements that existed in the 2004 challenge! Yet even in the 2007 contest, there was no more sharing than in the previous one.

Once we develop the vision system, everything else is technically easy. Video games contain computer-controlled drivers that can race you while shooting and swearing at you. Their trick is that they already have detailed information about all of the objects in their simulated world.

After we’ve built a vision system, there are still many fun challenges to tackle: preparing for Congressional hearings to argue that these cars should have a speed limit controlled by the computer, or telling your car not to drive aggressively and spill your champagne, or testing and building confidence in such a system.2

Eventually, our roads will get smart. Once we have traffic information, we can have computers efficiently route vehicles around any congestion. A study found that traffic jams cost the average large city $1 billion a year.

No organization today, including Microsoft and Google, contains hundreds of computer vision experts. Do you think GM would be gutsy enough to fund a team of 100 vision experts even if they thought they could corner this market?

There are enough people worldwide working on the vision problem right now. If we could pool their efforts into one codebase, written in a modern programming language, we could have robot-driven cars in five years. It is not a matter of invention, it is a matter of engineering. Perhaps the world simply needs a Linus Torvalds of computer vision to step up and lead these efforts.


Software and the Singularity

Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045.3 The flaw with any such date estimate, beyond the fact that such estimates are always prone to extreme error, is that our software today has no learning capacity, because the idea of continuous learning is not yet a part of the foundation. Even the learning capabilities of an ant would be useful.

I believe the benefits inherent in the Singularity will happen as soon as our software becomes “smart”. I don’t believe we need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. Even if you could do that calculation in your head in one second, it would take you 30 years to do the billion your computer can do in that second.
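The arithmetic is easy to verify:

```python
# Checking the claim above: a billion one-second calculations, performed one
# per second around the clock, would take roughly 30 years.
seconds = 1_000_000_000
print(seconds / (60 * 60 * 24 * 365))  # ~31.7 years
```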

Even if you don’t think computers have the necessary hardware horsepower to be smart today, understand that in many scenarios, the size of the input is the driving factor in the processing power required. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only 10 bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, and those values are trivial to change. No one has shown robust vision recognition software running at any speed, on any size of image!

While a brain differs from a computer in that it works in parallel, such parallelization only makes it happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time, but at the rate of billions per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4 Once we build software as smart as an ant, we will build software as smart as a human the same day, because it is the same software.
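A quick back-of-the-envelope check of that claim:

```python
# A 1 GHz processor doing one operation per cycle (a simplifying assumption)
# can apply 1,000 different operations to each of a million data items every
# second, entirely sequentially.
ops_per_second = 1_000_000_000
data_items = 1_000_000
print(ops_per_second // data_items)  # 1000 operations per item per second
```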


Google

One of the problems faced by the monopoly, as its leadership now well understands, is that any community that it can buy is weaker than the community that we have built.

—Eben Moglen

In 1950, Alan Turing proposed a thought experiment as a definition of AI in which a computer’s responses (presumed to be textual) were so life-like that, after questioning, you could not tell whether they were made by a human or a computer. Right now the search experience is rather primitive, but eventually, your search engine’s response will be able to pass the Turing Test. Instead of simply doing glorified keyword matching, you could ask it to do things like: “Plot the population and GDP of the United States from 1900 – 2000.”5 Today, if you see such a chart, you know a human did a lot of work to make it. The creation of machines that can pass the Turing Test will make today’s challenge of outsourcing seem like small potatoes. Why outsource work to humans in other countries when computers nearby can do the task?

AI is a meaningless term in a sense because building a piece of software that will never lose at Tic-Tac-Toe is a version of AI, but it is a very primitive type of AI, entirely specified by a human and executed by a computer that is just following simple rules. Fortunately, the same primitive logic that can play Tic-Tac-Toe can be used to build arbitrarily “smart” software, like chess computers and robot-driven cars. We simply need to build systems with enough intelligence to fake it. This is known as “Weak AI”, as opposed to “Strong AI”, which is what we think about when we imagine robots that can pass the Turing Test, compose music, or get depressed. In Strong AI, you wouldn’t give this machine a software program to play chess, just the rules. The first application of Strong AI is Search; the pennies for web clicks can pay for the creation of intelligent computers.
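To illustrate how primitive such AI is, here is a complete Tic-Tac-Toe player that never loses, built from nothing but the human-specified minimax rule, with no learning anywhere:

```python
# A Tic-Tac-Toe player that never loses: "AI" that is entirely specified by a
# human (the minimax rule) and executed by a computer following simple rules.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for move in moves:
        board[move] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[move] = " "
        if best is None or (player == "X") == (score > best[0]):
            best = (score, move)
    return best

board = list(" " * 9)
print(minimax(board, "X"))  # (0, 0): perfect play from both sides is a draw
```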

The most important and interesting service on the Internet is search. Without an index, a database is useless — imagine a phone directory where the names were in random order. There is an enormous turf war taking place between Google, Yahoo!, and Microsoft for the search business. Google has 200,000 servers, which at 200 hits per second gives them the potential for three trillion transactions per day. Even with a quite typical quarter of a penny per ad impression, the potential revenue is huge. Right now, Google has 65% of the search business, with Yahoo! at 20% and Microsoft at 7%. Bill Gates has said that Microsoft is working merely to keep Google “honest”, which reveals his acceptance that, unlike Windows and Office, Microsoft’s search is not the leader. (Note that Microsoft’s online efforts have an inherent advantage over those who would also use Windows because they get unlimited software for free from themselves. Any other company which wanted to build services using Microsoft’s software would have higher costs.)

Furthermore, to supplant an incumbent, being 10% better is insufficient. It will take a major breakthrough by one of Google’s competitors to change the game. I use Google because I find its results good enough and because it keeps a search history, so that I can go back in time and retrieve past searches. If I started using a different search provider, I would lose this archive.

Part of the reason Google is so profitable is that it uses lots of free software, but very little of its own code is released to outsiders. Google’s source code is not freely available, and not for sale.6 In fact, Google is an extremely secretive and opaque company. Even in casual conversation at conferences, its engineers quickly retreat to statements about how everything is confidential. Ironically, a paper explaining PageRank, written in 1998 by Google co-founders Sergey Brin and Larry Page, says, “With Google, we have a strong goal to push more development and understanding into the academic realm.” It seems they have since had a change of heart.

Google has sufficient momentum and sophistication to stay ahead of its competitors. Here is a list of Google’s services:

Google is applying Metcalfe’s law to the web: Gmail is a good product, but being a part of the Google brand is half of its reason for success.

Even with all that Google is doing, search is its most important business, though with Android and its many other projects you wonder whether it is forgetting that. Google has tweaked its patented PageRank algorithm extensively and privately since it was first introduced in 1998, but the core logic remains intact: the most popular web pages that match your search are the ones pushed to the top.7

PageRank lets the wisdom in millions of web sites decide what is the most popular, and therefore the best search result — because the computer cannot make that decision today. PageRank is an excellent stopgap measure to the problem of returning relevant information, but the focus should be on putting richer information into the database.
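The published core of the idea is small enough to sketch. Here is a toy implementation of the PageRank iteration as described in the 1998 paper; Google’s production version, as noted, has been tweaked far beyond this:

```python
# A toy sketch of the PageRank idea from Brin and Page's 1998 paper: a page
# is important if important pages link to it. This is the published core
# only, not Google's tweaked production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """`links` maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c" ranks highest: both other pages link to it
```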

I believe software intelligence will get put into web spiders, those programs that crawl the Internet and process the pages. Right now, they mostly just index the location of words in a document, but eventually they will start to understand the pages, and build a database of knowledge rather than a database of words. Much of the rest is a parsing issue. (Some early search engines treated digits as words: searching for 1972 would find any reference to 1, 9, 7 or 2; this is clearly not a smart search algorithm.) The spiders that understand the information, because they have put it there, also become the librarians who take the search string you give them and compare it to their knowledge.8 You need a librarian to build a library, and a librarian needs the library she built to help you. Today, web spiders are not getting a lot of attention in the search industry. Wikipedia documents 37 web crawlers, and it appears that the major focus for them is on performance and discovering link spam.9
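To make the distinction concrete, here is a minimal sketch of what “indexing the location of words” means: an inverted index mapping each word to where it appears. Note that it matches keywords only; it has no idea that “pawn” and “pawns” are the same concept:

```python
# A minimal sketch of what today's spiders mostly build: an inverted index
# mapping each word to the documents (and positions) where it appears.
# "Understanding" a page would mean storing facts, not word locations.

from collections import defaultdict

index = defaultdict(list)  # word -> list of (url, position)

def spider(url, text):
    for position, word in enumerate(text.lower().split()):
        index[word].append((url, position))

def search(word):
    return [url for url, _ in index[word.lower()]]

spider("example.com/a", "The queen is worth nine pawns")
spider("example.com/b", "The pawn structure decides the endgame")
print(search("pawns"))  # ['example.com/a'] -- keyword match only
print(search("pawn"))   # ['example.com/b'] -- misses the plural entirely
```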

The case for why a free search engine is better is a difficult one to make, so I will start with a simpler example, Google’s blogging software.

Blogger

While Google has 65% of the billion-dollar search business, it has 10% or less of the blog business. There is an enormous number of blog sites, and the code running them is basically all the same. The technology involved in running Instapundit.com, one of the most influential current-events blogs, is little different from that running MySpace, the most popular diary and chatboard for Jay-Z-listening teenage girls.

Google purchased the proprietary blogging engine Blogger in 2003 for an undisclosed amount. Google doesn’t release how many users Blogger has because it considers that knowledge proprietary, but we do know that no community of hundreds of third-party developers is working to extend Blogger to make it better and more useful.

The most popular free blogging engine is WordPress, a core of only 40,000 lines (400 pages) of code. It has no formal organization behind it, yet we find that just like Wikipedia and the Linux kernel, WordPress is reliable, rich, and polished:

WordPress, the most popular free blogging engine

WordPress is supported by a community of developers, who have created plug-ins, written and translated documentation, and designed many themes to customize the look. Here are the categories of plug-ins available for WordPress:

Administration

Administration Tools

Advertisement

Anti-Spam

Comments

Meta (tagging)

Restrictions

Statistics

Syntax Highlighting

Syndication

Translation and Languages

Tweaking

Monetizing


Design, Layout and Styles

Archive

Calendar – Event

Navigation

Randomness

Styles

Widgets

Links

3rd-party services

Graphics, Video, and Sound

Audio

Images

Multimedia

Video

Odds and Ends

Financial

Forums

Geo

Miscellaneous

Mood

Time

Weather

Outside Information

Del.icio.us

Technorati

Posts

Audio Posts

Editing Posts

Formatting Posts

Miscellaneous Post Plugins

There are hundreds of add-ons for WordPress that demonstrate the health of the developer community and make it suitable for building even very complicated websites. This might look like a boring set of components, but if you broke apart MySpace or CNN’s website, you would find much of the same functionality.

Google acquired only six people when it purchased Pyra Labs, the original creators of Blogger, a number dwarfed by WordPress’s hundreds of contributors. As with any thriving ecosystem, the success of WordPress traces back to many different people tweaking, extending and improving shared code. Like everything else in the free software community, it is being built seemingly by accident.10

In addition to blogging software, I see other examples where Google could have worked more closely with the free software community with no threat to its business model. Recently I received many e-mails whose first words were “Your cr. rating doesn’t matter”, which I dutifully marked as spam. It took weeks before Gmail’s spam filter caught on. Spam is a very hard problem, and cooperating with others could help improve Google faster and lower its R&D costs. Some think that making spam filter software public will make it easier to create spam. But encryption algorithms are publicly documented, and the consensus is that this openness makes them more secure, because more people have looked at them and can all agree that the only way to decrypt a message is with the key. Likewise, all the popular spam filters use Bayesian-type analysis and learn for each user what words are likely spam, which makes the job of a spammer much harder. The point is that giving this code away doesn’t actually help the spammers, who can even run tests with Google’s accounts today to determine how the filter works.
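For illustration, here is a minimal sketch of that Bayesian-type analysis: count, per user, how often each word appears in spam versus legitimate mail, then score new messages by the log-odds. Real filters, presumably including Gmail’s, are far more elaborate:

```python
# A minimal sketch of Bayesian spam filtering: learn per-user word statistics
# from labeled mail, then score new messages. Publishing this logic does not
# help spammers; the learned statistics differ for every user.

import math
from collections import Counter

spam_words, ham_words = Counter(), Counter()
spam_total = ham_total = 0

def train(text, is_spam):
    global spam_total, ham_total
    words = text.lower().split()
    if is_spam:
        spam_words.update(words); spam_total += 1
    else:
        ham_words.update(words); ham_total += 1

def spam_score(text):
    """Log-odds that the message is spam; positive means likely spam."""
    score = math.log((spam_total + 1) / (ham_total + 1))
    for word in set(text.lower().split()):
        p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)
        p_ham = (ham_words[word] + 1) / (sum(ham_words.values()) + 2)
        score += math.log(p_spam / p_ham)
    return score

train("your cr. rating doesn't matter", is_spam=True)
train("meeting notes attached", is_spam=False)
print(spam_score("your rating doesn't matter"))  # positive: flagged as spam
```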

Search

Google tells us what words mean, what things look like, where to buy things, and who or what is most important to us. Google’s control over “results” constitutes an awesome ability to set the course of human knowledge.

—Greg Lastowka, Professor of Law, Rutgers University

And I, for one, welcome our new Insect Overlords.

—News Anchorman Kent Brockman, The Simpsons

While the case for why Google should have built Blogger as free software is an easy one to make, because Blogger isn’t strategic to Google’s business or profits, the search engine is a different question. Should Google have freed its search engine? I think a related, and more important, question is this: will it take the resources of the global software community to solve Strong AI and build intelligent search engines that pass the Turing Test?

Because search is an entire software platform, the best way to look at it is by examining its individual components. One of the most fundamental responsibilities of the Google web farm is to provide a distributed file system. The file system that manages the data blocks on one hard drive doesn’t know how to scale across machines to something the size of Google’s data. In fact, in the early days of Google, this was likely one of its biggest engineering efforts. There are today a number of free distributed file systems, but Google is not working with the free software community on this problem. One cannot imagine that a proprietary file system provides Google any meaningful competitive advantage; nevertheless, they have built one.
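To see why this is such a large engineering effort, here is a toy sketch of the core idea behind any distributed file system: split files into fixed-size chunks and deterministically place each chunk on several servers. This illustrates the concept only; Google’s actual design is unpublished, and the chunk size and replica count below are assumptions modeled on published descriptions of similar systems:

```python
# A toy sketch of a distributed file system's placement logic: files become
# fixed-size chunks, each stored on several servers so that losing one
# machine loses no data. Chunk size and replica count are assumptions.

import hashlib

SERVERS = ["server-%02d" % i for i in range(8)]
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks
REPLICAS = 3                   # keep three copies of every chunk

def chunk_locations(filename, filesize):
    """Map each chunk of a file to the servers that should store it."""
    layout = {}
    for offset in range(0, filesize, CHUNK_SIZE):
        chunk_id = hashlib.sha1(("%s:%d" % (filename, offset)).encode()).hexdigest()
        start = int(chunk_id, 16) % len(SERVERS)
        layout[chunk_id] = [SERVERS[(start + r) % len(SERVERS)] for r in range(REPLICAS)]
    return layout

# A 200 MB file becomes four chunks, each stored on three different servers.
for chunk, servers in chunk_locations("crawl.log", 200 * 1024 * 1024).items():
    print(chunk[:8], servers)
```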

Another nontrivial task for a search engine is the parsing of PDFs, DOCs, and various other types of files in order to pull out the text to index them. It appears that this is also proprietary code that Google has written.
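A minimal sketch of that parsing layer, dispatching on file type to pull out indexable text. The PDF branch assumes the third-party pypdf library; a production engine must handle dozens of formats and endless malformed files:

```python
# A minimal sketch of an indexing front-end: dispatch on file type and pull
# out plain text. The PDF branch assumes the third-party pypdf library
# (pip install pypdf); this is an illustration, not Google's actual code.

from pypdf import PdfReader  # assumed dependency

def extract_text(path):
    if path.lower().endswith(".pdf"):
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    if path.lower().endswith((".html", ".htm", ".txt")):
        with open(path, encoding="utf-8", errors="replace") as f:
            return f.read()  # real code would strip HTML tags here
    raise ValueError("unsupported format: " + path)

# The indexer then never sees formats, only words:
# words = extract_text("report.pdf").split()
```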

It is a lot easier to create a Google-scaled datacenter with all of its functionality using free software today than it was when Google was formed in 1998. Not only is Google not working with the free software community on the software they have created, they are actually the burdened first-movers. What you likely find running on a Google server is a base of Linux and other free software, upon which Google has created their custom, proprietary code. Google might think their proprietary software gives them an advantage, but it is mostly sucking up resources, and preventing them from leveraging advancements from outside developers.

And like Microsoft’s Windows NT kernel, even if Google were to release its infrastructure code, much of it would not be picked up, because the free software community has developed its own solutions. In fact, in late 2006, Google began to release tiny bits and pieces of its most boring software, but when I looked at the codebases, there didn’t appear to be many contributions from the outside — because the code isn’t nearly as interesting to the world as it would have been ten years earlier. Furthermore, as these codebases have lived inside Google for a long time, they probably have lots of dependencies on other Google technologies, which makes it hard for them to be isolated and used in the outside world.

What about the core of Google’s business, the code that takes your search request and attempts to make sense of it so that it can pass the Turing Test? Google has not even begun to solve this problem, and even many simpler problems, so it makes one wonder if it is something a single company can solve by itself.

There are two kinds of engineering challenges for Google:

Those necessary, non-strategic, and at best loosely correlated to their profits, like blogging, language translation, and spam detection, none of which Google is cooperating with the community on.

Then there is the daunting problem of building software with Strong AI, which Google had better be working on with the rest of the world. The idea of Google “owning” Strong AI is at least as scary as Microsoft owning Windows and Office. Google has publicly stated that Microsoft’s proprietary software model has been bad for the industry, but doesn’t recognize that it is trying to do exactly the same thing!

Google is one of the few new, large, and fast-growing software businesses in America, and few people are publicly arguing that the company give away the farm by sharing its core technology with the free software community; that step is especially scary because it is irreversible. However, software is not a datacenter or a relationship with customers and advertisers. Most of the users of Google’s code would not be in competition with Google, but would be taking it to new places Google hadn’t considered. Furthermore, Google would still have a significant first-mover advantage with the code it created.

In addition, if someone else eventually creates a free search engine that a worldwide community of researchers coalesces around, where will Google be then? Perhaps Microsoft could flank Google by building a free search engine that scientists and researchers around the world could tinker with. There is an interesting free codebase called Lucene, run by the Apache Foundation, which is steadily gaining use in enterprises that want to run their own search engine. It seems quite possible that this is the codebase and community that will threaten Google in five to ten years.


Conclusion

Comic from xkcd.com

There is reason for optimism about the scientific challenges we confront because the global community’s ability to solve problems is greater than the universe’s ability to create them. The truth waiting for us on the nature of matter, DNA, intelligence, etc. has been around for billions of years.

There are millions of computer scientists sitting around wondering why we haven’t yet solved the big problems in computer science. It should be no surprise that software is moving forward so slowly when there is such a lack of cooperation.

1 One website documents 60 pieces of source code that perform Fourier transforms, an important software building block. The situation is the same for neural networks, computer vision, and many other advanced technologies.

2 There are various privacy issues inherent in robot-driven cars. When computers know their location, it becomes easy to build a “black box” that would record all this information and even transmit it to the government. We need to make sure that machines owned by a human stay under his control, and do not become controlled by the government without a court order and a compelling burden of proof.

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. Therefore, the world will be amazing then.

This calculation is flawed for several reasons:

We will be swimming in computational capacity long before 2040. Today, my computer is typically running at 2% CPU when I am using it, and therefore has 50 times more computational capacity than I need. An intelligent agent twice as fast as the previous one is not necessarily more useful.

Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.

Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.

There is no amount of continuous learning built in to today’s software.

Each of these factors would tend to bring the Singularity date closer and support the argument that the benefits of the Singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, and this feedback loop makes 2045 a meaningless moment.
Who in the past fretted: “When will man build a device that is better at carrying things than me?” Computers will do anything we want, at any hour, on our command. A computer plays chess or music because we want it to. Robotic firemen will run into a burning building to save our pets. Computers have no purpose without us. We should worry about robots killing humans as much as we worry about someone stealing an Apache helicopter and killing humans today.

4 Most computers today contain a dual-core CPU, and processor makers promise that 10 and more cores are coming. Intel’s processors also have limited 4-way parallel processing capabilities known as MMX and SSE. Intel could add even more of this parallel processing support if applications put it to better use. Furthermore, graphics cards exist to do work in parallel, and this hardware could also be adapted to AI if it is not usable already.

5 Of course, there are some interesting complexities to the GDP aspect, like whether to plot the GDP in constant dollars and per person.

6 Although Google doesn’t give away or sell its source code, it does sell an appliance for those who want a search engine for the documents on an internal Intranet. This appliance is a black box and is, by definition, managed separately from the other hardware and software in a datacenter. It also doesn’t allow tight integration with internal applications. An example of a feature important to Intranets is having the search engine index all the documents I have access to. The Internet doesn’t really have this problem because basically everything there is public; applications are the only things that know who has access to which data. It isn’t clear that Google has attacked this problem, and because the appliance is not extensible, no one other than Google can fix it either. This feature is one reason why search engines should be exposed as part of an application.

7 Google’s enhancements include freshness, which gives priority to recently changed web pages. Google also tries to classify queries into categories like places and products. Another tweak is to avoid displaying too many results of one kind: it tries to mix in news articles, advertisements, a Wikipedia entry, etc. These enhancements are nice, but they are far from actually understanding what is in the articles, and they apply smarts to the search query rather than to the data gathered by the spiders.

8 One might worry: how can a spider that has read only a small portion of the Internet help you with the parts it has not seen? The truth is that the spiders all share a common memory.

9 Focusing on the spider side means continually adding new types of information into the database as the spider starts to understand things better. Let’s say you build a spider that can now understand dates and guess the publication date of a page. (This can be tricky when a web page contains biographical information and therefore many dates.) The spider will then start to tag all the web pages it reads in the future with this new information. All of this tagging happens when the data is fetched, so that is where the intelligence needs to go.

10 In fact, WordPress’s biggest problem is that the third-party development is too rich, even chaotic. There are hundreds of themes and plugins, many of which duplicate each other’s functionality. But grocery stores offer countless types of toothpaste, and this has not been an insurmountable problem for consumers. I talk more about this topic in a later chapter.

