Renault’s EZ-GO envisions walk-on, walk-off urban EV mobility

Renault’s EZ-GO is less a concept car than a concept for an entire transportation service. The vehicle revealed at the Geneva Motor Show this week is a fully autonomous electric car that can’t travel fast, but it does fit as many as six passengers, who board through a big, almost garage-like door that opens up to load people and goods easily.

The EZ-GO is designed for urban transportation, getting people around smoothly while managing the driving itself. The concept car is designed to navigate city spaces more effectively with full four-wheel steering, and it’s intended to work on a shared service platform through which you hail it to your location. You can also pick one up at designated stations, so it’s a bit like a cross between an Uber and a bus.

Passengers can actually stand up inside the cabin and walk out at full height thanks to the high-rise door, and there are windows all around giving you a pretty unencumbered view inside the car. It’s designed as a shared use asset, so the idea was to make it a community vehicle in all regards.

Basically, the EZ-GO is a vision of what a transportation system might look like in the future, when urban residents are looking to fill in the gaps between public transit, taxis, Ubers and personal vehicles. The hope is to price rides above public transit but well below private-hire vehicles.

Shared-use vehicles are definitely a priority for cities, but getting there, and getting users comfortable with the idea, will still take some work. Renault hopes to get this car, or some version that incorporates the basic principles of the concept shown today, onto roads by 2022.

SEO

I was just told that backlinks mean everything for Google. After a few minutes of searching, I found some interesting links that help answer that question and many others. Here are the links:

SEOquake – Chrome Web Store

Top 10 Must Have SEO Extensions for Google Chrome – SEO Tools – Moz 

World’s Best SEO Tools and Free Search Software | Moz 

seo tools – Google 

Free Link Building Tools from BuzzStream 

Backlink Tutorial: How to Get 50+ Quality Backlinks Every Month 

Help A Reporter

Ahrefs: Competitor Research Tools & SEO Backlink Checker

https://usp1-iis-10.hosted1.act.com/BLCremationSy/default.aspx

Programming Designer

Coder (programming):

Coding is simply writing instructions in a language that a computer understands. The end products are software, apps and websites, to name a few. Detail-oriented and meticulous, introverts can make excellent coders. There is high demand for freelance coders, and much of the work can be done from the comfort of your home.

Coding is a general term, and there are many different languages you can learn. For instance, JavaScript is a common programming language for website development, used alongside the markup language HTML. The upside of programming from home is that you can set your own hours; the mean hourly wage for programmers in the U.S. is $38.39, and the median annual pay for a programmer is $79,840.

There is an abundance of free learning resources online, such as Codecademy (which offers classes in 12 coding languages, including JavaScript and Python, as well as the markup languages HTML and CSS) and Udemy, where you can educate yourself. General Assembly offers one-shot classes and intensive six- to 12-week training sessions, online and in-class, for a cost ranging from $140 to $3,500.

One direction you can go is specializing in front-end development, or coding the part of a website that you can see and interact with, including fonts, drop-down menus, buttons and contact forms. This requires fluency in HTML, CSS and JavaScript, and front-end coders should also know frameworks such as AngularJS and React. Or try back-end development: basically everything you can’t see on a website. Java, Scala and Python are the primary languages of back-end development.
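
To make the front-end/back-end split a bit more concrete, here is a minimal, purely illustrative back-end sketch using only Python’s standard library; the /hello route and the greeting it returns are invented for the example:

```python
# A toy "back end": a tiny web server that returns JSON to whatever
# front-end code (the HTML/CSS/JavaScript running in the browser) requests it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hello":
            body = json.dumps({"greeting": "Hello from the back end"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Visit http://localhost:8000/hello in a browser to see the response.
    HTTPServer(("localhost", 8000), HelloHandler).serve_forever()
```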

If you’re at a loss for where to start, you can try coding languages that have staying power and are used in many applications. For instance, JavaScript is used in almost everything built on the web, including websites and video games. Every website uses HTML as a markup language, which controls how the website appears. Once you’re in that world, you’ll have a better idea of direction and where your skills lie.

To find work as a newbie coder, you can build your resume with freelance jobs before going out for full-time ones. Check out Upwork, PeoplePerHour and Freelancer for coding and programming jobs. Once you’ve established some experience and job references, you can search for full-time or contractual work on job sites such as FlexJobs, Glassdoor, Monster, ZipRecruiter, CareerBuilder and Indeed.

Graphic Designer:

For highly creative and visual introverts, freelance graphic design can be a great way to make a living. Graphic designers work with businesses and individuals creating logos, websites, stationery and marketing materials, to name a few. The work involves both understanding the principles of design and knowing how to use the software (such as Adobe Photoshop, Illustrator and InDesign) to execute the vision. Then there’s the client component: you have to communicate with the client and be able to grasp what the client wants. Graphic designers frequently work with advertising agencies, publishing companies, magazines, corporations, product manufacturers and individuals.

As a graphic designer, you don’t necessarily need a degree or certificate, unless you’re trying to work at a creative agency, where the company might require it. However, if you go freelance or work for a smaller company, your work is your calling card. You need a solid portfolio more than a fancy degree. You can build your portfolio by doing small freelance jobs.

The average graphic designer makes $48,256 annually, and you can get a feel for jobs in your area by looking on the usual job sites: FlexJobs, Glassdoor, Monster, ZipRecruiter, CareerBuilder and Indeed.

To learn more about starting a graphic design business from home, read Start Your Own Graphic Design Business by Entrepreneur Press and George Sheldon. It’s available on Amazon, eBooks.com and Barnes & Noble.

Curt Long

One interesting fellow who thinks of himself as a Programming Designer, Web Architect, and Solution Consultant providing business intelligence for his customers.

Printing Skin Tissue / Human organs from 3-D printers

An earlier article here covered 3-D printers, which use modified inkjet technology to create solid objects with extremely complex shapes. The printers use a variety of techniques to solidify arbitrary areas on the surface of a powdered substrate, which supports the object as it is built up layer by layer. Designers commonly use 3-D printers for prototyping things like consumer electronic products, ensuring that they will be manufacturable before expensive metal molds are created to enable mass production. I ran into an old acquaintance the day that article ran who had never heard of Interesting Thing of the Day, so I told him about the site. He asked me what that day’s topic was, and I happily described the 3-D printers. He said, “Oh yeah, I know about those. Did you know they’re also using them to ‘print’ human tissue?” Um…no, I had no idea. It turns out that the humble inkjet printer has quite a few tricks up its sleeve—including, incredibly, the capability of manufacturing living skin and other organs.

Cell Mates

Growing individual human cells is not especially difficult. Take a sample of healthy cells, provide them with the right nutrients and environment, and they will grow and multiply. When multiple tissue cells are placed in close proximity to each other, they have a tendency to fuse together. Because of this phenomenon, hospitals can “grow” new skin to be used as grafts for burn patients using the patient’s own skin cells. However, this technique does have significant limitations. In particular, the skin cannot be made very thick because there’s no way to get blood to deeper cells—the process grows a homogeneous sheet of skin without the essential network of blood vessels, not to mention pores and other minute structures.

But creating intricate solid structures layer by layer is easy for a 3-D printer. So researchers have adapted old inkjet printers to hold a suspension of human cells in one reservoir and a gel-like substrate in another. Each pass of the print head lays down a pattern of cells held in place by the gel; when the next layer is applied, the adjacent cells begin to fuse to the layer beneath. If, for example, each layer contains a circle of cells in the same location, the result will be a tube—in other words, a structure very much like a blood vessel. A printer could in fact hold different kinds of cells in an array of ink reservoirs (like those used by color printers), theoretically enabling the creation of entire organs.

It’s All Beginning to Gel

The gel that functions as the substrate for this type of tissue printing is itself quite interesting. As with the powdered material used in rapid prototyping, the gel must be removed after the rest of the structure has solidified. Called thermo-reversible gel, it has the unusual property of being solid above 32°C and turning into a liquid when cooled below 20°C. So after the cells have fused, the tissue is cooled and the liquefied gel simply drains away.

Although the most obvious application for such a technology is producing skin grafts that are more robust than what’s currently possible, one day much thicker organs could be printed—making the inkjet printer a veritable tool for manufacturing replacement human parts. But although early laboratory experiments have yielded impressive results, researchers caution that the technology is in its infancy—likely a decade or more away from even initial trials with real patients. One of the hurdles to be overcome is that cells take time to fuse together into tissue, but can only survive for a short period of time without nutrients and oxygen. So the thicker a printed organ is, the more difficult it will be to keep it alive and healthy until the gel can be removed and it can begin getting nourishment from blood (or a reasonable facsimile thereof). Furthermore, remember that the printers don’t actually create the cells; they only arrange them. All the cells must have been grown in advance, a process that can take weeks (and that cannot be done equally well with all types of cells). So don’t expect to show up at the emergency room and get a new pancreas printed while you wait. —Joe Kissell

Tagmemics / The linguistic theory of everything

When I was studying linguistics in graduate school, the question people asked me most often was, “So how many languages can you speak?” I’d roll my eyes and say, “One, almost.” I’d then try to explain that I usually get by pretty well in English, that I can order food in a French restaurant without embarrassing myself, and that I’ve picked up a smattering of phrases in half a dozen other languages—but that’s pretty much it (unless you want to count computer languages or ancient Greek and Hebrew, of which I know just enough to mistranslate an inscription here and there). Linguists, I would say, are not necessarily polyglots; the study of linguistics is not about learning a bunch of languages but rather about understanding the nature of language generally: how the brain creates and interprets it, how children learn it, how it functions in society, how to model it computationally, that sort of thing. (At this point listeners would generally nod, try valiantly to suppress a yawn, and change the subject.)

In the course of my studies, I came across a fringe linguistic theory that is, even by the most generous standards, far from being generally accepted, or even respected. The theory is known as tagmemics; its inventor and primary proponent, the late Dr. Kenneth L. Pike, was on my thesis committee. So I got to spend some quality time getting to know the man and his theory—which, though I argued forcefully against its shortcomings, is nevertheless quite interesting. It’s the one linguistic theory that ordinary, nonacademic human beings have a reasonable chance of comprehending without months of study.

Understanding How Language Works

What’s a linguistic theory, anyway, and why do we need one? Linguists studying the way language works can observe what people say or write, but they can’t tell what’s going on in someone’s mind. To oversimplify greatly, that’s what a linguistic theory tries to figure out—the mental processing behind language. The reason for doing this varies from one linguist to the next: some are searching for the origins of a particular language or evidence that language is basically hard-wired in the brain; others want to find easier ways to learn or teach languages, or improve computer speech recognition. But whatever the motivation, a linguistic theory—a model that describes how language is put together and predicts how new words and sentences will be formed—is an essential starting point.

In the 1930s, Pike began studying phonology—the rules that govern how sounds are combined into words. Some sounds are regarded as the same by native speakers of a given language, even though they are objectively, or phonetically, different. Linguists use the term phoneme to describe a sound that speakers intuitively regard as being unique and meaningful in a language: thus two sounds may be phonetically different but phonemically the same. For example, in English there’s a sound we call “schwa”—a sort of neutral vowel sound that substitutes for other vowel sounds when it’s in an unstressed syllable. We don’t need to use the schwa symbol when spelling words that include it; we know intuitively and automatically when another sound—like “a,” “e,” or “u”—should be shortened into a schwa. So because it is simply a variant that occurs in very well-defined situations, schwa would not be a phoneme in English. On the other hand, there’s no rule that could predict when “k” would be used versus “d,” so both the “k” sound and the “d” sound must be phonemes in English. Finding the phonemes—the meaningful units of sounds in a language—is a basic part of the analysis of any language.

Etic and Emic

What Pike wondered was whether there might be something analogous to the phoneme in grammar—that is, at the level of words. To take a fairly trivial example, consider a pair of synonyms, such as “aid” and “assist.” Pike would say that even though these two terms are objectively different, the fact that they can be used and understood in the same way in a given context makes them equivalent at the level of grammar. He used the terms “etic” (as in phonetic) and “emic” (as in phonemic) to describe objective and subjective units of meaning, respectively. Thus, in this example, “aid” and “assist” are etically different but emically the same. Pike originally called the minimal grammatical unit of (emic) meaning a grameme but later changed the term to tagmeme, which he felt was more generic.

A tagmeme is basically a composite of form and meaning, a “unit-in-context.” Where many other linguists only wanted to study the objective form of language (that is, its “etic” aspect), Pike felt that the interesting thing was how language actually functions for users in real life—its “emic” aspect. So the tagmeme, as Pike’s fundamental unit of language, is described in terms of four features (or “cells”)—slot (where the unit can appear), class (what type of unit it is), role (how the unit functions), and cohesion (how the unit relates to other units). Pike found that the very same structures that appeared on lower levels also appeared on higher levels—as sounds formed words, words formed sentences, and sentences formed discourse, Pike used tagmemes to describe these larger and larger units. And he began to think: if the etic/emic distinction applied to all levels of language, perhaps it was an even more basic, more general principle that could explain a great many other things too.

Beyond Grammar

In his monumental (and very, very heavy) book Language in Relation to a Unified Theory of the Structure of Human Behavior, Pike claimed that the same kinds of structures, rules, and procedures found in phonology apply not only to grammar and discourse, but in fact to all of human behavior. He analyzed events such as football games and church services using his tagmemic system, with the rather lofty goal of proving that all human behavior is basically linguistic.

Of course, the fact that a football game and a sentence can be described using the same methodology and structures, while very interesting, doesn’t really prove that a football game is linguistic. And most linguists seemed to feel that as a descriptive model, tagmemics was not as rigorous or objective as other models, so it didn’t lend itself well to serious scientific inquiry. But from Pike’s point of view, looking at language as an objective formal system was missing the point; behavior that is fundamentally subjective can only be understood and described meaningfully if the observer allows context to play a role at every level.

The Wide World of Tagmemics

Another of Pike’s main claims was that language is deeply hierarchical, in several simultaneous ways. Sounds and intonation form a phonological hierarchy; words and sentences form a grammatical hierarchy; and meanings—whatever a speaker is talking about—form a referential hierarchy. All three hierarchies interlock and operate at the same time, and of course, what could be said of the hierarchy of language could also be said of the hierarchy of all behavior. As the theory developed over the course of several decades, Pike expanded it even further to include insights from other fields. From quantum physics, for example, Pike borrowed the notion that any event can be seen from the perspective of particle (a static view of a unit), wave (a dynamic view), or field (a unit in relation to other units).

Pike applied tagmemics to rhetoric, poetry, science fiction, and philosophy, among other fields. Others have taken it further—I’ve even seen a document using tagmemics as a model for learning the programming language Perl. While it never did (and never will) meet the day-to-day needs of most linguists, tagmemics has managed to maintain a small but loyal following among researchers in a wide variety of disciplines. The key insights of tagmemics—that context is essential, behavior involves overlapping hierarchies, and viewpoint affects one’s analysis of data—turn out to be surprisingly effective for understanding many kinds of phenomena. This obscure linguistic theory is in fact a pretty good way of thinking about what it means to be human. —Joe Kissell

Tag Questions / You know what this is about, don’t you?

People who want to make fun of the Canadian dialect of English invariably start with one of its two most idiosyncratic features. The pronunciation of the diphthong “ou,” of course, is one of them—in words like out and about, Americans exaggerate both the gliding and rounding of the vowels so that it sounds like the “ow” in power, whereas the stereotypical Canadian pronunciation is closer to oat and a boat. I know lots of Canadians who protest this characterization, pointing out that Americans butcher the language much more egregiously. They may say, “Every dialect of English has its faults, eh?” This is the second oft-ridiculed peculiarity of Canadian English: turning a statement into a question by adding the word “eh” at the end, which means, approximately, “Isn’t that so?”

Needless to say, not all Canadians fit the stereotype—my wife, for example, rarely uses “eh,” just as I avoid most of the influences of Pittsburghese. Some of her family members from Saskatchewan, on the other hand, say “hey” instead of “eh,” and there are many other regional variations of English within Canada, just as there are within other English-speaking countries. But whether or not one uses “eh” (or “hey”), every English speaker knows dozens of ways to add a word or a phrase to the end of a statement so that it becomes a yes/no question. Questions formed in this way are called tag questions.

You’re It (Aren’t You?)

In one of my first graduate linguistics courses in grammar, we studied English tag questions at great length, because they nicely illustrate a simple, rule-based grammatical transformation. The simplest way to make a tag question in English is to repeat the verb, negate it, and then repeat the subject. For example, “He is smart” becomes “He is smart, isn’t he?” If the verb is already negative, you just make it positive. “It won’t rain” becomes “It won’t rain, will it?” In most cases, if a sentence doesn’t use a “be” verb, tag questions are created using a form of the verb “do”: “This scarf matches my hat, doesn’t it?” Depending on the verb and the context, there are numerous other variations, along with special exceptions to the rules. Every student in elementary school is taught that when speaking of yourself, you must use the awkward-sounding “Aren’t I?” to form a tag question unless you’re willing to phrase it as “Am I not?”
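
Because that basic transformation is so mechanical, it can even be sketched as a toy program. Here is a tiny, purely illustrative Python version that handles only pronoun subjects and a handful of auxiliary verbs, like the examples above; real English, as the rest of this article shows, needs far more rules and exceptions:

```python
# A toy tag-question builder: repeat the auxiliary verb, flip its polarity,
# and repeat the (pronoun) subject. Only a handful of auxiliaries are known.
NEGATE = {"is": "isn't", "are": "aren't", "will": "won't",
          "does": "doesn't", "do": "don't", "can": "can't"}
AFFIRM = {neg: pos for pos, neg in NEGATE.items()}

def tag_question(subject, aux, rest):
    flipped = NEGATE.get(aux) or AFFIRM.get(aux)
    if flipped is None:
        raise ValueError(f"don't know how to flip {aux!r}")
    return f"{subject} {aux} {rest}, {flipped} {subject.lower()}?"

print(tag_question("He", "is", "smart"))    # He is smart, isn't he?
print(tag_question("It", "won't", "rain"))  # It won't rain, will it?
```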

Beyond these basic kinds of tag questions, though, there are many other ways of achieving the same (or very similar) result. “Don’t you think?” is very common, as are “Right?” and “OK?” and sometimes even “Huh?” In certain parts of the U.S., Canada, and England, “Isn’t it?” is shortened to “innit?” and used as an all-purpose tag question, even where the verb doesn’t seem to match, as in “This shirt costs a lot of money, innit?” But if you think of “innit?” as short for “isn’t it so?” you have a nice parallel to the all-purpose French tag question, which also shows up in English. “This foie gras is splendid, n’est-ce pas?” And oddly enough, “yes” and “no” can often be used interchangeably to form tag questions. “We’re having fun, yes?” means about the same as “We’re having fun, no?”

Tag Questions Have Many Uses (Don’t They?)

What I have always found most interesting about tag questions is their many and varied uses. Ostensibly they are questions that seek agreement or disagreement with whatever the original statement was, but more often than not, they are used for reasons other than gathering information. In some regional dialects of English, tag questions occur quite frequently—every few sentences or so—with the net effect of softening the overall impact of the speaker’s statements. In other words, tag questions can make speech sound more polite or deferential by implicitly suggesting, “I could be wrong about this; what do you think?” This is also one of the functions of the Canadian “eh”—just as a Canadian will often say “Sorry!” if you step on his toes, frequent use of “eh” can serve the social purpose of limiting one’s impression of self-importance.

Very often, tag questions are used mainly as a tool to move conversations along, to involve other participants. A tag question can invite feedback—anything from a nod to a “Yeah, sure,” to a lengthy response. But with very few exceptions, tag questions that expect a response are looking for a positive response: an agreement with the speaker’s original statement before it became a question. They say: “I believe such-and-such. Do you agree?” So tag questions can exert a subtle (or sometimes not-so-subtle) pressure on the listener to respond positively, like a preacher who makes a string of bold statements followed by “Amen?” It would be unexpected, to say the least, for a member of the congregation to shout out “Not really!” So just as tag questions can be used as a means of expressing courtesy, they can also be misused as a way of controlling a conversation, inducing guilt, or expressing passive aggression. Hence the infamous “You’d never leave me, would you?” It’s not easy for someone to respond, “Oh, sure I would!”

Tag questions can be used to make accusations, especially when followed by an explicit demand for agreement: “And then you bludgeoned the victim with Volume XI of the Oxford English Dictionary, didn’t you? Admit it!” They’re also a perennial favorite among parents: “You didn’t finish your vegetables, did you?” or “You need a nap, don’t you?” In fact, tag questions are a veritable Swiss army knife of English constructions, with almost as many possible uses as expletives. Although tag question formation is usually taught to people learning English as a second language, the full range of uses and variations is rarely addressed, making them confusing for people who don’t grasp their underlying motivations.

You Don’t Get It, Eh?

Even English speakers don’t always understand the unwritten rules for tag questions. Several times I’ve heard Americans try unsuccessfully to imitate a Canadian by saying something like, “Isn’t it cold up here, eh?” And that’s simply wrong; a tag question only works if it modifies a statement (or, in some cases, a command: “Stay for dinner, eh?”).

Tag questions are fascinating, aren’t they? You enjoyed reading about them, didn’t you? You will keep reading Interesting Thing of the Day every day, won’t you? —Joe Kissell

The Tunnels of Moose Jaw / Underground legends

The first couple of times I visited Saskatchewan, where my wife’s family lives, it was winter. Temperatures hovered around –40°, making holiday shopping along the streets of downtown Saskatoon a challenge. Even bundled to the gills, we could barely stand to be outside for more than a few minutes. Morgen assured me that during the summer (or “mosquito season,” as it is affectionately known), the prairies of southern Saskatchewan took on an entirely different look and were quite hospitable to humans. But I was thinking, this is why they invented malls. Malls are good. Let’s go to the mall! We went to the mall.

Moosey in the Sky with Diamonds

I like to kid my wife about Saskatchewan: the monotonous flatness of the landscape, the dearth of trees, the nasty winter weather, the fact that the province’s slogan, “Land of Living Skies,” suggests there’s not much interesting about the land itself. Morgen, in turn, can kid me about western Pennsylvania (where I grew up), which has its own peculiarities. But even though Pennsylvania has no shortage of oddly named towns, Saskatchewan’s legendary town of Moose Jaw takes the cake. Although everyone in Canada has heard of Moose Jaw, it’s known more for its silly name than for any other characteristic. Which is a shame, because if you dig a little bit, you can find all sorts of interesting things in Moose Jaw.

Moose Jaw, located just west of the provincial capital of Regina in south-central Saskatchewan, most likely got its name from a Cree word meaning “warm breezes” via folk etymology—though there are several other theories too, including one that the river running through town was thought to be shaped like a moose’s jawbone. Warm breezes or not, Moose Jaw (like the rest of Saskatchewan) gets plenty cold in the winter. In the early 1900s, when the town was beginning to undergo significant growth, most of the larger buildings were heated by steam, with coal-powered boilers located in the basements. The engineers who kept the heating equipment running didn’t like having to go upstairs and outside in the cold repeatedly to move from building to building, so they arranged for the creation of a series of tunnels linking the basements to provide easier access. Over a number of years, the tunnels expanded and interconnected, becoming a large network.

Down and Out in Moose Jaw

Not long after the tunnels were built, a wave of Chinese immigrants arrived in Moose Jaw. Anti-Chinese sentiment at the time made it difficult for these immigrants to live and work in public view, yet business owners valued them as a source of cheap labor. So the tunnels were expanded and used as both living quarters and workplaces. Conditions were harsh in the tunnels and pay was poor, but the workers stayed because their options for earning money in the outside world were limited.

During Prohibition (1917–1924 in Saskatchewan and 1920–1933 in the U.S.), Moose Jaw became a hub for liquor distribution both domestically and across the border. Along with speakeasies, gambling and prostitution became big businesses in the town. The tunnels provided a conveniently obscure place for all these activities. According to several reports—though no conclusive evidence exists—Al Capone himself called Moose Jaw home for a short while, overseeing a profitable bootlegging operation in person. Because of the town’s connection to organized crime in the U.S.—and its physical link to Chicago via the Canadian Pacific Railway’s Soo Line, used heavily for transporting illicit alcohol—Moose Jaw became known as “Little Chicago.”

Over time, most of the tunnels fell into disuse; many were filled in or blocked off as new buildings were constructed. But a portion of the tunnel network that remains has been developed into an elaborate, theatrical tourist attraction. Guests can take either or both of two tours featuring both live actors and animatronic figures. One tour highlights the tunnels’ use by Chinese immigrants; the other tour focuses on the organized-crime angle. The tours attract more than 100,000 visitors per year—about three times the town’s population. The tunnels are now a source of civic pride, though they may never match the incredible drawing power of the town’s unusual name. —Joe Kissell

Straw Bale Houses / The power of banding together

Several years ago, the company I worked for had a big Halloween celebration. One of my coworkers decided that a group of us needed to dress up as the Three Little Pigs and the Big Bad Wolf. So she worked for days sewing costumes for all of us, and even brought in plastic pig noses for us to wear. For an authentic touch, she asked that we also decorate our desks with the building materials featured in the story. I got the short straw (so to speak) and ended up making a pathetic mess by scattering straw all around my desk, and the “pig” who used sticks didn’t fare much better. But our colleague with the brick “house” simply printed out a huge brick pattern on a large-format color printer and wrapped it around his desk. In life as in the story, his design was clearly the best.

It is difficult to set aside the bias that straw is an inappropriate building material, even knowing that wolves lack the lung capacity to blow down a straw house. And yet people have been building sturdy, comfortable houses out of straw bales for more than a century. This building technique has been, shall we say, a bit slow to catch on—and is not without its limitations. But using straw as a building material turns out to have some interesting merits.

If You Can’t Eat ’Em…

Straw is what’s left over when grains like wheat, barley, or rice are harvested—basically the hollow stalks. Unlike hay, which can be used to feed animals, straw is a nearly useless agricultural byproduct. Millions of tons of straw must be burned or otherwise disposed of each year. Inconveniently, it doesn’t even decompose rapidly. Automated baling machines, invented in the 1890s, compact straw into tightly compressed blocks, so that they will at least occupy as little space as possible. Faced with a surplus of straw bales, a lack of trees, and a cold winter approaching, some settler long ago decided to stack up the bales and use them as the walls of a house. This worked surprisingly well, and after years of refinement, straw bale construction is beginning to gain respect as a mainstream technique.

Walls made of straw bales are held together and reinforced with rebar (or sometimes, wooden or bamboo stakes). In some designs, walls made entirely of straw bales support a roof; in others, a conventional wooden frame is used as the load-bearing structure while the straw bales form the exterior shell. Straw will rot if exposed to moisture, so to keep it dry, both interior and exterior surfaces are sealed with plaster, stucco, or adobe. The net effect is that walls of a finished straw bale building look just like any other wall, only a bit thicker.

The Last Straw

One of the strongest arguments for using straw as a building material is that it saves lumber. Even if wood is readily available, straw is invariably much cheaper. It’s a rapidly renewable resource, and one that usually goes to waste. A wall made of straw bales also has dramatically higher insulating properties than a standard wooden wall, making buildings that use them very energy-efficient. Straw bale houses are also easier to build than wooden frames, even by people with little experience.

Studies performed by various universities and government organizations have shown that a properly constructed straw bale house, sealed well with plaster, is actually less susceptible to damage by fire than a wooden building. Although you wouldn’t build a high-rise condo out of straw bales, a carefully designed one-story building is also quite safe in an earthquake. Owners must be careful to keep all cracks sealed, though, because once infiltrated by moisture, bugs, or rodents, a straw bale wall will rapidly lose its integrity. Last but not least, straw bale buildings are extremely resistant to damage by wind. So much for the Big Bad Wolf. —Joe Kissell

Esperanto / Artificial language for the masses

Like many people, I endured four years of high-school French only to find that I lacked the ability to order a croissant in a Paris bakery without making a fool of myself. I eventually got the hang of basic conversation in French, but then found myself traveling to places where Spanish, German, or Italian (for example) were spoken, and having to start all over again with the basics (“Where’s the bathroom?” “How much does this cost?” “Where have you sent my luggage?”). As much as I enjoy and appreciate linguistic diversity, it can make travel, trade, and diplomacy challenging at times.

In some heavily multilingual areas of the world, most people learn a lingua franca—a regional trade language—in addition to their mother tongue. It stands to reason, then, that this notion could be expanded more broadly. But when someone proposes English or French, say, as a trade language, objections inevitably arise. These languages are notoriously difficult to learn, with strange spellings and lots of grammatical rules and exceptions. But more importantly, they’re loaded with historical and cultural baggage. If your country—not mentioning any names—has been a rival of English- or French-speaking nations, you will likely not jump at the chance to spend long years learning a language with such unpleasant associations. The only hope for a truly universal language would seem to be an artificial one—a language that is designed to be free from cultural biases and easy to learn. This was precisely the goal of Esperanto.

Hoping for a New Language

L. L. Zamenhof grew up in the late 1800s in Warsaw (part of Russia at that time). While still in high school he set out to design a universal artificial language that would facilitate communication within his linguistically diverse community. By the time he finished this side project ten years later, Zamenhof was a practicing ophthalmologist. In 1887, he published the first guide (in Russian) to the new language, which he called “Lingvo Internacia” (international language). Zamenhof wrote the textbook under the pseudonym “Esperanto,” meaning “a person who is hoping” in Lingvo Internacia. Fans of the language decided that “Esperanto” had a nicer ring to it, and they soon adopted it as the informal name of the language.

Esperanto was designed to be both easy to learn and culturally neutral. According to some sources, an English speaker can learn Esperanto up to five times faster than Spanish. For starters, Esperanto uses strictly phonetic spelling—a given letter always makes exactly the same sound. Second, the structure of Esperanto is very simple, with only sixteen basic grammatical rules that need to be learned—and no exceptions to the rules (such as irregular verbs). And third, Esperanto has a very small core vocabulary; new words are constructed by combining words and adding prefixes and suffixes. (Esperanto is thus an agglutinative language, for those who need to have such things spelled out…)

Something Old, Something New, Something Borrowed

The vocabulary of Esperanto will have a familiar ring to anyone who knows a European language, as roots were borrowed from French, German, and Spanish, among other languages. (A few examples: bona means “good”; porko means “pig”; filo means “son”; hundo means “dog.”) One could argue that this selection represents not so much cultural neutrality as Euro-neutrality, but this hasn’t prevented Esperanto from becoming popular in China and some other parts of Asia.

For all its merits, Esperanto has not reached the level of acceptance its creator foresaw more than a century ago. There may be as many as two million people who speak Esperanto with at least a moderate level of proficiency, but probably no more than a few hundred who learned Esperanto at home as their first language—and no known speakers (over the age of three or so) who speak only Esperanto. Ironically, the cultural neutrality that is touted as such a benefit of the language also serves to limit its growth, because languages tend to spread along with the cultures that gave rise to them. Alas, unless or until the number of Esperanto speakers reaches a larger critical mass, it will be of little value as a trade language, and without a clear value, it will be difficult to convince people to learn it. —Joe Kissell

Geodesic Domes / Building outside the box

Unless you’ve been living in a cave for the past half century, you have probably encountered a geodesic dome at one time or another. They can be found on playgrounds, at amusement parks, and in museums; and any number of homes and public buildings are constructed using some variation of this structure. Depending on your tastes and disposition, you may think geodesic domes look cool, endearingly retro, or woefully unfashionable. But you may not know the story (and the logic) behind this sometimes-controversial design.

Bucky-ing Trends

R. Buckminster Fuller was one of the most prolific thinkers and inventors of the 20th century. He wrote numerous books, received dozens of patents, and worked tirelessly for decades to solve some of the world’s most vexing problems using the tools of engineering and common sense. For all his innovations, Fuller was a very practical man, and like most engineers he saw a great beauty in elegantly logical solutions—even if they defied tradition, aesthetics, or conventional wisdom. So when a housing crisis arose in the years following World War II, he set out to find the simplest and most effective solution, no matter how unusual it might be.

Fuller loved geometry, and he was particularly impressed by the triangle, the most stable geometrical shape. Many of his building designs involve triangles, because they provide the greatest structural integrity. He also knew that the sphere was the most efficient three-dimensional shape, enclosing the largest possible volume with the smallest surface area—meaning a dome (a partial sphere) should be a logical shape for a building. But dome-shaped buildings are notoriously awkward to construct. Fuller’s innovation was a way to create a sphere (or partial sphere) out of triangles, providing the best of both worlds. He called this shape a geodesic dome, because the pattern of triangles forms an interlocking web of geodesics. A geodesic is the shortest path between two points. This is, of course, a line in two-dimensional geometry, but on the surface of a sphere, the shortest distance between two points is an arc defined by a great circle—a circle with the same diameter as the sphere (like the equator).
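
If the term “geodesic” is hard to picture, a short sketch may help. On a sphere described by latitude and longitude, the length of the great-circle arc between two points can be computed in a few lines of Python; the coordinates below are rough values for New York and London, used purely for illustration:

```python
# The shortest path (geodesic) between two points on a sphere follows a
# great circle; this computes the length of that arc.
from math import radians, sin, cos, acos

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance in the same units as `radius` (km for Earth)."""
    p1, p2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    return radius * acos(sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dlon))

print(round(great_circle_distance(40.7, -74.0, 51.5, -0.1)), "km")  # ~5570 km
```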

The Miracle Building

If all that geometry is too much to wrap your brain around, consider the main advantage Fuller cited in his 1954 patent application for the geodesic dome: this shape, because it is self-reinforcing, requires far less building material than any other design. Conventional buildings, according to Fuller, weigh about 50 pounds (22.7kg) for each square foot (0.09 sq meter) of floor space. A geodesic dome can weigh less than 1 pound (0.5kg) for each square foot of floor space. (One of Fuller’s original geodesic domes was a metal framework lined with a sheet of heavy, flexible plastic.) The upshot of this is that you can create buildings very inexpensively, and with a minimum of equipment and labor. Geodesic domes are also stronger than conventional buildings, highly resistant to earthquakes and wind, and more energy-efficient too. What’s not to like?

Well, that’s a circular question. The main problem with a dome-shaped building is that although it encloses a large volume of space, a lot of that space is not easily usable by humans. The slope of the walls means the floor space is effectively limited (more so, the taller you are), and most furniture, having been designed for flat walls and corners, doesn’t fit well. There’s also the fact that you need a fairly large lot for a dome of any reasonable height; in urban areas, such real estate may be hard to come by. And banks are generally hesitant to provide home loans for dome builders; they’re seen as a risky investment, because there’s no way to gauge their resale value.

All these issues in no way diminish my enthusiasm for Fuller’s design, because, as he did, I feel that logic and elegance count for a lot. Plus—let’s not beat around the sphere—I think geodesic domes look very impressive, and I imagine it would be interesting to live in a space without right angles. If fortune ever smiles upon me broadly enough that I can afford to build my own home, you can be certain a dome will find its way into the design somewhere. —Joe Kissell

Performative Verbs / Doing as you say

In a sociolinguistics class years ago, each of the students had to complete a major project on the topic of their choice, and the professor met with each of us to discuss what sorts of things we were thinking of researching. I described some areas of interest, and my professor said, “You should read J.L. Austin’s How to Do Things with Words. I think it’s exactly the kind of thing you’re talking about.” I read the book, and although it was not at all relevant to the project I had in mind, it was quite interesting. The entire book was a treatise on performative verbs, which is to say, verbs whose action is accomplished merely by saying them.

I Speak, Therefore I Act

Performatives sound a bit mystical at first, like a spell or incantation. But in fact such verbs are quite commonplace. If you’ve ever said, “I promise” or “I apologize,” you have performed those actions by the simple act of saying them. You’re not talking about doing these things or stating that you’re doing them; you’re actually doing them. The same is true when you say, “I bet,” “I invite,” “I request,” or “I protest,” for example. There are countless other examples, such as:

  • I now pronounce you husband and wife.
  • I’m warning you, don’t go in there.
  • I thank you for your kind attention.
  • You’re fired!
  • I must ask you to leave now.
  • I christen this ship The Daydream.
  • I claim this land in the name of the king of England.

Among Austin’s points in his discussion of performative verbs is that they look exactly like declarative statements, yet they aren’t. “I run this meeting” has the same grammatical form as “I adjourn this meeting,” but the first one is declarative while the second is performative. One of the consequences of this peculiarity is that unlike regular declaratives, performatives cannot be evaluated for truth or falseness. The sentence “You’re happy” can be true or false, but “You’re fired,” if uttered as a performative, is neither true nor false. And yet, there must be some way of evaluating the meaning of such a sentence. After all, if I walked up to a politician I didn’t like and said, “You’re fired,” my doing so would not in fact terminate that person’s employment, whereas it would if I said it to someone who worked for me. Austin used the terms felicitous and infelicitous to describe whether a performative utterance is effective—whether it works. If social conventions are followed and my intentions are sincere, a performative utterance will be felicitous. If I do not have the authority to use a certain verb performatively in a certain context—or if I’m joking, or acting, for example—the very same utterance will be infelicitous.

 

I Hereby Insult You

There are several other curious facts about performative verbs. For one thing, you can nearly always perform the action specified by a performative verb without actually using the verb. For instance, you can promise to do something by saying, “As surely as the sun rises each morning, I will repay you the cost of lunch.” Or you can apologize by saying, “I’m sorry.” Conversely, you can make a statement that sounds just like a performative, but is simply an ordinary declarative. For example, the expression “I apologize” could be a statement about what I habitually do. (“What happens when I step on someone’s toe?” “I apologize.”) In addition, some activities that seem like good candidates for performative verbs turn out not to fit the pattern. If you said, “I insult you,” that would not constitute an insult; saying “I swear at you” doesn’t mean you have done so. (On the other hand, “I swear to tell the truth” is a performative utterance.)

Ever since I read Austin’s book more than a decade ago, I’ve been more aware of the use of performative verbs, and more likely to use them myself. In some strange way, using words to perform actions feels both elegant and powerful. But don’t take my word for it—try it yourself. I insist. —Joe Kissell

Ice Hotels / In-refrigerator rooms

When I first heard about an “ice hotel,” I thought it must be a joke. I’ve heard of igloos, of course, but that’s not really the image that comes to mind when I think hotel. Sure, there was the Bad Guy’s ice lair in the James Bond film “Die Another Day,” but that’s just fantasy, right? The thought that someone might really construct an entire hotel out of ice, rent rooms, and then repeat the process each year was almost too wacky to believe. Believe it—not only does it happen, it has now become the trendiest way to spend a winter vacation.

They’ve Got It Down Cold

The first ice hotel was built in 1989 in a village called Jukkasjärvi in northern Lapland, Sweden. That first year it was a modest, 60-square-meter igloo; this year, the structure measures over 4,000 square meters and has 85 rooms. Construction begins each year in October, and the hotel is open for guests from December through April (weather permitting). By summer the hotel has melted, but plans are already underway for next year’s bigger, better ice structure.

Ice hotels are built, naturally, entirely out of frozen water in the form of ice blocks and hard-packed snow. In some cases, blocks of ice are sawed from a river; for other parts of the building snow is compressed into wooden forms to create building blocks. The guest rooms contain beds made of a block of ice and topped with a foam mattress. You sleep in high-tech mummy-style sleeping bags covered with animal pelts; although the air temperature in the room is below freezing, your body remains toasty warm. If nature calls in the middle of the night, you can head to an adjoining heated building with conventional facilities. Outhouses would not be much fun, as the exterior temperature frequently reaches –40°.

Put It on Ice

But a classy hotel is much more than a place to sleep, and at the prices of these rooms, you’d better get much more than a sleeping bag. Although the design changes from year to year, Sweden’s Icehotel invariably includes an ice bar for vodka-based drinks (beer would freeze); even the glasses and plates are made of ice. There’s also an ice chapel for “white” weddings, an ice cinema, an ice sauna (I have yet to figure that one out), ice art galleries, and even—I am not making this up—a replica of Shakespeare’s Globe Theatre built of ice. Most guests stay only one night in an ice room; ordinary heated hotel rooms are available nearby for longer stays. Even so, the hotel has a waiting list several years long.

Sweden’s Icehotel was the first, but imitators are appearing all across the Arctic Circle. In Kangerlussuaq, Greenland you can find the more modest Hotel Igloo Village, with six adjoining igloos (four of which serve as guest rooms). If you want the igloo experience in Greenland during the summer, you can also stay at the Hotel Arctic in the town of Ilulissat, where guests enjoy all the comforts of home in melt-proof aluminum igloos. For the past five years, Québec has had its own Ice Hotel, modeled on the original Swedish Icehotel and rivaling it in size and luxury.

In 2004, the United States saw its first ice hotel—the Aurora Ice Hotel at the Chena Hot Springs Resort in Fairbanks, Alaska. During its construction, state officials cited the hotel’s owner for fire code violations and did not permit the building to open until smoke detectors and fire extinguishers had been installed in each room. (I’m not kidding. Only in America.) Although the initial structure melted in the spring of 2004, it was rebuilt for the 2005 season, this time inside a larger, refrigerated structure—with the goal of keeping it frozen and habitable year-round.

As far as I know, I’m not personally acquainted with anyone who has stayed at an ice hotel. I rather suspect—marketing hype and high prices notwithstanding—that it would be a decidedly uncomfortable experience. But then, many uncomfortable experiences are worth having, and it’s not every night you get to drink vodka out of an ice glass while watching the Northern Lights, and then sleep on a slab of ice. Sign me up! —Joe Kissell

Spoonerisms / Sixing up mounds

One of my linguistics professors in grad school had a strange sense of humor that appealed to me greatly. He didn’t see a need to divide work and pleasure; exams regularly contained jokes, puns, and strange juxtapositions, and every class session was filled with laughter. When this professor needed to make up a word in an imaginary language to use as an example, he wouldn’t give it a common meaning like “mother” or “tree”; he’d instead gloss the word as “flagpole sitter,” “hubcap thief,” or something similarly odd. He constantly urged us not to take our homework too seriously and to ask annoying questions of the other professors. I think this lighthearted attitude helped us all to learn better, and it certainly brightened the classroom atmosphere.

How Near This

Class discussion had a remarkable tendency to stray from the planned lesson, though invariably it went in interesting (and linguistically useful) directions. One day, someone in the class mentioned the word metathesis, which is the phenomenon that occurs when two adjacent sounds are swapped (as in “aks” for “ask”). Without missing a beat, the professor said, “Oh yes, this reminds me of spoonerisms,” and proceeded to recite, rapidly and perfectly, the tale of the Mion and the Louse. We were stunned and delighted by his brilliant display of linguistic prowess. It’s not easy to make mistakes like that on purpose.

A spoonerism is like metathesis but instead of affecting adjacent sounds within a single word, it’s spread out across two or more words (sometimes with intervening words)—for example “hat rack” becomes “rat hack”; “light a fire” becomes “fight a liar.” Some spoonerisms instead transpose vowel sounds (“I fool like a feel” instead of “I feel like a fool”). Because mistakes like this are involuntary slips of the tongue, they don’t always result in real words (you might say “key tup” for “tea cup,” for instance), but the funniest and most memorable spoonerisms change the meaning of a sentence completely (as in “I’m biting a rook” in place of “I’m writing a book.”)
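
The mechanics of the swap are simple enough to sketch in a few lines of Python. This toy version works on spelling rather than sound, so it only approximates the real, spoken phenomenon described above:

```python
# A toy spoonerizer: swap the initial consonant clusters (onsets) of two words.
import re

def spoonerize(word1, word2):
    def split_onset(word):
        onset = re.match(r"[^aeiouAEIOU]*", word).group()
        return onset, word[len(onset):]
    o1, r1 = split_onset(word1)
    o2, r2 = split_onset(word2)
    return f"{o2}{r1} {o1}{r2}"

print(spoonerize("hat", "rack"))    # rat hack
print(spoonerize("light", "fire"))  # fight lire (spelling, not sound)
```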

A Speecher Named Tuner

I have mentioned my hope that my name never gets distorted into an adjective or other part of speech. But if history remembers me for anything, I trust it will be for something more auspicious than a tendency to mix up my words, as was the case with the Reverend William Archibald Spooner, a member of New College, Oxford, from 1862 to 1924. Spooner was a small man and an albino. His head was disproportionately large, and he had poor eyesight. But he was kind, well-liked, and extremely intelligent—so much so that his mouth couldn’t keep up with his brain. He therefore developed a reputation for frequent verbal blunders.

Spooner himself was seldom aware of making these mistakes, and some people believe the quotes attributed to him were apocryphal. In any case, he is credited with such classics as “a blushing crow” (instead of “a crushing blow”), “you’ve tasted two worms” (instead of “you’ve wasted two terms”), and a toast to “our queer old Dean” (instead of “our dear old Queen”). He navigated the streets of Oxford on a well-boiled icicle, and reminded parishioners in one of his sermons that “the Lord is a shoving leopard.” By the time he was in his fifties, the term “spoonerism” had become a common noun, but as far as I can tell Spooner accepted this dubious distinction with gracious good humor. A legend in his own time, he lives on in our marts and hinds. —Joe Kissell

Origin of the Trophy Cup / Handing it to the winner

Having written several articles based on the theme “Throwing Down the Goblet,” I found myself wondering about trophies. Lots of major sporting competitions award the winning team a trophy in the shape of a cup (or, if you prefer, a bowl, chalice, or goblet)—the Stanley Cup, the America’s Cup, the World Cup, and so on. Trophy cups are also found quite often in collegiate sports, and Harry Potter fans will of course remember the House Cup as the highly coveted award for the house that has accumulated the most points during a given term. Often, though not always, tradition dictates that a single trophy cup be passed from one winning team to the next. In individual competitions, by contrast, trophy cups are much less common; designs are based more often on a human (or angelic) figure of some kind.

The Salad Fork of Victory

When you’re rooting for your team to win, say, the World Cup, it’s probably not especially important to you what the actual token of victory is shaped like. The important thing, most competitors and fans would agree, is simply to win—and to have some commemorative token. A cube or sphere or an inscribed toaster oven could just as easily serve this purpose, though without a doubt, larger, more elaborate, and costlier trophies give the winner something further to brag about. All I wanted to know was, why a cup? How did a cup, of all things, come to symbolize competitive victory?

The answer has been surprisingly difficult to track down; in fact, after several hours of research I can only advance a couple of plausible theories. For many centuries, a “trophy” was simply something of one’s enemy—a piece of armor, perhaps, or occasionally a body part—that was displayed after a battle as a tangible proof of triumph. This may, of course, have been a cup on occasion, but I have not been able to find any examples of cups designed for the sole purpose of serving as trophies (in particular, for sporting events) until the mid-18th century. This means the inspiration for such a design must have appeared earlier than that.

For Methodists Who Love to Win

One explanation traces the origin of the trophy cup to the “loving cup” designed by theologian John Wesley (1703–1781). Wesley founded the Methodist church, and part of the church’s early rituals included “love feasts”—community gatherings that included a simple meal of bread and water. Although superficially similar to Holy Communion, love feasts were simpler and were conducted by laypeople rather than clergy. Wesley’s loving cup was given two handles so that the water could be passed easily from person to person. The handles and the tradition of passing the cup fit with the trophy cup, although I have not been able to find any explicit evidence that a trophy designer used the loving cup as inspiration.

An alternative theory is advanced by Ian Pickford (of Antiques Roadshow fame). According to Pickford, the modern trophy cup was based on the two-handled “ox-eye” college cup design from the 17th century. Far be it from me to gainsay an antiques expert, but I was not able to corroborate this claim—and even if true, it raises the question of where that design came from or how it came to have its current meaning. If any trophy historians out there would like to chime in with evidence supporting either explanation (or a different one), I’d be all too happy to set the record straight. —Joe Kissell

Cleopatra’s Wager / The most expensive meal in history

A news article mentioned a hotel bar in New York whose drink menu includes a US$10,000 drink called “Martini on the Rock.” That works out to about $5 for the gin, vermouth, and olives—and $9,995 for the loose diamond sitting at the bottom of the glass. Patrons must order the drink three days in advance, and meet with a jeweler to pick out the perfect stone. The first person to order this drink paid a bit extra—$13,000—and instead of a loose stone, selected a 1.85-carat diamond engagement ring. (His girlfriend said yes.) Perhaps unknown to the hotel’s proprietors, this extravagant beverage has a fascinating historical precedent.

Et Tu, Cleo?

The year was 41 B.C. Mark Antony, one of the rulers of Rome, summoned Egyptian queen Cleopatra VII for an audience at Tarsus (in present-day Turkey). Antony ostensibly wanted Cleopatra to answer charges that she had aided Cassius, who had conspired with Brutus to assassinate Julius Caesar. But most people believe the real reason for the meeting was that Antony wanted Egyptian aid for an upcoming military campaign, and besides, he had the hots for Cleopatra.

Cleopatra arrived on her legendary barge, and proceeded to throw elaborate banquets for Antony and his officials for several evenings straight—nothing like a bit of wining and dining to smooth over political misunderstandings. So impressed was Antony at the lavish feasts Cleopatra had arranged that he accepted a friendly wager. Cleopatra bet Antony a large sum of money that she could host the most expensive meal in history. The next day, as the meal in question was nearing its end, Antony said that it had been terrific, but no more impressive than her other banquets—and certainly not worth the sum of money she had specified. At this, Cleopatra removed one of her pearl earrings and dropped it in a goblet of wine vinegar. Each of the pearls was so large and rare that it was extraordinarily valuable—estimates are usually expressed in extremely helpful terms such as “10,000,000 sesterces” or “100,000 gold aurei,” or “the value of 15 countries.” In any event, it was worth a fortune. The pearl dissolved in the vinegar, which Cleopatra then drank. Antony conceded defeat—the value of that single drink, let alone the banquet, had indeed been more than any meal in history.

I’ve Got a Crush on You

There are a number of different versions of this story, which originally appeared in Pliny the Elder’s Natural History. According to some versions, Cleopatra ground the pearl in a mortar before dropping it in the vinegar. That would have been a wise choice. Pearls, which are made primarily of calcium carbonate (the same material that forms stalactites, stalagmites, and tufa), will indeed dissolve in a mild acid such as vinegar, neutralizing the acid in the process. (This is how antacids work, by the way—check the label on a bottle of Tums and you’ll see that its main ingredient is also calcium carbonate.) However, this might take days for a whole pearl; a crushed pearl could dissolve in a matter of minutes.
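
For the chemically curious, here is a rough sketch of the reaction at work, whether in Cleopatra’s goblet or in your stomach. I am assuming acetic acid, the main acid in vinegar, as the acid in question:

CaCO₃ + 2 CH₃COOH → Ca(CH₃COO)₂ + H₂O + CO₂

In words: calcium carbonate plus acetic acid yields calcium acetate, water, and a bit of carbon dioxide gas. The acid is consumed along the way, which is exactly the neutralization that makes antacids work.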

Not only did Cleopatra win the wager, she won Mark Antony’s heart. Antony left his wife and moved to Alexandria. But ten years later, Octavius led Rome in a war against Egypt. He defeated Antony and Cleopatra, both of whom committed suicide shortly thereafter. Meanwhile, according to legend, the pearl from Cleopatra’s other earring was later cut in two, with each half being placed in one of the ears of the statue of Venus in Rome. Rome fell, of course, soon thereafter. Coincidence? Probably, but all the same, I recommend against sticking antacids in your ears. —Joe Kissell

The Invention of the Wheel / The best thing until sliced bread

On occasion, you may have heard it said of some wonderful gadget, “This is the greatest invention since sliced bread!” Such a comment is intended to be both a compliment and a reference to how revolutionary and world-changing the invention is. It’s worth bearing in mind, though, that while people have been slicing bread for eons, pre-sliced, packaged bread has only been available since 1928, when Otto Frederick Rohwedder’s mechanical bread slicer first went into commercial use in Chillicothe, Missouri. I don’t know what revolutionary invention the bread-slicer was compared to when it first appeared, but sooner or later, it all goes back to the wheel. Nobody seems to be able to come up with an older, or more important, invention than that.

Giving It a Spin

Before I began my curatorial duties here at Interesting Thing of the Day, I had never really wondered when the wheel was invented, much less why it was invented. That’s obvious, isn’t it? Everyone knows the wheel was invented to enable people to move stuff around more easily—a revolutionary alternative (so to speak) to carrying, pushing, or dragging heavy objects. Surprisingly enough, some historians and archeologists aren’t so sure about that. There is in fact a fairly good case for the hypothesis that the wheel was invented to facilitate pottery making.

The wheel was almost certainly invented in Mesopotamia—present-day Iraq. Estimates on when this may have occurred range from 5500 to 3000 B.C., with most guesses closer to a 4000 B.C. date. The oldest artifacts with drawings that depict wheeled carts date from about 3000 B.C., though for all anyone knows, the wheel was in use for centuries before these drawings were made. But there is also evidence from the very same period of time that wheels were used for pottery.

Drinking and Driving

It was around 3000 B.C. that the first goblets appeared. Clay goblets are normally made by throwing them on a wheel in two parts—first the bowl, then the stem (including the foot). This makes for a far smoother and more regular shape than could be achieved by manual coiling, and since the oldest surviving goblets bear the telltale signs of wheel manufacture, it is plausible that wheels were used for pottery before they were used for transportation. For that matter, it’s conceivable—though admittedly a wild and improbable speculation—that the wheel was invented for the express purpose of making goblets. Be that as it may, it is virtually certain that historically, the preferred way to make goblets was to throw them.

If the wheel was indeed invented for the convenience of potters, the question then becomes how it came to be used for transportation; clearly, whichever use appeared first, the other quickly followed. To be honest—putting myself as best I can into the sandals of someone living many thousands of years ago in Mesopotamia—I probably never would have thought to turn a pottery wheel on its edge and put it under a box (or vice-versa). But then, I’ve always had a knack for overlooking the obvious.

This story also has an interesting postscript. Some sources claim that prior to the invention of the pottery wheel, most pottery was primarily made by women, whereas afterward, it became a man’s job. Thus we can see that the stereotypical male trait of liking gadgets goes way back. I can just picture those prehistoric dudes standing around bragging about their new wheels and saying, “This is the greatest invention since fire!” —Joe Kissell

Sheets of ice found below Mars’ surface could be a boon for human exploration

If you look at a photo of Mars, you’ll mostly see red. The rust-colored world is known for its oxidized look, but if you dig down into the dirt, Mars gets a lot more interesting. The red planet is actually hiding pockets of water-ice up to about 100 meters thick just below its red surface, according to a new study published in the journal Science this week. The research found eight different pockets of ice of varying size not far below the planet’s surface. That ice could have implications for science, human exploration, and even long-term living on Mars.

“This ice is a critical target for science and exploration: it affects modern geomorphology, is expected to preserve a record of climate history, influences the planet’s habitability, and may be a potential resource for future exploration,” the study says.

While scientists have known that Mars is a pretty icy place for years, the new study helps confirm exactly where those ice sheets exist on the red planet.

When can we go?

Scientists and engineers have long thought that ice could be a boon for human exploration of the red world.

NASA and other organizations hoping to send people to Mars want to harvest as many resources as possible from the planet itself, in order to limit the amount of stuff they would need to send from Earth.

“There has been discussion by the Mars Exploration Program Analysis Group… and others in the community of using ice as a resource,” lead author and planetary scientist Colin Dundas said via email. “Our research may be useful information but it will be up to them to determine how to use it.”

If there is a relatively large cache of ice just under the Martian surface, as this study — which is based on data from the Mars Reconnaissance Orbiter (MRO) — suggests, it could help future explorers who want to use it for fuel or even just for drinking water.

“In many ways, water is the key resource: Humans need liquid water biologically, water can be processed to provide oxygen for breathing and hydrogen for energy generation and even rocket fuel. Water ice deposits may be that resource,” Richard Zurek, the chief Mars scientist at NASA’s Jet Propulsion Laboratory, said in an email.

“… The question is how much energy/work does it take to extract the water, to transport it to where the humans are and then to process it?”
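
The “processing” Zurek describes amounts to splitting water into its elements, most likely by electrolysis (that method is my assumption; neither the study nor Zurek specifies one):

2 H₂O → 2 H₂ + O₂

Two water molecules yield two molecules of hydrogen gas and one of oxygen, which maps directly onto the breathing, energy-generation, and rocket-fuel uses Zurek lists.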

That said, it may not be all that easy to access the ice found in the new study.

According to Zurek, who is an MRO project scientist but did not participate in the new study, the newly identified sites with water-ice are in the higher latitudes of Mars, meaning that sunlight and temperatures in those areas go through extreme swings throughout the year.

This could make it more difficult for a human explorer to extract those resources, Zurek said.

Ice sheets on Mars

But how did that water-ice get there in the first place?

The new study suggests that the ice built up over time, much the same way that Earth’s glaciers and ice sheets came to be.

Here’s how it works on Mars: When the planet is farther from the sun in its orbit and snow falls, that snow remains on the surface and gradually builds up into ice.

Over time, what first began as snow is “compacted into massive, fractured, and layered ice,” the study says. Some of that ice was then covered up by the movement of dirt on the surface of the planet, saving it from sublimating — turning straight from a solid into gas.

Aside from potentially aiding in human exploration of Mars, the newly-mapped ice sheets could also unlock secrets hidden in Mars’ past.

“We expect the vertical structure of Martian ice-rich deposits to preserve a record of ice deposition and past climate,” the study says.

Kolibree Magik toothbrush lets kids play with augmented reality while they brush

Augmented reality is slowly seeping into our everyday lives. It’s not just for snaps and video games anymore. Case in point: a new kids’ toothbrush with AR.

French company Kolibree announced at CES 2018 a smart toothbrush that uses AR to make brushing teeth fun for kids.

The device is paired with a motion-tracking app that uses your smartphone’s front-facing camera to put your kid right in the middle of the fun. It comes with a phone stand, so your kids don’t have to worry about holding up a phone while they brush.

Kids can choose from 15 different games, featuring pirates, princesses, monsters, and a whole cast of fantasy characters.

In one version, kids are tasked with shooting a monster who is spreading cavities across the land. As it runs around the screen, the child in turn moves the toothbrush around her mouth to shoot it.

The toothbrush also allows parents to monitor their children’s brushing habits. They’ll see how many times a day their kids are brushing, how fast, and for how long.

Oh, and you don’t need to stress about over-brushing: The app can only be used three times per day.

Another interesting feature: It teaches kids to brush. The Magik toothbrush app offers kids guidance on where to brush, how thoroughly to brush, and how long to stay in each spot.
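
Kolibree hasn’t said how any of this is modeled under the hood, but conceptually the parental stats and the three-a-day cap reduce to something like the following sketch. Every name in it is invented for illustration; it is not Kolibree’s actual API.

    from dataclasses import dataclass
    from datetime import date, datetime
    from typing import List

    MAX_SESSIONS_PER_DAY = 3  # the app's stated daily limit

    @dataclass
    class BrushingSession:
        """One brushing session, roughly what a parent sees in the dashboard."""
        started_at: datetime
        duration_seconds: int       # how long the child brushed
        strokes_per_minute: float   # a stand-in for the 'how fast' stat

    def sessions_on(log: List[BrushingSession], day: date) -> List[BrushingSession]:
        """All sessions recorded on a given calendar day (the 'how many times' stat)."""
        return [s for s in log if s.started_at.date() == day]

    def can_start_session(log: List[BrushingSession], day: date) -> bool:
        """Mirror the three-times-per-day cap described above."""
        return len(sessions_on(log, day)) < MAX_SESSIONS_PER_DAY

In a real product, the speed and thoroughness numbers would come from the brush’s motion sensors and the phone’s camera; here they are just placeholder fields.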

This is not Kolibree’s first go at a smart toothbrush. The company released Ara, a toothbrush that uses artificial intelligence to track oral health and encourage healthy brushing habits, in early 2017.

Magik will launch later this year for under $30. That’s significantly more than your average toothbrush, but you get the brush, the mobile game, the stand, and a much happier kid for that price.


Take these tiny arcade games with you wherever you go

Play your thumbs off and get the high score with mini Space Invaders, Pac-Man, and Ms. Pac-Man. Buy them here: http://fave.co/2CVkVMM

Cold snaps like the one that just gripped the U.S. are far more rare thanks to global warming

The first week of January was the coldest such week on record in most locations in the Eastern United States. It was so frigid that week, and the week preceding it, that sea ice formed around Cape Cod and Chesapeake Bay, sharks froze to death on Massachusetts beaches, and alligators went into a resting state while entombed in ice.

One might think that a cold snap like this one all but disproves global warming, or at least refutes the more dire scenarios about winter all but disappearing as the globe responds to sharp increases in greenhouse gases, such as carbon dioxide and methane.

However, the reality is far more complex, scientists say. In fact, it’s getting harder to pull off a cold outbreak of the severity and longevity of the late December and early January Arctic blast, according to a new analysis published on Thursday.

Data visualization showing very cold temperatures gripping large portions of North America on Jan. 1, 2018

Image: noaa/nnvl

The study, by the World Weather Attribution project, an international consortium of researchers that analyzes the role global warming may have played in extreme events, concludes that a cold outbreak like the one that just occurred is 15 times less likely to take place today due to global warming.

Scientists from Princeton University, the Royal Netherlands Meteorological Institute, University of Oxford, and Climate Central examined the two-week cold wave, between Dec. 26, 2017 and Jan. 8, 2018, over the northeastern U.S. and southeastern Canada. They found that this was a “relatively rare event” now that global warming has made such cold snaps less frequent and severe.

In fact, the attribution analysis, which has not yet been peer-reviewed, found that the effect of global warming on cold outbreaks like this one is to make them about 4 degrees Fahrenheit warmer than they otherwise would be.

Trend in the coldest two weeks of the winter as a multiple of the global mean temperature rise during the 1880 to 2017 time period.

Image: Berkeley Earth/ ERA-interim

For the study, scientists compared the temperatures during the cold wave to readings during the past 30 years, as well as the time series of the temperature of the coldest two weeks of the year, dating back to 1880. They found that there were many equally cold or colder two-week periods in this region in the past, but none have occurred since the winter of 1993-94.

“Cold waves like this occurred more frequently in the climate of a century ago and the temperature of two-week cold waves has increased throughout North America, which is consistent in a climate of global warming,” said Geert Jan van Oldenborgh, senior researcher at the Royal Netherlands Meteorological Institute (KNMI), in a press release.

Many records were broken during this cold snap, with the most frigid conditions found on the back side of the “bomb cyclone” that slammed the East Coast with snow and high winds on Jan. 3 through 5. New York City’s temperature remained at or below 32 degrees Fahrenheit for two weeks, one of the city’s five longest streaks of consecutive days at or below freezing on record. Chicago’s 12 consecutive days below 20 degrees Fahrenheit tied a record seen only twice before, in 1895 and 1936.

The study’s findings are likely cold comfort to the millions who just experienced bone-chilling conditions. But they are important, since they show how winter cold spells are changing as the climate warms.

Temperature of the coldest two-week period averaged over land points in 40º–50ºN, 65º–95ºW. The green line is a 10-year running mean.

Image: WWA

The scientists found that the temperature of the coldest two-week period has increased about twice as fast as the global mean temperature. In other words, the coldest periods are warming faster than the overall warming rate, making cold extremes less severe and rarer.

In fact, the researchers calculated that a cold wave like this occurred about once every 17 years at the beginning of the 20th century, but now can be expected to occur just once out of every 250 years.
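
Those two return periods are consistent with the “15 times less likely” figure quoted earlier; a quick back-of-the-envelope check (plain Python, with variable names of my own) shows the arithmetic:

    # Return periods quoted in the study: roughly once per 17 years circa 1900,
    # versus roughly once per 250 years in today's warmer climate.
    return_period_then = 17   # years
    return_period_now = 250   # years

    # The chance of such a cold wave in any given winter is one over the return period.
    p_then = 1 / return_period_then   # about 0.059
    p_now = 1 / return_period_now     # 0.004

    print(round(p_then / p_now))      # prints 15 -- roughly 15 times less likely today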

While some scientists contend that melting Arctic sea ice is causing colder air to leak southward into the midlatitudes during the winter, thereby intensifying winter weather in the U.S. and Europe, this study argues against that.

The researchers found that the weather pattern that caused the two-week cold period has not been occurring more frequently lately.

In any case, winter has only just started. While the odds of another cold snap of similar severity are long, they’re not zero. So keep that heavy coat and long underwear handy for a little longer.