Wednesday, September 30, 2015

Library of Babel

Awesome!

Thursday, September 24, 2015

How the brain encodes time and place | MIT News

From the article:

How the brain encodes time and place

Neuroscientists identify a brain circuit that is critical for forming episodic memories.

When you remember a particular experience, that memory has three critical elements — what, when, and where. MIT neuroscientists have now identified a brain circuit that processes the “when” and “where” components of memory.
This circuit, which connects the hippocampus and a region of the cortex known as entorhinal cortex, separates location and timing into two streams of information. The researchers also identified two populations of neurons in the entorhinal cortex that convey this information, dubbed “ocean cells” and “island cells.”
Previous models of memory had suggested that the hippocampus, a brain structure critical for memory formation, separates timing and context information. However, the new study shows that this information is split even before it reaches the hippocampus.
“It suggests that there is a dichotomy of function upstream of the hippocampus,” says Chen Sun, an MIT graduate student in brain and cognitive sciences and one of the lead authors of the paper, which appears in the Sept. 23 issue of Neuron. “There is one pathway that feeds temporal information into the hippocampus, and another that feeds contextual representations to the hippocampus.”
The paper’s other lead author is MIT postdoc Takashi Kitamura. The senior author is Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory. Other authors are Picower Institute technical assistant Jared Martin, Stanford University graduate student Lacey Kitch, and Mark Schnitzer, an associate professor of biology and applied physics at Stanford.
When and where
Located just outside the hippocampus, the entorhinal cortex relays sensory information from other cortical areas to the hippocampus, where memories are formed. Tonegawa and colleagues identified island and ocean cells a few years ago, and have been working since then to discover their functions.
In 2014, Tonegawa’s lab reported that island cells, which form small clusters surrounded by ocean cells, are needed for the brain to form memories linking two events that occur in rapid succession. In the new Neuron study, the team found that ocean cells are required to create representations of a location where an event took place.
“Ocean cells are important for contextual representations,” Sun says. “When you’re in the library, when you’re crossing the street, when you’re on the subway, you have different memories associated with each of these contexts.”
To discover these functions, the researchers labeled the two cell populations with a fluorescent molecule that lights up when it binds to calcium — an indication that the neuron is firing. This allowed them to determine which cells were active during tasks requiring mice to discriminate between two different environments, or to link two events in time.
The researchers also used a technique called optogenetics, which allows them to control neuron activity using light, to investigate how the mice’s behavior changed when either island cells or ocean cells were silenced.
When they blocked ocean cell activity, the animals were no longer able to associate a certain environment with fear after receiving a foot shock there. Manipulating the island cells, meanwhile, allowed the researchers to lengthen or shorten the time gap between events that could be linked in the mice’s memory.
Information flow
Previously, Tonegawa’s lab found that the firing rates of island cells depend on how fast the animal is moving, leading the researchers to believe that island cells help the animal navigate its way through space. Ocean cells, meanwhile, help the animal recognize where it is at a given time.
The researchers also found that these two streams of information flow from the entorhinal cortex to different parts of the hippocampus: Ocean cells send their contextual information to the CA3 and dentate gyrus regions, while island cells project to CA1 cells.
“Tonegawa’s group has revealed a key distinction in the processing of contextual and temporal information within different cell groups of the entorhinal cortex that project to distinct parts of the hippocampus,” says Howard Eichenbaum, director of the Center for Memory and Brain at Boston University, who was not involved in the research. “These findings advance our understanding of critical components of the brain circuit that supports memory.”
Tonegawa’s lab is now pursuing further studies of how the entorhinal cortex and other parts of the brain represent time and place. The researchers are also investigating how information on timing and location are further processed in the brain to create a complete memory of an event.
“To form an episodic memory, each component has to be recombined together,” Kitamura says. “This is the next question.”

ASP.NET Sucks Huge Balls, And I Hate It « Fear and Loathing on the Learning Curve » Observations on life, tech and web design from a slightly misanthropic mind.

Hmm.

Now that I’ve got your attention…
Full disclosure: I make my living as a PHP programmer. So I can cough to the tribal bias right off the bat. I’m also a big fan of Open Source. That aside, I’ve had the misfortune to have worked briefly on ASP.NET projects in the past, and am currently contracting for a company — a PHP shop — whose own super-fine PHP-based website is being redeveloped externally for political reasons, by a sister company versed only in ASP.NET. We’re about to take custody of that site as if it were our own spawn; needless to say the fireworks have already begun in earnest. So here’s my take on some of the problems with ASP.NET.
I’m going to tackle this in two bites. Firstly a general look at why I find ASP.NET so abhorrent from my POV in the PHP camp, then a focus on a more specific gripe that came up this week: ASP.NET’s inability to produce markup that conforms to XHTML 1.0 Strict (or, in many cases, Transitional).
I should make one more thing clear from the outset: I’m willing to believe that ASP.NET can be made to be great (or at least passable) if implemented with suitable expertise. My problem, however, is that I’ve been dealing with supposed ASP.NET “veterans” who still produce awful sites. It seems that the level of expertise required surpasses that of other for-the-web languages, because the stock products that pop out of Visual Studio and Visual Web Developer, wielded by those with less-than-savant ability, are just so very far from acceptable. Visual Web Developer in particular — though I know its use is held in less than high esteem by ASP.NET pros — seems to delight in churning out really awful sites. This is also not meant to be a shining appraisal of PHP — a language that I know is not without its faults — just a comparison along the lines of “it can be so simple in X, why can’t it be in Y?”
Firstly the infrastructure as I see it. Anyone with a little skill can set up a perfectly production-ready web server without spending a penny on software, and on fairly modest hardware. All the components of the LAMP stack — Linux, Apache, MySQL and PHP — are available for free along with the support of great communities. Contrast ASP.NET, which requires a licensed copy of Windows/Windows Server + IIS, and a license for SQL Server. And let’s not forget the development environment, which for nontrivial projects requires a further license for Visual Studio. Proprietary IDEs exist for PHP, sure, but I’ve never known any PHP developer have any issue with writing code in a simple, plain-text editor, perhaps with some syntax highlighting thrown in if they’re lucky.
I’m not going to be so trite as to bust out figures here, but nobody can argue that — on the software side at least — the cost of using PHP is zero, while the cost of using ASP.NET is decidedly non-zero. Other people have done TCO comparisons ad nauseam.
Lastly, while I won’t go down the “proprietary software is evil, OSS for evah!” route here, it has been pointed out that ASP.NET represents a dependency on one company, and the inherent liabilities therein. Regardless of where you stand on that issue (I’m going to helpfully sit on the fence for this one), the following situation might amuse: as mentioned above, the company to whom I contract is having their site developed by an ASP.NET shop sister company, who is then handing over the finished product for my employer to continue maintenance in the future. The sister company uses Visual Studio 2005; we can only purchase Visual Studio 2008 as Microsoft no longer offers older versions for sale. Once Visual Studio 2008 has opened and saved a .NET project, it can’t be opened in older versions. So immediately we have a backwards-compatibility problem if the original authors of the codebase need to get involved in the future (as they inevitably will for support issues). Either they don’t get involved or, if they have to, they’re effectively forced to upgrade to Visual Studio 2008. Ouch.
Various ground-level issues continue to irritate me. While PostBack can arguably be a useful device, its use brings a multitude of issues in otherwise simple situations like changing between the tabs on a tab strip control. The browser is loading a new page each time, but because every transition takes the form of a POST request, hitting the Back button in the browser results in the ugly “Are you sure you wish to re-submit this page?” dialog that completely breaks the UX. I’m told that IE has some additional support that allows ASP.NET to manipulate the history states, but that leaves non-IE users high and dry. And we all know which direction that demographic is going in.
I’ve also found issues with having multiple forms on one page — ASP.NET doesn’t distinguish one from another (since the page is treated as one massive form), so hitting Enter in a text field to submit it will often submit a different form on the same page, or just reload the page in place.
PostBack and ViewState also spell trouble for meaningful/memorable URLs, as huge hashed values are passed from page to page, making debugging from the browser end a complete nightmare. The site that I’m being subjected to at the moment has no friendly URLs like ViewItem.aspx?id=1234, instead passing all parameters in POST or using ViewState-style hashing to produce links with URLs like ViewItem.aspx?wEPDwUBMGQYAQUeX19Db250cm9sc1JlcXVpcm. I’m sure these things make more sense if you’re a seasoned ASP.NET pro, but from my POV as an experienced PHP developer I just cannot understand why these are the standard ASP.NET way of doing things, and how they can be said to be better than the straightforward, debuggable PHP equivalent.
Now for that other issue — spec compliance. This week I have been faced with the task of making the front page for my employer’s new site (built by the ASP.NET shop) validate against some kind of sensible specification. We provided them with (valid) front-end HTML which then had ASP.NET components spliced into it. I thought I’d go for gold and try validating against XHTML 1.0 Strict. That failed dismally, as did Transitional. And here’s a taster as to why, and why I still haven’t been able to get successful validation even after finding this stuff out.
ASP.NET uses a scheme of assigning just about every element a unique ID. These IDs can’t be relied upon to be consistent at runtime, so you have to refer to everything using classes if you want to get your CSS applied properly. That’s one gripe. That aside, ASP.NET insists on giving every <form> tag a name attribute, the same value as its ID, but which breaks the Strict spec that dictates that forms cannot have names. There doesn’t seem to be any way to suppress these attributes, meaning validation failure.
Second, we found that some of our image tags were failing validation due to having no alt attribute. Quite rightly, we thought — of course they should have at least an empty alt=””. So we checked the code — ASP:Image controls in this instance — and found that they did indeed have AlternateText=”” in their declarations. Turns out that if AlternateText is empty in the ASP.NET code, it is helpfully assumed that you didn’t mean to put that attribute in, and no alt attribute is written in the HTML. Great.
Yes, I’m well aware that images should have meaningful alt values, but that’s no excuse for this behaviour. There are situations where empty alt values are appropriate, but apparently not in ASP.NET’s world, where it’s more appropriate to violate the spec completely instead of just half-heartedly.
Finally there was an instance where we had an ASP:ImageButton being used to submit a login form. The validator was complaining about the tag generated by this control, saying that the border attribute was not allowed (on an input type=“image” field). Fair enough, we thought — we looked at the ASP.NET code, but could find no evidence of a border attribute being specified at all, even under some different name. We then looked in the HTML, and found no border attribute either. What we did find was a style attribute, which looked like style=“border-width:0px”. Confused? You bet. Further investigation revealed that ASP.NET was writing the invalid border attribute out in the HTML, then using JavaScript to change the DOM at the point of page load, replacing that attribute with the style attribute above. Why? Who knows. Of course the validator doesn’t support JavaScript, so it sees something different to the browser, and the spec is violated again. Once again, we seem to have no control over this behaviour, making it impossible for our page to pass validation.
Now look. I’m not suggesting these problems are completely insurmountable. A bit of searching has revealed that some of these problems could be addressed if we (or our sister company) were to use the ASP.NET 2.0 Web Forms extension, or even write our own controls that produce valid code. But what I’m saying is that we shouldn’t have to resort to that; that if ASP.NET really is the all-singing, all-dancing super web platform of the future, this sort of thing should be handled properly by default and shouldn’t take a genius to figure out. I’m also prepared to be told that our sister company are morons — same logic applies.
I’m prepared for unending flaming from the ASP.NET crowd, but I’m hoping for some constructive comment. From where I’m standing, PHP might not be the perfect solution, but I’m damn sure I can build great websites with it, relatively hassle-free. My experience thus far with ASP.NET doesn’t fill me with a great deal of confidence. Is it really that shit, or have I just been watching the work of idiots? If it’s done right, does it turn out anything like this?

Why Many Developers Hate ASP.NET… and Why They’re Wrong - Tuts+ Code Article


Tuesday, September 22, 2015

Researchers set distance record for quantum teleportation - UPI.com

Imagine how good remote site backups could be... terabytes all safely tucked away!

Scientists break quantum teleportation distance record

Currently, quantum communication is mostly used in information security, but researchers say the technology could one day be used to create a quantum Internet.
By Brooks Hays   |   Sept. 22, 2015 at 4:49 PM
A single-photon detector used to pick up entangled quantum data. Photo by NIST
BOULDER, Colo., Sept. 22 (UPI) -- Researchers with the National Institute of Standards and Technology have set a new distance record for quantum teleportation, sending quantum data through fibers four times longer than the previous record-holder.
Scientists successfully sent and received quantum information, encoded in light photons, through 62 miles of fiber.
Other experiments have successfully teleported quantum data over longer distances through free space, but quantum communication through fibers is more difficult -- and of more significance to practical applications of the technology.
Researchers chronicled their feat in the latest issue of the journal Optica.
"What's exciting is that we were able to carry out quantum teleportation over such a long distance," study co-author Martin Stevens, a quantum optics scientist at NIST,told Live Science.
Quantum teleportation isn't instantaneous. But by encoding the fundamental physics -- or "quantum states" -- of an object onto light particles, researchers can beam information across long distances. These entangled quantum states can be detected and used to recreate the object, or encoded information, on the other end of the fibers.
Currently, quantum communication is mostly used in information security, but researchers say the technology could one day be used to create a quantum Internet. But to do so, scientists need to find strategies for long-distance, fiber-based quantum teleportation.
What made the feat possible, researchers say, is the newly designed photon detectors deployed on the far-end of the fibers.
"Only about 1 percent of photons make it all the way through 100 kilometers (60 miles) of fiber," Stevens said in a press release. "We never could have done this experiment without these new detectors, which can measure this incredibly weak signal."

Monday, September 21, 2015

Dependency injection in C# - a simple introduction

Are you sure? ....

So, you have heard of Dependency injection (DI) but are having a hard time grasping the concept? Well, you're not alone; DI can seem quite complex at first! Fortunately, dependency injection is easy to learn and understand, and once you start practising it, chances are you'll never want to go back to doing things the "old bad" way.

The old bad ways

Let's say that you have a class called Car, and Car needs to call a method in the Engine class. Today you might either provide the Engine instance manually when you create a Car:
var generator = new Generator();
var engine = new Engine(generator);
var car = new Car(engine);
Or perhaps create Engine in the Car's constructor:

public Car()
{
    // The class creates its own dependency (tight coupling)
    _engine = new Engine();
}
Is this bad? Not always, but it can be much better!

The Dependency Injection way

Let's say that you'd like to implement the following using Dependency Injection. This is how you can do it:

1) Extract your classes' method definitions into interfaces:
public interface IEngine
{
    void Start();
}

public class Engine : IEngine
{
    public void Start()
    {
        // ... start the engine
    }
}
2) Create your classes so that all their dependencies are fed to them as interfaces through the constructor, and store them in private fields:
private readonly IEngine _engine;

public Car(IEngine engine)
{
    _engine = engine;
}
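To see how this hangs together, here is a minimal sketch of the rest of the Car class. The ICar interface and the Start method are assumptions added for illustration (the article only shows the constructor), chosen so the later Resolve<ICar>() and car.Start() calls line up:

public interface ICar
{
    void Start();
}

public class Car : ICar
{
    private readonly IEngine _engine;

    // The dependency arrives through the constructor; Car never creates an Engine itself
    public Car(IEngine engine)
    {
        _engine = engine;
    }

    public void Start()
    {
        // Delegate to whatever IEngine implementation was injected
        _engine.Start();
    }
}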
3) In the entry point of your application, register which class should be provided for each interface with an IoC container (I'm using Autofac in this example):
var builder = new ContainerBuilder();
builder.RegisterType<Generator>().As<IGenerator>();
builder.RegisterType<Engine>().As<IEngine>();
builder.RegisterType<Car>().As<ICar>();
var container = builder.Build();
This means that whenever a constructor asks for an instance of type IGenerator the IoC will provide it with an instance of Generator and so on.
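For those registrations to actually resolve, Engine would declare its dependency on IGenerator the same way Car does. A minimal sketch, with the IGenerator interface and its Generate method assumed purely for illustration (the article never shows them):

public interface IGenerator
{
    void Generate();
}

public class Generator : IGenerator
{
    public void Generate()
    {
        // produce power for the engine (illustrative only)
    }
}

public class Engine : IEngine
{
    private readonly IGenerator _generator;

    public Engine(IGenerator generator)
    {
        _generator = generator;
    }

    public void Start()
    {
        _generator.Generate();
    }
}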

4) Resolve the top-level instance (all underlying instances will be created automatically for you):
var car = container.Resolve<ICar>();
car.Start();
The following will happen:
 * The IoC will try to create an instance of ICar using the class Car
 * Doing this it will notice that Car needs an instance of IEngine in order to be constructable
 * The IoC will then try to create an instance of IEngine using the class Engine
 * Doing this it will notice that Engine needs an instance of IGenerator in order to be constructable
 * The IoC will then try to create an instance of IGenerator using the class Generator
 * The IoC can now create an instance of Engine, as all its dependencies have been met
 * The IoC can now create an instance of Car, as all its dependencies have been met
 * The instance is returned to you
 * You can invoke the method Start

Why is this good?

This has several benefits. The most important is that it automatically makes your code testable. As you are using interfaces everywhere, you can easily provide another implementation in your unit tests. This means that your tests will be much easier to set up as well as being restricted to test a specific unit - not a whole chain of code.
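As a sketch of what that looks like in practice, a hand-written fake is all it takes to test Car in isolation. The [Test] attribute and Assert call below assume an NUnit-style test framework; any other framework works the same way:

public class FakeEngine : IEngine
{
    public bool StartCalled { get; private set; }

    public void Start()
    {
        // Record the call instead of doing any real work
        StartCalled = true;
    }
}

[Test]
public void Starting_the_car_starts_the_engine()
{
    var fakeEngine = new FakeEngine();
    var car = new Car(fakeEngine); // inject the fake directly; no IoC container needed in tests

    car.Start();

    Assert.IsTrue(fakeEngine.StartCalled);
}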

Most people need to try this for themselves in order to really see the benefits. Do it. You will not regret it.

Dogecoin Clone “ShibeCoin” Scam Busted | Dogecoin News Site

I feel it is my duty to spread this info:

DOGECOIN CLONE “SHIBECOIN” SCAM BUSTED

Earlier this month, a couple of naughty individuals decided to launch ShibeCoin — a Dogecoin knock-off which was designed to be an end-life Proof-of-Stake coin. I’ll admit, it is certainly a clever way to get noticed amongst the sea of scam-coins. I am sure many scheming alt-coin developers were kicking themselves for missing such a good opportunity.
Background
ShibeCoin suffered from the same problem that most alt-coins do: mischievous developers. The coin was announced to have a 0.5% pre-mine of 1.5 million coins (out of 300 million). Fortunately, cryptocurrency technology allows us to verify this by simply checking the blockchain. Unsurprisingly, the first 56 blocks are timestamped several minutes before the coin was publicly launched, which indicates the developers pre-mined much more than just 1.5 million coins.
Don’t forget: the developers also hard coded the first 50 blocks to have the highest block rewards. Needless to say, the developers lied about their 0.5% pre-mine and received a massively unfair advantage.
Wallet Issues
The coin has been plagued with its fair share of wallet problems as well. As evidenced by many user reports on the announcement thread, staking simply doesn’t work. The coin was forked within the first week of launch, the wallet has been updated five times within the last week, and the developers are of little help.
Price
Bittrex was the first exchange to open ShibeCoin trading, and the price quickly rose to 700 satoshi. The coin traded at 200-450 satoshi for a few weeks before taking the recent plunge to 50 satoshi. Despite the devious nature of the coin, it actually performed pretty well in the marketplace. ShibeCoin peaked with a market cap of around $450,000.
Conclusion
I have nothing against alt-coins (even ones that try to imitate our glorious Shibe) but such a poorly executed coin embarrasses the entire cryptocurrency community as a whole. The coin doesn’t seem to be done yet. The developer recently released a series of videos indicating he is still involved with the coin. My advice to you all is to exercise extreme caution with ShibeCoin. With that said, steer clear of the most recent Dogecoin clone DojeCoin as well.

Saturday, September 19, 2015

Why Agile Didn’t Work


Nuclear Stability and Magic Numbers - Chemwiki

Cool stuff!

Nuclear Stability and Magic Numbers

Nuclear stability describes how resistant a nucleus is to radioactive decay. The two main factors that determine nuclear stability are the neutron/proton ratio and the total number of nucleons in the nucleus.


Introduction

An isotope is a form of an element whose atoms have the same atomic number (number of protons) but a different atomic mass (because of a different number of neutrons). Every atom is made of protons, neutrons, and electrons. The number of protons is equal to the atomic number, and the number of electrons is equal to the number of protons unless the atom is an ion. To determine the number of neutrons in a nuclide, subtract the atomic number from the mass number. The mass number is represented as A, the atomic number as Z, and the number of neutrons as N.
A = N + Z        (1)

mass number = number of neutrons + atomic number (protons)
To determine the stability of an isotope you can use the ratio of neutrons to protons (N:Z) as discussed below.


Determining the N:Z Ratio

The principal factor determining whether a nucleus is stable is the neutron-to-proton ratio. Lighter elements (Z < 20) have nuclei with a ratio of about 1:1, preferring the same number of protons and neutrons.
Example: Carbon Isotopes
Carbon has three isotopes that scientists commonly use: 12C, 13C, and 14C. What are the number of neutrons, protons, and total nucleons, and the N:Z ratio, for the 12C nuclide?
SOLUTION
For this specific isotope, there are 12 total nucleons (A = 12). From the periodic table, we can see that Z for carbon (any of its isotopes) is 6, therefore N = A − Z (from Equation 1):

N = 12 − 6 = 6

The N:Z ratio therefore is 6:6, or 1:1. This is a stable ratio that lies on the Belt of Stability. In fact, about 99% of all carbon on Earth is this isotope. In contrast, the same analysis of 14C (also known as "radiocarbon" or carbon-14) shows that it is off the Belt of Stability and unstable; in fact, it spontaneously decomposes into other nuclei (but on a slow timescale).
Exercise: Oxygen
Identify the number of neutrons, protons, and total nucleons, and the N:Z ratio, in the 12O nuclide (A = 12, Z = 8).
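A quick worked check, applying Equation 1 as in the carbon example (this assumes the nuclide above is read as A = 12, Z = 8):

N = A − Z = 12 − 8 = 4, so the N:Z ratio is 4:8 = 0.5

That is far below the 1:1 ratio preferred by light nuclei, so this nuclide lies well off the Belt of Stability and is unstable.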
Elements with atomic numbers from 20 to 83 are heavier, and for them the preferred ratio is different: up to about 1.5:1. The reason for this difference is the repulsive force between protons: the stronger the repulsion, the more neutrons are needed to stabilize the nucleus.
Note
Neutrons help to separate the protons from each other in a nucleus so that they do not feel as strong a repulsive force from each other.


Belt of Stability

The graph of stable elements is commonly referred to as the Band (or Belt) of Stability. The graph has a y-axis labeled neutrons and an x-axis labeled protons, with each nuclide plotted as a point. At the higher end (upper right) of the band of stability lie the radionuclides that decay via alpha decay; below the band lie those that decay by positron emission or electron capture; above it, those that decay by beta emission; and elements beyond atomic number 83 have only unstable radioactive isotopes. Stable nuclei with atomic numbers up to about 20 have a neutron:proton ratio of about 1:1 (solid line).

Figure 1: Belt of Stability. Graph of isotopes by type of nuclear decay. Orange and blue nuclides are unstable, with the black squares between these regions representing stable nuclides. The solid black line represents the theoretical position of nuclides for which the proton number equals the neutron number (N = Z). Elements with more than 20 protons require more neutrons than protons to be stable. Figure used with permission from Wikipedia.
Note
The deviation from the N:Z = 1 line on the belt of stability arises because a non-unity N:Z ratio is necessary for the total stability of heavier nuclei. That is, more neutrons are required to offset the repulsive forces between the growing number of protons within a nucleus (i.e., N > Z).
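For instance, the stable nuclide lead-208 has Z = 82 and N = 208 − 82 = 126, giving N:Z = 126/82 ≈ 1.54, well above 1:1 (and both 82 and 126 are magic numbers, as discussed below).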
The belt of stability makes it easy to determine where alpha decay, beta decay, and positron emission or electron capture occur.
  • Alpha (α) Decay: Alpha decay occurs at the top of the band, because it decreases the mass number of the element to move the isotope back toward stability. This is accomplished by emitting an alpha particle, which is just a helium (He) nucleus. In this decay pathway, the unstable isotope's proton number (Z) decreases by 2 and its neutron number (N) decreases by 2, so the nucleon number (A) decreases by 4 (Equation 1).
  • Beta (β) Decay: Beta decay converts a neutron into a proton, so it changes the balance of protons and neutrons: the number of protons increases by 1 and the number of neutrons decreases by 1. This pathway occurs in unstable nuclides that have too many neutrons and lie above the band of stability (blue isotopes in Figure 1).
  • Positron (β+) Decay: Positron emission and electron capture both convert a proton into a neutron. These pathways occur below the band of stability, where a nuclide has more protons than neutrons: think of it as too few neutrons for the number of protons (isotopes below the band in Figure 1). The generic equations for all of these pathways are given after this list.
As with all decay pathways, if the daughter nuclides are not on the Belt, then subsequent decay pathways will occur until the daughter nuclei are on the Belt.
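In standard nuclide notation, those pathways look like this (a textbook summary added for reference; X is the parent nuclide and Y the daughter):

\[ \alpha\ \text{decay:}\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A-4}_{Z-2}\mathrm{Y} + {}^{4}_{2}\mathrm{He} \]
\[ \beta^{-}\ \text{decay:}\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z+1}\mathrm{Y} + e^{-} + \bar{\nu}_{e} \]
\[ \beta^{+}\ \text{decay:}\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + e^{+} + \nu_{e} \]
\[ \text{electron capture:}\quad {}^{A}_{Z}\mathrm{X} + e^{-} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + \nu_{e} \]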


Magic Numbers

Magic numbers are numbers of protons or neutrons that make an isotope particularly stable (similar to filled octets of valence electrons). Below is a list of such numbers; isotopes that have one of these numbers of protons or neutrons tend to be stable. In some cases an isotope can have magic numbers for both protons and neutrons; these are called doubly magic (helium-4, with Z = 2 and N = 2, and lead-208, with Z = 82 and N = 126, are examples).
The magic numbers:
  • proton: 2, 8, 20, 28, 50, 82, 114
  • neutron: 2, 8, 20, 28, 50, 82, 126, 184
Also, there is the concept of pairing: isotopes can have an even-even, even-odd, odd-even, or odd-odd combination of protons and neutrons. Far more stable nuclides have an even-even combination than an odd-odd one, as the table below shows.

Proton number (Z)   Neutron number (N)   # of stable isotopes
Even                Even                 163
Even                Odd                  53
Odd                 Even                 50
Odd                 Odd                  4
Note
Although rare, four stable odd-odd nuclides exist: 2H, 6Li, 10B, and 14N.


Unstable or Stable

Here is a simple checklist that can help you decide whether a nuclide is likely to be stable (a rough code sketch of the same checklist follows the list).
  • Calculate the total number of nucleons (protons and neutrons) in the nuclide
    • If the number of nucleons is even, there is a good chance it is stable.
  • Are there a magic number of protons or neutrons?
    • 2, 8, 20, 28, 50, 82, 114 (protons) and 2, 8, 20, 28, 50, 82, 126, 184 (neutrons) are particularly stable numbers in nuclei.
  • Calculate the N:Z ratio.
    • Use the belt of stability (Figure 1) to determine the best way to get from an unstable nucleus to a stable nucleus.
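As a rough translation of this checklist into code, here is a toy heuristic that mirrors the rules of thumb above (an illustrative sketch only, not a real nuclear model):

using System.Linq;

static class NuclideChecklist
{
    static readonly int[] MagicProtons  = { 2, 8, 20, 28, 50, 82, 114 };
    static readonly int[] MagicNeutrons = { 2, 8, 20, 28, 50, 82, 126, 184 };

    // Crude stability guess that follows the article's checklist step by step.
    public static string Guess(int protons, int neutrons)
    {
        if (protons > 83)
            return "unstable: Z > 83, beyond the belt of stability";

        if (MagicProtons.Contains(protons) || MagicNeutrons.Contains(neutrons))
            return "likely stable: magic number of protons or neutrons";

        int nucleons = protons + neutrons;
        double ratio = (double)neutrons / protons;

        if (nucleons % 2 == 0 && ratio >= 1.0 && ratio <= 1.5)
            return "good chance of stability: even nucleon count and reasonable N:Z";

        return "unclear: consult the belt of stability (Figure 1)";
    }
}

For example, NuclideChecklist.Guess(20, 20) reports calcium-40 as likely stable, since 20 is a magic number.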




Problems

  1. Using the above chart, state whether each isotope is an alpha-emitter, stable, or unstable:  a) 40Ca (Z = 20)   b) 54Mn (Z = 25)   c) 210Po (Z = 84)
  2. If an isotope is located above the band of stability, what type of radioactivity will it exhibit? What if it is below?
  3. Between the elements bromine and carbon, which is more stable according to magic numbers?
  4. Name one of the isotopes that has an odd-odd combination of protons and neutrons in its nucleus.


Solutions

1)  a) Stable, because this Ca isotope has 20 neutrons, which is one of the magic numbers
        b) Unstable, because it has odd numbers of both protons (25) and neutrons (29)
        c) Alpha-emitter, because Z = 84, which follows rule/step one on the chart
   2) Beta decay, positron emission, or electron capture
   3) Carbon is stable
   4) Hydrogen-2, lithium-6, boron-10, or nitrogen-14


Contributors

  • Content was contributed, in part, from Socratic.org.