How the brain encodes time and place
Neuroscientists identify a brain circuit that is critical for forming episodic memories.

Anne Trafton | MIT News Office
September 23, 2015
When you remember a particular experience, that memory has three critical elements: what, when, and where. MIT neuroscientists have now identified a brain circuit that processes the "when" and "where" components of memory.

This circuit, which connects the hippocampus and a region of the cortex known as the entorhinal cortex, separates location and timing into two streams of information. The researchers also identified two populations of neurons in the entorhinal cortex that convey this information, dubbed "ocean cells" and "island cells."

Previous models of memory had suggested that the hippocampus, a brain structure critical for memory formation, separates timing and context information. However, the new study shows that this information is split even before it reaches the hippocampus.

"It suggests that there is a dichotomy of function upstream of the hippocampus," says Chen Sun, an MIT graduate student in brain and cognitive sciences and one of the lead authors of the paper, which appears in the Sept. 23 issue of Neuron. "There is one pathway that feeds temporal information into the hippocampus, and another that feeds contextual representations to the hippocampus."

The paper's other lead author is MIT postdoc Takashi Kitamura. The senior author is Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and director of the RIKEN-MIT Center for Neural Circuit Genetics at MIT's Picower Institute for Learning and Memory. Other authors are Picower Institute technical assistant Jared Martin, Stanford University graduate student Lacey Kitch, and Mark Schnitzer, an associate professor of biology and applied physics at Stanford.

When and where

Located just outside the hippocampus, the entorhinal cortex relays sensory information from other cortical areas to the hippocampus, where memories are formed. Tonegawa and colleagues identified island and ocean cells a few years ago, and have been working since then to discover their functions.

In 2014, Tonegawa's lab reported that island cells, which form small clusters surrounded by ocean cells, are needed for the brain to form memories linking two events that occur in rapid succession. In the new Neuron study, the team found that ocean cells are required to create representations of a location where an event took place.

"Ocean cells are important for contextual representations," Sun says. "When you're in the library, when you're crossing the street, when you're on the subway, you have different memories associated with each of these contexts."

To discover these functions, the researchers labeled the two cell populations with a fluorescent molecule that lights up when it binds to calcium, an indication that the neuron is firing. This allowed them to determine which cells were active during tasks requiring mice to discriminate between two different environments, or to link two events in time.

The researchers also used a technique called optogenetics, which allows them to control neuron activity using light, to investigate how the mice's behavior changed when either island cells or ocean cells were silenced.

When they blocked ocean cell activity, the animals were no longer able to associate a certain environment with fear after receiving a foot shock there.
Manipulating the island cells, meanwhile, allowed the researchers to lengthen or shorten the time gap between events that could be linked in the mice's memory.

Information flow

Previously, Tonegawa's lab found that the firing rates of island cells depend on how fast the animal is moving, leading the researchers to believe that island cells help the animal navigate through space. Ocean cells, meanwhile, help the animal to recognize where it is at a given time.

The researchers also found that these two streams of information flow from the entorhinal cortex to different parts of the hippocampus: ocean cells send their contextual information to the CA3 and dentate gyrus regions, while island cells project to CA1 cells.

"Tonegawa's group has revealed a key distinction in the processing of contextual and temporal information within different cell groups of the entorhinal cortex that project to distinct parts of the hippocampus," says Howard Eichenbaum, director of the Center for Memory and Brain at Boston University, who was not involved in the research. "These findings advance our understanding of critical components of the brain circuit that supports memory."

Tonegawa's lab is now pursuing further studies of how the entorhinal cortex and other parts of the brain represent time and place. The researchers are also investigating how information on timing and location is further processed in the brain to create a complete memory of an event.

"To form an episodic memory, each component has to be recombined together," Kitamura says. "This is the next question."
Now that I've got your attention…

Full disclosure: I make my living as a PHP programmer, so I can cough to the tribal bias right off the bat. I'm also a big fan of open source. That aside, I've had the misfortune to have worked briefly on ASP.NET projects in the past, and am currently contracting for a company (a PHP shop) whose own super-fine PHP-based website is being redeveloped externally, for political reasons, by a sister company versed only in ASP.NET. We're about to take custody of that site as if it were our own spawn; needless to say, the fireworks have already begun in earnest. So here's my take on some of the problems with ASP.NET.

I'm going to tackle this in two bites: first, a general look at why I find ASP.NET so abhorrent from my POV in the PHP camp; then, a focus on a more specific gripe that came up this week: ASP.NET's inability to produce markup that conforms to XHTML 1.0 Strict (or, in many cases, Transitional).

I should make one more thing clear from the outset: I'm willing to believe that ASP.NET can be made to be great (or at least passable) if implemented with suitable expertise. My problem, however, is that I've been dealing with supposed ASP.NET "veterans" who still produce awful sites. It seems that the level of expertise required surpasses that of other for-the-web languages, because the stock products that pop out of Visual Studio and Visual Web Developer, wielded by those with less-than-savant ability, are just so very far from acceptable. Visual Web Developer in particular (though I know its use is held in less than high esteem by ASP.NET pros) seems to delight in churning out really awful sites. This is also not meant to be a shining appraisal of PHP, a language that I know is not without its faults; it is just a comparison along the lines of "it can be so simple in X, why can't it be in Y?"

First, the infrastructure as I see it. Anyone with a little skill can set up a perfectly production-ready web server without spending a penny on software, and on fairly modest hardware. All the components of the LAMP stack (Linux, Apache, MySQL and PHP) are available for free, along with the support of great communities. Contrast ASP.NET, which requires a licensed copy of Windows/Windows Server plus IIS, and a license for SQL Server. And let's not forget the development environment, which for nontrivial projects requires a further license for Visual Studio. Proprietary IDEs exist for PHP, sure, but I've never known any PHP developer to have any issue with writing code in a simple, plain-text editor, perhaps with some syntax highlighting thrown in if they're lucky.

I'm not going to be so trite as to bust out figures here, but nobody can argue that, on the software side at least, the cost of using PHP is zero, while the cost of using ASP.NET is decidedly non-zero. Other people have done TCO comparisons ad nauseam.

Lastly, while I won't go down the "proprietary software is evil, OSS for evah!" route here, it has been pointed out that ASP.NET represents a dependency on one company, and the inherent liabilities therein. Regardless of where you stand on that issue (I'm going to helpfully sit on the fence for this one), the following situation might amuse: as mentioned above, the company to whom I contract is having their site developed by a sister company that is an ASP.NET shop, which is then handing over the finished product for my employer to continue maintaining in the future.
The sister company uses Visual Studio 2005; we can only purchase Visual Studio 2008, as Microsoft no longer offers older versions for sale. Once Visual Studio 2008 has opened and saved a .NET project, it can't be opened in older versions. So immediately we have a backwards-compatibility problem if the original authors of the codebase need to get involved in the future (as they inevitably will for support issues). Either they don't get involved, or, if they have to, they're effectively forced to upgrade to Visual Studio 2008. Ouch.

Various ground-level issues continue to irritate me. While PostBack can arguably be a useful device, its use brings a multitude of issues in otherwise simple situations, like changing between the tabs on a tab strip control. The browser is loading a new page each time, but because every transition takes the form of a POST request, hitting the Back button in the browser results in the ugly "Are you sure you wish to re-submit this page?" dialog that completely breaks the UX. I'm told that IE has some additional support that allows ASP.NET to manipulate the history states, but that leaves non-IE users high and dry. And we all know which direction that demographic is going in.

I've also found issues with having multiple forms on one page. ASP.NET doesn't distinguish one from another (since the page is treated as one massive form), so hitting Enter in a text field to submit it will often submit a different form on the same page, or just reload the page in place.

PostBack and ViewState also spell trouble for meaningful/memorable URLs, as huge hashed values are passed from page to page, making debugging from the browser end a complete nightmare. The site that I'm being subjected to at the moment has no friendly URLs like ViewItem.aspx?id=1234, instead passing all parameters in POST or using ViewState-style hashing to produce links with URLs like ViewItem.aspx?wEPDwUBMGQYAQUeX19Db250cm9sc1JlcXVpcm.
I'm sure these things make more sense if you're a seasoned ASP.NET pro, but from my POV as an experienced PHP developer, I just cannot understand why these are the standard ASP.NET way of doing things, and how they can be said to be better than the straightforward, debuggable PHP equivalent.

Now for that other issue: spec compliance. This week I have been faced with the task of making the front page of my employer's new site (built by the ASP.NET shop) validate against some kind of sensible specification. We provided them with (valid) front-end HTML, which then had ASP.NET components spliced into it. I thought I'd go for gold and try validating against XHTML 1.0 Strict. That failed dismally, as did Transitional. Here's a taster as to why, and why I still haven't been able to get successful validation even after finding this stuff out.

ASP.NET uses a scheme of assigning just about every element a unique ID. These IDs can't be relied upon to be consistent at runtime, so you have to refer to everything using classes if you want to get your CSS applied properly. That's one gripe. That aside, ASP.NET insists on giving every <form> tag a name attribute (with the same value as its ID), which breaks the Strict spec's rule that forms cannot have names. There doesn't seem to be any way to suppress these attributes, meaning validation failure.

Second, we found that some of our image tags were failing validation due to having no alt attribute. Quite rightly, we thought: of course they should have at least an empty alt="". So we checked the code (ASP:Image controls in this instance) and found that they did indeed have AlternateText="" in their declarations. It turns out that if AlternateText is empty in the ASP.NET code, it is helpfully assumed that you didn't mean to put that attribute in, and no alt attribute is written in the HTML. Great.

Yes, I'm well aware that images should have meaningful alt values, but that's no excuse for this behaviour. There are situations where empty alt values are appropriate, but apparently not in ASP.NET's world, where it's more appropriate to violate the spec completely instead of just half-heartedly.

Finally, there was an instance where we had an ASP:ImageButton being used to submit a login form. The validator was complaining about the tag generated by this control, saying that the border attribute was not allowed (on an input type="image" field). Fair enough, we thought; we looked at the ASP.NET code, but could find no evidence of a border attribute being specified at all, even under some different name. We then looked in the HTML, and found no border attribute either. What we did find was a style attribute, which looked like style="border-width:0px". Confused? You bet. Further investigation revealed that ASP.NET was writing the invalid border attribute out in the HTML, then using JavaScript to change the DOM at page load, replacing that attribute with the style attribute above. Why? Who knows. Of course the validator doesn't run JavaScript, so it sees something different from the browser, and the spec is violated again. Once again, we seem to have no control over this behaviour, making it impossible for our page to pass validation.

Now look. I'm not suggesting these problems are completely insurmountable. A bit of searching has revealed that some of them could be addressed if we (or our sister company) were to use the ASP.NET 2.0 Web Forms extension, or even write our own controls that produce valid code.
But what I'm saying is that we shouldn't have to resort to that; that if ASP.NET really is the all-singing, all-dancing super web platform of the future, this sort of thing should be handled properly by default and shouldn't take a genius to figure out. I'm also prepared to be told that our sister company are morons; the same logic applies.

I'm prepared for unending flaming from the ASP.NET crowd, but I'm hoping for some constructive comment. From where I'm standing, PHP might not be the perfect solution, but I'm damn sure I can build great websites with it, relatively hassle-free. My experience thus far with ASP.NET doesn't fill me with a great deal of confidence. Is it really that shit, or have I just been watching the work of idiots? If it's done right, does it turn out anything like this?
Scientists break quantum teleportation distance record
Currently, quantum communication is mostly used in information security, but researchers say the technology could one day be used to create a quantum Internet.

By Brooks Hays | Sept. 22, 2015 at 4:49 PM

BOULDER, Colo., Sept. 22 (UPI) -- Researchers with the National Institute of Standards and Technology have set a new distance record for quantum teleportation, sending quantum data through fibers four times longer than the previous record.

Scientists successfully sent and received quantum information, encoded in light photons, through 62 miles of fiber.

Other experiments have successfully teleported quantum data over longer distances through free space, but quantum communication through fibers is more difficult -- and of more significance to practical applications of the technology.

Researchers chronicled their feat in the latest issue of the journal Optica.

"What's exciting is that we were able to carry out quantum teleportation over such a long distance," study co-author Martin Stevens, a quantum optics scientist at NIST, told Live Science.

Quantum teleportation isn't instantaneous. But by encoding the fundamental physics -- or "quantum states" -- of an object onto light particles, researchers can beam information across long distances. These entangled quantum states can be detected and used to recreate the object, or encoded information, on the other end of the fibers.

Currently, quantum communication is mostly used in information security, but researchers say the technology could one day be used to create a quantum Internet. But to do so, scientists need to find strategies for long-distance, fiber-based quantum teleportation.

What made the feat possible, researchers say, is the newly designed photon detectors deployed on the far end of the fibers.

"Only about 1 percent of photons make it all the way through 100 kilometers (60 miles) of fiber," Stevens said in a press release. "We never could have done this experiment without these new detectors, which can measure this incredibly weak signal."
So, you have heard of dependency injection (DI) but are having a hard time grasping the concept? Well, you're not alone; DI can seem quite complex at first! Fortunately, dependency injection is easy to learn and understand, and once you start practising it, chances are you will never want to go back to doing things in the "old bad" ways.

The old bad ways
Let's say that you have a class called Car, and Car needs to call a method in the Engine class. Today you might provide the Engine instance manually when you create a Car:

var generator = new Generator();
var engine = new Engine(generator);
var car = new Car(engine);

Or perhaps create the Engine in Car's constructor:

public Car()
{
    _engine = new Engine();
}

Is this bad? Not always, but it can be much better!
The Dependency Injection way
Let's say that you'd like to implement the following using Dependency Injection. This is how you can do it:
1) Extract your classes' method definitions into interfaces:

public interface IEngine
{
    void Start();
}

public class Engine : IEngine
{
    public void Start()
    {
        // Start the engine...
    }
}

2) Write your classes so that all of their dependencies are fed to them as interfaces through the constructor, and store them in private fields:

private readonly IEngine _engine;

public Car(IEngine engine)
{
    _engine = engine;
}

3) In the entry point of your application, register which class should be provided for each interface with an IoC container (I'm using Autofac in this example):

var builder = new ContainerBuilder();
builder.RegisterType<Generator>().As<IGenerator>();
builder.RegisterType<Engine>().As<IEngine>();
builder.RegisterType<Car>().As<ICar>();
var container = builder.Build();

This means that whenever a constructor asks for an instance of type IGenerator, the IoC container will provide it with an instance of Generator, and so on.

4) Start the top-level instance (all underlying instances will be created automatically for you):

var car = container.Resolve<ICar>();
car.Start();

The following will happen:
* The IoC will try to create an instance of ICar using the class Car
* Doing this, it will notice that Car needs an instance of IEngine in order to be constructible
* The IoC will then try to create an instance of IEngine using the class Engine
* Doing this, it will notice that Engine needs an instance of IGenerator in order to be constructible
* The IoC will then try to create an instance of IGenerator using the class Generator
* The IoC can now create an instance of Engine, as all its dependencies have been met
* The IoC can now create an instance of Car, as all its dependencies have been met
* The instance is returned to you
* You can invoke the method Start

Why is this good?
This has several benefits. The most important is that it automatically makes your code testable: because you are using interfaces everywhere, you can easily provide another implementation in your unit tests. This means that your tests will be much easier to set up, as well as being restricted to testing a specific unit rather than a whole chain of code.
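For example, here is a minimal sketch of how the Car and IEngine types above could be exercised in a unit test. The FakeEngine class, the test name, and the xUnit-style attributes are illustrative assumptions, not part of the article's code (the article never shows the body of Car.Start(), so this assumes it simply calls _engine.Start()):

using Xunit;

// A hand-rolled test double that satisfies IEngine without any real engine logic.
public class FakeEngine : IEngine
{
    public bool WasStarted { get; private set; }

    public void Start()
    {
        WasStarted = true;
    }
}

public class CarTests
{
    [Fact]
    public void Start_delegates_to_the_engine()
    {
        var fakeEngine = new FakeEngine();
        var car = new Car(fakeEngine);   // inject the fake instead of the real Engine

        car.Start();

        Assert.True(fakeEngine.WasStarted);
    }
}

Because Car only depends on the IEngine interface, the test never has to construct a real Engine or Generator.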
Most people need to try this for themselves in order to really see the benefits. Do it. You will not regret it.

Article created: Aug 10 at 10:45. Edited: Sep 18 at 09:54.
DOGECOIN CLONE “SHIBECOIN” SCAM BUSTED
Earlier this month, a couple of naughty individuals decided to launch ShibeCoin, a Dogecoin knock-off designed to be an end-life Proof-of-Stake coin. I'll admit, it is certainly a clever way to get noticed amongst the sea of scam-coins. I am sure many scheming alt-coin developers were kicking themselves for missing such a good opportunity.

Background

ShibeCoin suffered from the same problem that most alt-coins do: mischievous developers. The coin was announced to have a 0.5% pre-mine of 1.5 million coins (out of 300 million). Fortunately, cryptocurrency technology allows us to verify this by simply checking the blockchain. Unsurprisingly, the first 56 blocks are timestamped several minutes before the coin was publicly launched, which indicates the developers pre-mined much more than just 1.5 million coins. Don't forget: the developers also hard-coded the first 50 blocks to have the highest block rewards. Needless to say, the developers lied about their 0.5% pre-mine and received a massively unfair advantage.

Wallet Issues

The coin has been plagued with its fair share of wallet problems as well. As evidenced by many user reports on the announcement thread, staking simply doesn't work. The coin was forked within the first week of launch, the wallet has been updated five times within the last week, and the developers are of little help.

Price

Bittrex was the first exchange to open ShibeCoin trading, and the price quickly rose to 700 satoshi. The coin traded at 200-450 satoshi for a few weeks before taking the recent plunge to 50 satoshi. Despite the devious nature of the coin, it actually performed pretty well in the marketplace. ShibeCoin peaked with a market cap of around $450,000.

Conclusion

I have nothing against alt-coins (even ones that try to imitate our glorious Shibe), but such a poorly executed coin embarrasses the entire cryptocurrency community. The coin doesn't seem to be done yet; the developer recently released a series of videos indicating he is still involved with the coin. My advice to you all is to exercise extreme caution with ShibeCoin. With that said, steer clear of the most recent Dogecoin clone, DojeCoin, as well.
Nuclear Stability and Magic Numbers

Nuclear stability is a concept that helps to identify the stability of an isotope. The two main factors that determine nuclear stability are the neutron-to-proton ratio and the total number of nucleons in the nucleus.
Introduction
An isotope is a form of an element that has the same atomic number but a different atomic mass from the value listed in the periodic table. Every atom is made up of protons, neutrons, and electrons. The number of protons is equal to the atomic number, and the number of electrons is equal to the number of protons, unless the atom is an ion. To determine the number of neutrons in an atom, subtract the atomic number from the atomic mass. Atomic mass is represented as A, atomic number as Z, and the number of neutrons as N:

A = N + Z        (1)
atomic mass = number of neutrons + atomic number

To determine the stability of an isotope, you can use the ratio of neutrons to protons (N:Z), as discussed below.
Determining the N:Z Ratio
The principal factor for determining whether a nucleus is stable is the neutron-to-proton ratio. Lighter elements (Z < 20) prefer to have the same number of protons and neutrons, so their nuclei have a ratio of about 1:1.

Example: Carbon Isotopes

Carbon has three isotopes that scientists commonly use: ¹²C, ¹³C, and ¹⁴C. What are the number of neutrons, the number of protons, the total number of nucleons, and the N:Z ratio for the ¹²C nuclide?

SOLUTION

For this specific isotope, there are 12 total nucleons (A = 12). From the periodic table, we can see that Z for carbon (any of its isotopes) is 6; therefore, N = A − Z (from Equation 1):

12 − 6 = 6
The N:Z ratio therefore is 6:6, or 1:1. This is a stable ratio that lies on the Belt of Stability; in fact, about 99% of all carbon on Earth is this isotope. In contrast, the same analysis of ¹⁴C (also known as "radiocarbon" or carbon-14) suggests that it is off the Belt of Stability and is unstable; in fact, it spontaneously decomposes into other nuclei (but on a slow timescale).

Exercise: Oxygen

Identify the number of neutrons, the number of protons, the total number of nucleons, and the N:Z ratio in the ¹²₈O nuclide.

Elements with atomic numbers from 20 to 83 are heavier, so the preferred ratio is different: it rises to about 1.5:1. The reason for this difference is the repulsive force between protons: the stronger the total repulsion, the more neutrons are needed to stabilize the nucleus.

Note

Neutrons help to separate the protons from each other in a nucleus so that they do not feel as strong a repulsive force from one another.
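To make the heavier-element ratio concrete, here is an added worked example (not part of the original text):

Example: A Heavy Stable Nuclide

Mercury-200 (²⁰⁰Hg) has Z = 80, so N = A − Z = 200 − 80 = 120. Its N:Z ratio is therefore 120:80 = 1.5:1, illustrating how much larger the neutron-to-proton ratio must be for a heavy nucleus to sit on the Belt of Stability.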
Belt of Stability
The graph of stable nuclides is commonly referred to as the Band (or Belt) of Stability. The graph has a y-axis labeled "neutrons," an x-axis labeled "protons," and a point for each stable nuclide. At the higher end (upper right) of the band of stability lie the radionuclides that decay via alpha decay; below the band, the decay mode is positron emission or electron capture; above the band, it is beta emission; and elements beyond atomic number 83 have only unstable, radioactive isotopes. Stable nuclei with atomic numbers up to about 20 have a neutron:proton ratio of about 1:1 (solid line).

Figure 1: Belt of Stability. Graph of isotopes by type of nuclear decay. Orange and blue nuclides are unstable, with the black squares between these regions representing stable nuclides. The solid black line represents the theoretical position on the graph of nuclides for which the proton number is the same as the neutron number (N = Z). Elements with more than 20 protons require more neutrons than protons to be stable. Figure used with permission from Wikipedia.

Note

The deviation from the N:Z = 1 line on the belt of stability arises because a non-unity N:Z ratio is necessary for the total stability of heavier nuclei. That is, more neutrons are required to stabilize a nucleus against the repulsive forces between its protons (i.e., N > Z).

The belt of stability makes it easy to determine where alpha decay, beta decay, and positron emission or electron capture occur.
- Alpha (α) Decay: Alpha decay occurs at the top end of the belt, because alpha decay decreases the mass number of the element and moves the isotope back toward stability. This is accomplished by emitting an alpha particle, which is just a helium (He) nucleus. In this decay pathway, the unstable isotope's proton number Z decreases by 2 and its neutron number N decreases by 2, which means that the nucleon number A decreases by 4 (Equation 1).
- Beta (β−) Decay: Beta decay converts a neutron into a proton, so it changes the numbers of both protons and neutrons: the proton number increases by 1 and the neutron number decreases by 1. This pathway occurs in unstable nuclides that have too many neutrons and lie above the band of stability (blue isotopes in Figure 1).
- Positron (β+) Decay and Electron Capture: Positron emission and electron capture effectively convert a proton into a neutron, so they occur in isotopes that have too few neutrons for their number of protons. These nuclides lie below the band of stability (yellow isotopes in Figure 1).

As with all decay pathways, if the daughter nuclides are not on the Belt, then subsequent decays will occur until the daughter nuclei land on the Belt. Concrete examples of each pathway are given below.
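For concreteness, here are standard examples of each decay pathway (added illustrations, not from the original text; neutrinos are omitted for simplicity):

- Alpha: ²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He (A decreases by 4, Z decreases by 2).
- Beta (β−): ¹⁴₆C → ¹⁴₇N + β− (a neutron becomes a proton; A is unchanged, Z increases by 1).
- Positron (β+): ¹¹₆C → ¹¹₅B + β+ (a proton becomes a neutron; A is unchanged, Z decreases by 1).
- Electron capture: ⁷₄Be + e− → ⁷₃Li (the same net change as positron emission).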
Magic Numbers
Magic numbers are numbers of protons or neutrons that occur naturally in isotopes that are particularly stable (similar to octets of valence electrons). Below is a list of these numbers; isotopes that have a magic number of either protons or neutrons tend to be stable. In some cases an isotope can have magic numbers of both protons and neutrons; these are called double (doubly) magic numbers (a worked example follows the list below). Double magic numbers are especially important for heavier isotopes, because of the growing repulsion between the protons.

The magic numbers:
- protons: 2, 8, 20, 28, 50, 82, 114
- neutrons: 2, 8, 20, 28, 50, 82, 126, 184
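As an added illustration of a doubly magic nuclide (not from the original text): lead-208 (²⁰⁸₈₂Pb) has Z = 82 protons and N = 208 − 82 = 126 neutrons. Both numbers appear in the lists above, which is consistent with ²⁰⁸Pb being an exceptionally stable heavy nuclide.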
Stability is also related to whether the numbers of protons and neutrons are even or odd. Counting the stable nuclides by the parity combinations even-even, even-odd, odd-even, and odd-odd shows that far more stable nuclides have an even-even combination than an odd-odd one.
Proton number (Z)   Neutron number (N)   Number of stable isotopes
Even                Even                 163
Even                Odd                  53
Odd                 Even                 50
Odd                 Odd                  4

Note

Although rare, four stable odd-odd nuclides exist: ²₁H, ⁶₃Li, ¹⁰₅B, and ¹⁴₇N.
Unstable or Stable
Here is a simple checklist that can help you decide whether a nuclide is likely to be stable (a worked example follows the list).
- Calculate the total number of nucleons (protons and neutrons) in the nuclide
- If the number of nucleons is even, there is a good chance it is stable.
- Are there a magic number of protons or neutrons?
- 2, 8, 20, 28, 50, 82, 114 (protons), 126 (neutrons), and 184 (neutrons) are particularly stable in nuclei.
- Calculate the N/Z ratio.
- Use the belt of stability (Figure 1) to determine the best way to get from an unstable nucleus to a stable nucleus.
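Here is an added worked example of applying the checklist (not from the original text), using iron-56 (⁵⁶₂₆Fe):

- Total nucleons: A = 56, which is even, so there is a good chance it is stable.
- Magic numbers: Z = 26 and N = 56 − 26 = 30; neither is a magic number.
- N/Z ratio: 30/26 ≈ 1.15, which falls on the belt of stability for a nucleus of this size (Figure 1).
- Conclusion: the checklist predicts ⁵⁶Fe should be stable, and it is in fact stable.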
Outside links
- http://employees.oneonta.edu/viningwj/sims/stability_of_isotopes_s.swf - Go to this website to see a more accurate graph of the belt of stability.
- http://teachertube.com/viewVideo.php?video_id=139403&title=Nuclear_Stability - This video goes over nuclear stability and the types of radioactive decay.
Problems
- Using the checklist above, state whether each isotope is an alpha-emitter, stable, or unstable: a) ⁴⁰₂₀Ca, b) ⁵⁴₂₅Mn, c) ²¹⁰₈₄Po
- If an isotope is located above the band of stability, what type of radioactivity does it undergo? What if it is below the band?
- Between the elements bromine and carbon, which is more stable according to the magic numbers?
- Name one of the stable nuclides that has an odd-odd combination of protons and neutrons.
Solutions
1) a) Stable, because this Ca isotope has 20 protons and 20 neutrons, both of which are magic numbers.
b) Unstable, because it has an odd number of protons (25) and an odd number of neutrons (29).
c) Alpha-emitter, because Z = 84 is beyond atomic number 83, placing it at the heavy end of the belt of stability where alpha decay occurs.

2) Above the band of stability: beta (β−) decay. Below the band: positron emission or electron capture.

3) Carbon is more stable.

4) Hydrogen-2, lithium-6, boron-10, or nitrogen-14.
Contributors
- Content was contributed, in part, from Socratic.org.