Wednesday, November 16, 2011

ISAAC (International Society for Augmentative/Alternative Communication) 2012 Call for Papers Countdown

Pittsburgh, PA  July 28-August 4, 2012
 
The Call for Papers deadline is fast approaching!
 
Tuesday 22 November 2011 is the deadline for submitting proposals for ISAAC. We invite you to share your clinical, educational, research and life experiences with a worldwide audience at ISAAC’s main conference, Highest Performance Communication: Best Life Experience – WOW!, to be held July 30-August 2, 2012 in Pittsburgh, PA. ISAAC expects to greet 2,000 participants, and presenters will be contributing to the world’s largest gathering devoted to augmentative and alternative communication (AAC) and assistive technology.
Submissions are invited in the following formats: 60-minute oral presentations, 20-minute oral presentations and poster presentations. The exciting program will also include plenary sessions and workshops.

Tuesday 31 July is Autism Awareness Day and Thursday 2 August is Amyotrophic Lateral Sclerosis (ALS) Awareness Day. Authors are encouraged to consider how their knowledge, skills and experience may relate to these special awareness days and be valued by the ISAAC audience. Here are a few keyword topics to help get your creativity started:
Acquired disabilities
Evidence-based practice
AAC & mounting issues
Autism
Families and life care planning
Multicultural issues
Assistive technology assessment
Funding/policy issues
Professional development
Brain-computer interfaces
International issues & developing nations
Telerehabilitation
Consumer issues and advocacy
Language and/or literacy development
Integrated controls and wheelchairs

Proposals for the ISAAC 2012 biennial conference must be submitted electronically. A complete submission includes full author information (affiliation, plus mailing and electronic contact addresses) and a biographical statement, together with the proposal title (15 words maximum), an abstract (150 words maximum), and a proposal summary (1,000 words maximum, including up to 10 reference citations).
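For authors who like to double-check a draft before submitting, here is a minimal sketch of a word-limit checker based on the figures above. The function and field names are invented for illustration, the 150-word abstract figure is read as a maximum, and this is no substitute for the official submission form:

```python
# Hypothetical helper: check a draft ISAAC 2012 proposal against the limits
# stated in the call for papers. Field and function names are invented here;
# the abstract's 150 words is read as a maximum.

def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

def check_proposal(title: str, abstract: str, summary: str,
                   references: list[str]) -> list[str]:
    """Return human-readable problems; an empty list means the draft fits."""
    problems = []
    if word_count(title) > 15:
        problems.append(f"Title is {word_count(title)} words (max 15).")
    if word_count(abstract) > 150:
        problems.append(f"Abstract is {word_count(abstract)} words (max 150).")
    if word_count(summary) > 1000:
        problems.append(f"Summary is {word_count(summary)} words (max 1000).")
    if len(references) > 10:
        problems.append(f"{len(references)} references cited (max 10).")
    return problems

if __name__ == "__main__":
    issues = check_proposal(
        title="AAC outcomes in telerehabilitation",
        abstract="Draft abstract text goes here.",
        summary="Draft summary text goes here.",
        references=["Reference 1"],
    )
    print("\n".join(issues) or "Draft fits all stated limits.")
```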

Proposals will be peer reviewed and evaluated on their quality, content, and significance. Only papers that have not been published previously will be accepted. Submissions will be accepted in English only, and only via the ISAAC web site and email processes.
 
Important Dates
Paper Submission Deadline: November 22, 2011
Paper Acceptance Confirmed: January 31, 2012

Please visit: www.isaac2012.org


--
November is Assistive Technology Awareness Month in Pennsylvania!
 
Amy S. Goldman
Associate Director, Institute on Disabilities at Temple University
College of Education
Student Center, Room 411 South
1755 N. 13th St.
Philadelphia, PA 19122

215-204-3862
www.disabilities.temple.edu
amy.goldman@temple.edu
 
Twitter: http://twitter.com/IODTempleU
Follow us on Facebook!
 
Mark your calendar:  ISAAC 2012
Pittsburgh – Wow!  Highest performance communication; best life experience – Wow!!!

http://www.facebook.com/pages/Isaac-2012-Pittsburgh/143324919049372

Twitter, There’s Nothing Wrong With Being A Social Network

Author: Robin Wauters
From TechCrunch
http://techcrunch.com/2011/11/16/of-course-twitter-is-a-social-network/

I just finished reading an interesting blog post by venture capitalist Bill Gurley about Twitter, a portfolio company of his firm Benchmark Capital. In it, he argues that there’s a misperception about Twitter in that people keep regarding it as a social network and pitting it against Facebook.

Gurley makes some good points, albeit ones that have been made in the past, about Twitter being an ‘information utility’, a ‘discovery engine’ and a ‘better RSS reader’ rather than a social network. Except I still think Twitter is a social network as well. Not that there’s anything wrong with that.

What Gurley is saying is that Twitter isn’t a social network in the way Facebook is, because there are some key differences and the two platforms are used differently. That is undeniably true, but it doesn’t make Twitter not a social network, at least not in my book.

I have certainly been using Twitter as a social network for years, to connect with friends and coworkers and people in this industry that I may have never met in real life. I engage with them publicly and through direct messages, I share media with them, and I use Twitter as an identity provider for a variety of third-party applications and services. More so than Facebook, even.

(I’ll also note that Twitter just became more like Facebook and its Ticker in many ways, too.)

As Gurley points out, Twitter is used by plenty of people to simply consume information and news without even signing up or actively using the service (“you don’t need to tweet to use Twitter”). But increasingly, that’s the direction Facebook is heading, too. I don’t see how that makes Twitter a pure-play information broadcasting service and no longer a social communication utility, because the company certainly attempts to convert every ‘lurker’ into an active, registered user at every turn.

That makes all the sense in the world, because these are the people Twitter can monetize down the line, and the company has to turn a profit at some point in the future.

So why the identity crisis? What’s the big deal with simply admitting Twitter is also a social network?

Writes Gurley:

For the vast majority of Twitter’s next 900 million users, the core usage modality will have very little to do with “tweeting,” and everything to do with “listening” or “hearing.”

All fine and dandy, but there wouldn’t be much to listen to if there weren’t any users doing the actual tweeting. And slice it any way you want, most people – at least in my observation over the past few years – don’t treat Twitter exclusively as an information broadcasting utility; they use it to engage with friends, family, coworkers and, yes, people they may not know in real life.

Twitter is not just about musicians sharing their tour experiences and work, politicians sharing their views, and news organizations (or individual journalists and field experts) sharing news and insights. There’s an army of users communicating in close-knit circles, and they shouldn’t be brushed off simply because Twitter doesn’t like being pitted against Facebook in the social networking category. If Twitter really isn’t a social network, let’s see it take away the ability to send direct, private messages to other users, since messaging is an inherently ‘few-to-few’ (or even one-to-one) social activity.

Users would revolt, and justifiably so. Also, Twitter, read this attentively.

Gurley’s point reminds me a lot of the time when MySpace was so busy trying to convince people that it wasn’t a social network like Facebook but rather a ‘personalized entertainment destination’ or whatever it tried to call itself. That always confused me, because MySpace was so obviously a social network, but one that desperately tried to reposition itself because it could sense it would lose the ‘king of social networking’ title to Facebook. The attempt was unsuccessful.

So yes, dear Twitter management and investors, we realize full well Twitter is different from Facebook, and that you’re not necessarily head-to-head rivals. But the fact that you don’t appreciate your service being called a social network doesn’t mean there’s no significant functionality overlap lurking in the places you’re both heading to. You are both ‘leaders in social networking’.

I suggest you play to your strengths and let people call Twitter a social network if they believe that’s what it is, to them, rather than trying to fight those nasty ‘misperceptions’ out there.
Lord knows you’ve got bigger fish to fry.

Apple’s AssistiveTouch Helps the Disabled Use a Smartphone

From The New York Times. By David Pogue

November 16, 2011

http://pogue.blogs.nytimes.com/2011/11/10/apples-assistivetouch-helps-the-disabled-use-a-smartphone/

Plenty has been written about the new iPhone 4S, with its voice-controlled virtual assistant Siri, and about iOS 5, its software.

But in writing a book about both, I stumbled across an amazingly thoughtful feature that I haven’t seen a word about: something called AssistiveTouch.

Now, Apple has always gone to considerable lengths to make the iPhone usable for people with vision and hearing impairments. If you’re deaf, you can have the LED flash to get your attention when the phone rings. You can create custom vibration patterns for each person who might call you. You can convert stereo music to mono (handy if you’re deaf in one ear).

If you’re blind, you can literally turn the screen off and operate everything — do your e-mail, surf the Web, adjust settings, run apps — by tapping and letting the phone speak what you’re touching. You can also magnify the screen or reverse black for white (for better-contrast reading).

In short, iPhone was already pretty good at helping out if you’re blind or deaf. But until iOS 5 came along, it was tough rocks if you had motor-control problems. How are you supposed to shake the phone (a shortcut for “Undo”) if you can’t even hold the thing? How are you supposed to pinch-to-zoom a map or a photo if you can’t even move your fingers?

One new feature, called AssistiveTouch, is Apple’s accessibility team at its most creative. When you turn on this feature in Settings->General->Accessibility, a new, white circle appears at the bottom of the screen. It stays there all the time.

When you tap it, you get a floating on-screen palette. Its buttons trigger motions and gestures on the iPhone screen without requiring hand or multiple-finger movement. All you have to be able to do is tap with a single finger, or even with a stylus held in your teeth or fist.

For example, you can tap the on-screen Home button instead of pressing the physical Home button.

If you tap Device, you get a sub-palette of six functions that would otherwise require you to grasp the phone or push its tiny physical buttons. There’s Rotate Screen (tap this instead of turning the phone 90 degrees), Lock Screen (tap instead of pressing the Sleep switch), Volume Up and Volume Down (tap instead of pressing the volume keys), Shake (does the same as shaking the phone to undo typing), and Mute/Unmute (tap instead of flipping the small Mute switch on the side).

If you tap Gestures, you get a peculiar palette that depicts a hand holding up two, three, four, or five fingers. When you tap the three-finger icon, for example, you get three blue circles on the screen. They move together. Drag one of them, and the phone thinks you’re dragging three fingers on its surface. Using this technique, you can operate apps that require multiple fingers dragging on the screen.

To me, the most impressive part is that you can define your own gestures. In Settings->General->Accessibility, you can tap Create New Gesture to draw your own gesture right on the screen, using up to five fingers.

For example, suppose you’re frustrated in Google Maps because you can’t do the two-finger double-tap that means “zoom out.” On the Create New Gesture screen, get somebody to do the two-finger double-tap for you. Tap Save and give the gesture a name—say, “2 double tap.”

From now on, “2 double tap” shows up on the final AssistiveTouch panel, called Favorites, ready to trigger with a single tap by a single finger or stylus. (Apple starts you off with one predefined gesture already in Favorites: Pinch. That’s the two-finger pinch or spread gesture you use to zoom in and out of photos, maps, Web pages, PDF documents, and so on. Now you can trigger it with only one finger.)
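To make the record-and-replay idea concrete, here is a conceptual sketch, in Python rather than anything Apple ships, of how a custom gesture might be stored as timed frames of up to five touch points and then replayed from the single point a user actually taps. Apple’s internals aren’t public; every name below is invented for illustration:

```python
# Conceptual sketch of AssistiveTouch-style gesture record-and-replay.
# Apple's actual implementation is not public; everything here is invented
# purely to illustrate the idea described above.
from dataclasses import dataclass

@dataclass
class Frame:
    """One sampled instant of a gesture: a timestamp plus (x, y) per finger."""
    t: float
    touches: list[tuple[float, float]]

def record(samples: list[Frame]) -> list[Frame]:
    """'Recording' a custom gesture is just storing its sampled frames."""
    assert all(len(f.touches) <= 5 for f in samples), "at most five fingers"
    return samples

def replay(gesture: list[Frame], anchor: tuple[float, float]) -> list[Frame]:
    """Replay a saved gesture at a new spot: shift every stored touch so the
    gesture's first touch lands on the single point the user actually tapped."""
    x0, y0 = gesture[0].touches[0]
    dx, dy = anchor[0] - x0, anchor[1] - y0
    return [Frame(f.t, [(x + dx, y + dy) for x, y in f.touches])
            for f in gesture]

# Demo: a two-finger double tap saved once, then replayed from one tap.
saved = record([Frame(0.00, [(100, 200), (140, 200)]),
                Frame(0.15, [(100, 200), (140, 200)])])
print(replay(saved, anchor=(300, 420))[0].touches)  # [(300, 420), (340, 420)]
```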

I doubt that people with severe motor control challenges represent a financially significant number of the iPhone’s millions of customers. But somebody at Apple took them seriously enough to write a complete, elegant and thoughtful feature that takes down most of the barriers to using an app phone.
I, for one, am impressed.

Stanford joins BrainGate team developing brain-computer interface to aid people with paralysis

November 14, 2011, by Tanya Lewis
The implantable BrainGate neural interface can detect and record brain signals, allowing persons who have lost the use of arms and legs to have point-and-click control of a computer.
(Medical Xpress) -- Stanford University researchers are enrolling participants in a pioneering study investigating whether people with paralysis can use a technology that interfaces directly with the brain to control computer cursors.

Those who may be eligible to enroll in the trial include people with weakness of all four limbs resulting from cervical spinal cord injury, brainstem stroke, muscular dystrophy, or motor neuron disease, such as amyotrophic lateral sclerosis (Lou Gehrig’s disease).

The pilot clinical trial, known as BrainGate2, is based on technology developed at Brown University and is led by researchers at Massachusetts General Hospital, Brown and the Providence Veterans Affairs Medical Center. The researchers have now invited the Stanford team to establish the only trial site outside of New England.

Under development since 2002, BrainGate is a combination of hardware and software that directly senses electrical signals in the brain that control movement. The device — a baby-aspirin-sized array of electrodes — is implanted in the cerebral cortex (the outer layer of the brain) and records its signals; computer algorithms then translate the signals into digital instructions that may allow people with paralysis to control external devices.
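To give a non-specialist feel for that translation step, here is a deliberately simplified sketch of a linear decoder that turns a vector of per-electrode firing rates into a 2-D cursor velocity. Published BrainGate decoders (Kalman-filter-based ones fit during calibration, among others) are far more sophisticated; every number and weight below is an illustrative placeholder:

```python
# Deliberately simplified sketch of the decoding step: firing rates in,
# 2-D cursor velocity out. Published BrainGate decoders (e.g., Kalman
# filters fit during calibration) are far more sophisticated; all weights
# and rates below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 96                     # electrode count on a BrainGate-scale array

W = rng.normal(scale=0.1, size=(2, n_channels))  # placeholder readout weights
baseline = rng.uniform(5, 20, size=n_channels)   # per-channel mean rate (Hz)

def decode_velocity(rates: np.ndarray, prev_v: np.ndarray,
                    alpha: float = 0.8) -> np.ndarray:
    """Linear readout of rate deviations, exponentially smoothed over bins."""
    v = W @ (rates - baseline)
    return alpha * prev_v + (1 - alpha) * v

# Simulated use: integrate decoded velocity into a cursor position per 50 ms bin.
pos, v = np.zeros(2), np.zeros(2)
for _ in range(20):
    rates = baseline + rng.normal(scale=2.0, size=n_channels)  # fake data
    v = decode_velocity(rates, v)
    pos += v * 0.05                                            # dt = 50 ms
print("decoded cursor position:", pos)
```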

“This technology is truly extraordinary, and I’m excited to begin testing it,” said Jaimie Henderson, MD, lead investigator of the Stanford branch of the trial. “One of the biggest contributions that Stanford can offer is our expertise in algorithms to decode what the brain is doing and turn it into action.”

The trial as a whole is directed by Leigh Hochberg, MD, PhD, who is affiliated with MGH, Brown, the Providence VAMC and Harvard Medical School.

Henderson, an associate professor of neurosurgery in Stanford’s School of Medicine, will be conducting the surgeries to implant the device and then evaluating its effects. He is working with Krishna Shenoy, PhD, associate professor of electrical engineering in the School of Engineering. Shenoy’s work focuses on understanding how the brain controls movement, and on translating this knowledge to build high-performance neural prosthetic systems using sophisticated software. Henderson and Shenoy co-direct Stanford’s Neural Prosthetics Translational Laboratory.

“The BrainGate program has been a model of innovation and teamwork as it has taken the first giant steps toward turning potentially life-changing technology into a reality,” Shenoy said. “This team brings together experts from a variety of fields.”

Hochberg, a critical care neurologist and neuro-engineer, welcomed his Stanford colleagues to the team.

“We couldn’t ask for a better expansion of our collaboration,” he said. “Drs. Henderson and Shenoy are leaders in the field of neural interfaces. Our combined Brown-Harvard-MGH-Stanford-VA team of physicians, scientists and engineers is well-suited not only to explore the possibilities and address the challenges of neural interface research, but simultaneously to make the fundamental discoveries that we hope will yield even greater advances in the development of restorative neurotechnologies for people with paralysis or limb loss.”

BrainGate is based on research and technology developed in the laboratory of John Donoghue, PhD, the Henry Merritt Wriston Professor of Neuroscience and Engineering at Brown University, director of the Brown Institute for Brain Science and a senior research career scientist with the Providence VA Medical Center. Donoghue co-directs the overall BrainGate research effort with Hochberg.

In 2006, Donoghue and Hochberg led the publication of a landmark paper in Nature demonstrating that trial participants could control a computer cursor and other devices directly by neural activity. In that same issue of the journal, a team led by Shenoy demonstrated a brain-computer interface system, using the same electrode array, which set important performance benchmarks.

Earlier this year, the BrainGate2 research team published another paper in the Journal of Neural Engineering showing that the system allowed a patient to accurately control a computer cursor more than 1,000 days after it was implanted.

The BrainGate team is also engaged in research toward control of advanced prosthetic limbs and toward direct brain-based control of functional electrical stimulation devices for people with spinal cord injury, in collaboration with researchers at the Cleveland Functional Electrical Stimulation Center.

“Decades of publicly funded, fundamental neuroscience research continue to provide the building blocks for tomorrow’s breakthroughs,” Donoghue said. “Translating that knowledge toward the creation of powerful assistive technologies is the BrainGate team’s continuing goal.”

The BrainGate research is one of several efforts throughout the world aiming to develop technologies to restore function by recording signals directly from the brain and converting those signals into commands for computers and assistive devices.

The systems deployed up to this time, while impressive, require further testing and development before a person with paralysis will be able to perform the same tasks at the same speed as an able-bodied person, said Henderson. While he expects the technology to require several years of continued research before it might be widely available to patients, he has high hopes for the new collaboration.

“With the tremendous synergy between Stanford, Brown, MGH/Harvard and the VA, I am confident we can make these systems useful to people with paralysis,” Henderson said.

Although the BrainGate researchers will record and report how well the technology operates, the primary purpose of this pilot study is to collect initial information about whether the device is safe to use in humans.

More information: http://www.braingate2.org/

Monday, November 14, 2011

Controlling an Avatar with Your Brain? Israeli Lab is Trying

From: NoCamels - 10/26/2011
By: Alexandra Mann

Researchers at the Interdisciplinary Center's Advanced Virtuality Lab (AVL) are developing the next generation of human-computer interfaces. AVL's main goal is to build the virtual worlds and interfaces of the future, investigating human behavior and the human mind in a virtual reality setting. One of its projects, Virtual Embodiment and Robotic Re-embodiment, is researching a way to control a virtual or physical body using only the mind. The research team is one of the first to use a brain scanner to control a computer application interactively in real time, which could help severely disabled patients communicate, according to AVL's Doron Friedman.

AVL researchers are also working on the Being in Augmented Multi-modal Naturally-networked Gatherings (BEAMING) project, telepresence technology that aims to produce the feeling of a live interaction using mediated technologies such as videoconferencing, virtual and augmented reality, haptics, spatialized audio and robotics. The researchers are using BEAMING to develop a body language and gesture translation system.
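For readers curious what "controlling an application in real time from a brain scanner" means mechanically, here is a toy sketch of the closed loop: read one activity sample per scanner volume, compare it with a baseline, and emit a command. Every function below is a hypothetical stub; real fMRI-based control involves preprocessing, classifiers and per-user calibration the article only hints at:

```python
# Toy sketch of a real-time "brain scanner -> application" loop. Every
# function here is a hypothetical stub; real fMRI-based control adds
# preprocessing, classification and per-user calibration.
import random
import time

def read_activation() -> float:
    """Stub for one activity sample from a brain region of interest."""
    return random.gauss(0.0, 1.0)

def send_to_avatar(command: str) -> None:
    """Stub for the virtual-world side of the loop."""
    print("avatar command:", command)

def control_loop(baseline: float = 0.0, threshold: float = 1.0,
                 n_volumes: int = 10) -> None:
    """One command per scanner volume: activation well above baseline maps
    to an action, anything else to 'stay'."""
    for _ in range(n_volumes):
        activation = read_activation()
        command = "move_forward" if activation - baseline > threshold else "stay"
        send_to_avatar(command)
        time.sleep(0.1)   # real fMRI volumes arrive roughly every 2 s

control_loop()
```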

Read the entire article and view a video (2:56) at:
http://nocamels.com/2011/10/controlling-an-avatar-with-your-brain-israeli-lab-is-trying/