
Thursday, February 23, 2017

Brain–Computer Interface Allows Speediest Typing to Date

Via Scientific American -- https://www.scientificamerican.com/article/brain-computer-interface-allows-speediest-typing-to-date/
A new interface system allowed three paralyzed individuals to type words up to four times faster than the speed that had been demonstrated in earlier studies
A participant enrolled by Stanford University in the BrainGate clinical trial uses the brain-computer interface to type by controlling a computer cursor with her thoughts. Credit: Courtesy Stanford University
Ten years ago Dennis Degray’s life changed forever when he slipped and fell while taking out the trash in the rain. He landed on his chin, causing a severe spinal cord injury that left him paralyzed below the neck. Now he’s the star participant in an investigative trial of a system that aims to help people with paralysis type words using only their thoughts.
The promise of brain–computer interfaces (BCIs) for restoring function to people with disabilities has driven researchers for decades, yet few devices are ready for widespread practical use. Several obstacles exist, depending on the application. For typing, however, one important barrier has been reaching speeds sufficient to justify adopting the technology, which usually involves surgery. A study published Tuesday in eLife reports the results of a system that enabled three participants—Degray and two people with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's disease, a neurodegenerative disease that causes progressive paralysis)—to type at the fastest speeds yet achieved using a BCI—speeds that bring the technology within reach of being practically useful. “We're approaching half of what, for example, I could probably type on a cell phone,” says neurosurgeon and co-senior author, Jaimie Henderson of Stanford University.
The researchers measured performance using three tasks. To demonstrate performance in the most natural scenario possible, one participant was assessed in a “free typing” task, where she just answered questions using the device. But typing speeds are conventionally measured using copy typing, which involves typing out set phrases, so all three participants were also assessed this way. The woman who performed the free-typing task typed at more than six words per minute, the other ALS patient managed nearly three and Degray achieved almost eight. The group reported comparable results in a Nature Medicine study in 2015, but those results were achieved using software that exploited the statistics of English to predict subsequent letters. No such software was employed in this study.
The drawback of copy typing is that performance can vary with the specific phrases and keyboard layouts used. To get a measure independent of such factors, the third task involved selecting squares on a six-by-six grid as they lit up randomly. This comes closer to quantifying the maximum rate at which the system can output information, and it is easily converted into a digital “bits per second” measure. The team used this range of tasks, without predictive software, because one of the study’s central aims was to develop standardized measures. “We need to establish measures so that—in spite of potential variability between people, methods and researchers—we can really say, ‘clearly this new advance led to higher performance,’ because we have systematic ways of comparing that,” says co-lead author Chethan Pandarinath, then a postdoctoral fellow at Stanford. “It's critical for moving this technology forward.”
The two ALS patients achieved 2.2 and 1.4 bits per second, respectively, more than doubling previous records (held by these same participants in a previous study from this group). Degray achieved 3.7 bits per second, which is four times faster than the previous best speed. “This is a pretty large leap in performance in comparison to previous clinical studies of BCIs,” Pandarinath says.
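For readers curious how a grid task like the one above translates into a bits-per-second figure, here is a minimal sketch of one common formulation, in which each selection on an N-square grid is credited with log2(N - 1) bits and incorrect selections are penalized so that random clicking scores zero. The numbers and formula are illustrative, not necessarily the exact measure used in the study.

```python
from math import log2

def achieved_bitrate(n_targets, correct, incorrect, seconds):
    """Convert grid-task selections into an information rate in bits per second.

    Each selection on an n_targets grid is credited with log2(n_targets - 1) bits,
    and incorrect selections are subtracted so that random clicking scores zero.
    The values here are illustrative, not taken from the study.
    """
    bits_per_selection = log2(n_targets - 1)
    net_selections = max(correct - incorrect, 0)
    return bits_per_selection * net_selections / seconds

# Example: 120 correct and 5 incorrect selections on a 6x6 grid over 5 minutes
print(round(achieved_bitrate(36, 120, 5, 300), 2))  # -> 1.97 bits per second
```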
Other researchers agree these are state-of-the-art results. “This is the fastest typing anyone has shown with a BCI,” says biomedical engineer Jennifer Collinger of the University of Pittsburgh, who was not involved in the study. “It's on par with technologies like eye-trackers, but there are groups those technologies don’t work for, such as people who are ‘locked-in.’” These speeds also approach what ALS patients questioned in a survey said they would want from a BCI device. “You're getting to the point where performance is good enough that users would actually want to have it,” Collinger says.
Participants had either one or two tiny (one-sixth-inch) electrode arrays implanted on the surfaces of their brains. These “intracortical” implants contain 96 microelectrodes that penetrate one to 1.5 millimeters into parts of the motor cortex that control arm movements. Two of the surgeries were performed by Henderson, who co-directs Stanford’s Neural Prosthetics Translational Laboratory with the study’s senior co-author, bioengineer Krishna Shenoy. The neural signals recorded by the electrodes are transmitted via a cable to a computer where algorithms developed in Shenoy's lab decode the participant's intentions and translate the signals into movements of a computer cursor. The Stanford team is part of a multi-institute consortium called BrainGate, which includes Massachusetts General Hospital and Brown University, among others.
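To make the decoding step concrete, the sketch below assumes a simple linear map from binned spike counts to a two-dimensional cursor velocity, with exponential smoothing to steady the cursor. The weights, bin size and smoothing factor are invented for illustration; the lab's actual decoders are far more sophisticated than this toy version.

```python
import numpy as np

N_ELECTRODES = 96
rng = np.random.default_rng(1)

# Hypothetical decoder weights; in practice these are fit during a calibration
# session in which the participant attempts (or imagines) known movements.
decode_weights = rng.normal(scale=0.01, size=(2, N_ELECTRODES))

def decode_velocity(spike_counts, previous_velocity, smoothing=0.8):
    """Map per-electrode spike counts in one time bin to a smoothed (vx, vy)."""
    raw = decode_weights @ spike_counts
    return smoothing * previous_velocity + (1.0 - smoothing) * raw

velocity = np.zeros(2)
for _ in range(5):  # one decode per short bin of neural data (e.g. tens of ms)
    counts = rng.poisson(lam=3.0, size=N_ELECTRODES)  # stand-in for recorded spikes
    velocity = decode_velocity(counts, velocity)
print(velocity)  # the cursor would be nudged by this velocity each bin
```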
Other methods of interfacing with the brain via electrodes include those put on the scalp for electroencephalography (EEG) and ones placed under the skull on the brain’s surface, known as electrocorticography (ECoG). The advantage of intracortical implants is they can pick out activity from single cells whereas the other methods capture the average activity of thousands of neurons. “This performance is 10 times better than anything you would get from EEG or ECoG, [which don’t] contain enough information to do this kind of task at this level,” says neurobiologist Andrew Schwartz, at Pitt, who was not involved in the study. Movement and scarring reduce signal quality over roughly the first two years after implantation, but what remains is still useful—“much better than you get with any other technique,” he says.
The biggest drawback, currently, is having wires coming out of people's heads and attached to cables, which is cumbersome and carries risks. “The future is making these devices wireless,” Pandarinath says. “We're not there yet with people but we’re probably closer to five than 10 years away, and that’s a critical step [toward] a device that you could send somebody home with and be less worried about potential risks like infection.” The devices would need wireless power but several groups are already working on this. “Most of the technology is basically there,” Schwartz says. “You can do that inductively using coils—like wirelessly charging your cell phone in a cradle with coils on either side.”
The team attributes the improvements to better systems engineering and decoding algorithms. “Performing repeated computations rapidly is critical in a real-time control system,” Pandarinath says. In a study published last year, led by Stanford bioengineer Paul Nuyujukian, the researchers trained two macaque monkeys to perform a task similar to the grid exercise used in this study. The animals typed sentences by selecting characters on a screen as they changed color (although they wouldn’t have understood what the words meant). When the team added a separate algorithm to detect the monkeys’ intention to stop, their best speed increased by two words per minute.
This “discrete click decoder” was also used in the current study. “We've basically created a ‘point and click’ interface here, like a mouse. That’s a good interface for things like modern smartphones or tablets,” Pandarinath says, “which would open a whole new realm of function beyond communication: surfing the Web, playing music, all sorts of things able-bodied people take for granted.”
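The role of a separate click decoder can be illustrated with a toy detector that registers a selection only when a decoded “click” probability stays above a threshold for several consecutive time bins, guarding against accidental selections. The threshold and hold time below are invented values, not parameters from the study.

```python
def click_detector(click_probabilities, threshold=0.9, hold_bins=5):
    """Yield indices of time bins where a selection ("click") is registered.

    A click fires only after the decoded click probability has stayed at or
    above the threshold for hold_bins consecutive bins. Both parameters are
    illustrative assumptions.
    """
    run = 0
    for i, p in enumerate(click_probabilities):
        run = run + 1 if p >= threshold else 0
        if run == hold_bins:
            yield i   # register the click
            run = 0   # re-arm the detector

probs = [0.1, 0.95, 0.96, 0.97, 0.99, 0.98, 0.2, 0.95]
print(list(click_detector(probs)))  # -> [5]
```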
The Stanford team is already investigating wireless technology, and has ambitious long-term goals for the project. “The vision we hope to achieve someday would be to be able to plug a wireless receiver into any computer and use it using your brain,” Henderson says. “One of our main goals is to allow 24 hours a day, seven days a week, 365 days a year control of a standard computer interface using only brain signals.”

Microsoft app helps people with ALS speak using just their eyes


https://www.newscientist.com/article/2121579-microsoft-app-helps-people-with-als-speak-using-just-their-eyes/

A smartphone held up in front of a person gazing to the right, showing the GazeSpeak app on the phone screen
The eyes say it all
GazeSpeak, Enable Team, Microsoft Research
It can be difficult to communicate when you can only move your eyes, as is often the case for people with ALS (also known as motor neurone disease). Microsoft researchers have developed an app to make talking with your eyes easier, called GazeSpeak.
GazeSpeak runs on a smartphone and uses artificial intelligence to convert eye movements into speech, so a conversation partner can understand what is being said in real time.
The app runs on the listener’s device. They point their smartphone at the speaker as if they are taking a photo. A sticker on the back of the phone, visible to the speaker, shows a grid with letters grouped into four boxes corresponding to looking left, right, up and down. As the speaker gives different eye signals, GazeSpeak registers them as letters.
“For example, to say the word ‘task’ they first look down to select the group containing ‘t’, then up to select the group containing ‘a’, and so on,” says Xiaoyi Zhang, who developed GazeSpeak whilst he was an intern at Microsoft.
GazeSpeak selects the appropriate letter from each group by predicting the word the speaker wants to say based on the most common English words, similar to predictive text messaging. The speaker indicates they have finished a word by winking or looking straight ahead for two seconds. The system also takes into account added lists of words, like names or places that the speaker is likely to use. The top four word predictions are shown onscreen, and the top one is read aloud.
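As a rough sketch of how this kind of group-based decoding can work, the snippet below matches a sequence of eye gestures against a small vocabulary. The letter grouping is invented for illustration (arranged so that ‘t’ falls in the “down” group and ‘a’ in the “up” group, matching the example above); the real app’s sticker layout and frequency-based ranking will differ.

```python
# Hypothetical letter grouping; the actual GazeSpeak sticker layout differs.
GROUPS = {
    "up":    set("abcdef"),
    "right": set("ghijklm"),
    "down":  set("nopqrstu"),
    "left":  set("vwxyz"),
}

def letter_group(letter):
    for direction, letters in GROUPS.items():
        if letter in letters:
            return direction
    return None

def candidates(gestures, vocabulary):
    """Return vocabulary words whose letters match the observed eye gestures."""
    matches = [word for word in vocabulary
               if len(word) == len(gestures)
               and all(letter_group(ch) == g for ch, g in zip(word, gestures))]
    # A real system would rank matches by word frequency; we just sort them.
    return sorted(matches)

vocab = ["task", "tusk", "dusk", "desk", "ask"]
print(candidates(["down", "up", "down", "right"], vocab))  # -> ['task']
```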
“We’re using computer vision to recognise the eye gestures, and AI to do the word prediction,” says Meredith Morris at Microsoft Research in Redmond, Washington.
The app is designed for people with motor disabilities like ALS, because eye movement can become the only way for people with these conditions to communicate. ALS progressively damages nerve cells, affecting a person’s ability to speak, swallow and eventually breathe. The eye muscles are often some of the last to be affected.

Board of the old

“People can become really frustrated when trying to communicate, so if this app can make things easier that’s a really good thing,” says Matthew Hollis from the Motor Neurone Disease Association.
There are currently limited options for people with ALS to communicate. The most common is to use boards displaying letters in different groups, with a person tracking the speaker’s eye movements as they select letters. But it can take a long time for someone to learn how to interpret these eye movements effectively.
GazeSpeak proved much faster to use in an experiment with 20 people trying both the app and the low-tech boards. Completing a sentence with GazeSpeak took 78 seconds on average, compared with 123 seconds using the boards. The people in the tests did not have ALS, but the team also got feedback on the technology from some people with ALS and their interpreters. One person who tried the device typed a test sentence in just 62 seconds and said he thought it would be even quicker in a real-life situation, as his interpreter can more easily predict what he is likely to say.
“I love the phone technology; I just think that would be so slick,” said one of the interpreters.
Other systems currently use software to track eye movements with infrared cameras. But these are often expensive and bulky, and infrared cameras don’t work very well in sunlight. The GazeSpeak app is portable and comparatively cheap, as it only requires an iOS device, like an iPhone or iPad, with the app installed.
Microsoft will present the app at the Conference on Human Factors in Computing Systems in Colorado in May. The researchers say it will be available on the Apple App Store before the conference, and the source code will be made freely available so that other people can help to improve it.

Wednesday, June 1, 2016

Aussies on the verge of bionics ‘Holy Grail’ ahead of human trials of brain machine interface technology


Australian researchers are leading the way with brain machine interface technology.
Nick Whigham, news.com.au
IMAGINE being able to communicate with a machine using nothing but your thoughts. 
That is the goal currently being pursued by a team of researchers and engineers at Melbourne University who are leading the way in the hugely significant field of developing brain machine interfaces.
In an effort to accomplish what has been likened to machine telepathy, they have developed a tiny biocompatible implant called a stentrode, which is implanted into a blood vessel next to the brain. The tiny implant records electrical activity from a specific part of the brain, and the information is then fed into a decoding algorithm that interprets the electrical activity, or thoughts.
Dr Tom Oxley is leading the research and perhaps the only thing more impressive than the science involved is the story behind how he secured funding to embark on the project.
While on holiday in New York about four years ago, Dr Oxley sent a cold-call email to US Colonel Geoffrey Ling, who at the time had just become the director of the Pentagon’s science and research unit DARPA.
Much to the surprise of the trainee neurologist at Royal Melbourne Hospital, he was quickly invited to the US Defence Department’s research agency and found himself pitching his bold idea to its top brass.
They agreed to give him $1 million to get started on his work.
“I don’t think any other body in the world would’ve funded it,” Dr Oxley tells news.com.au. “It was something that was so blue sky and out there.”
Dr Thomas Oxley has been working on this idea since 2007. Source: Supplied
There is some unfounded stigma attached to working with DARPA.
“There are a lot of cynical representations of DARPA about conducting black box evil work,” he says. “But my experience was that of an open, academically and creatively rich environment to pursue next generation research.”
It’s a misconception that extends to his work in brain machine interfaces.
Given the incredible nature of the science, many are quick to jump to lofty conclusions about its future capabilities including speculating about far flung military applications and mind control.
The technology is “an incredible step forward... but it’s a little bit overblown with what’s likely to happen here,” he says, referring to the more “science fiction” possibilities of the technology.
Dr Oxley stressed the implanted device is simply used to record information from the brain, not implanting information into it. “So when people start talking about mind control and things like that, actually this is a technology that is totally controlled by the user… it doesn’t actually work the other way around.”
His team is purely focused on the life altering benefits the technology can bring to the medical industry, primarily in the treatment of paralysis and epilepsy.
The stentrode is inserted into the blood vessel using a catheter. Source: Supplied
Dr Oxley and his team were then able to leverage the DARPA funding to secure Australian government funding.
Back in Melbourne, Terence O’Brien, the head of Melbourne University’s Department of Medicine, embraced the project with gusto, referring to it as the “Holy Grail for research in bionics”.
He introduced Dr Oxley to engineers Tony Burkitt and David Grayden who at the time were working on a bionic eye. In the following months postdoctoral researcher Nick Opie joined Dr Oxley as a lead researcher on the project.
Fast forward to 2016 and the team had published the successful results of its animal trials in the journal Nature Biotechnology.
“It’s one thing to prove that we can record that type of data but the next stage is to demonstrate in a human that we can get the human user to control that signal in a way in which is useful,” Dr Oxley says.
Ultimately, the process hinges on the ability of the technology to translate the electric brain activity into useful information. To do so, they require a tailor-made decoding algorithm.
“There is a lot of work being done in this space but what’s lacking now is a kind of framework for people to continue to improve on these algorithms,” Dr Oxley says.
“We are modelling as best we can the decoding algorithms to make it work but really until it’s in (humans) it’s going to be challenging to improve on these systems.”
The group is aiming to carry out human trials in the near future, most likely next year, when the project will really begin to take shape.
“The beginning is probably going to be slow. We are aiming for basic control of a couple different directions on a computer screen with a cursor and then with that we hope to use that to manipulate mobility assist devices such as exoskeletons,” Dr Oxley says.
For those suffering from paralysis or severe spinal cord injuries, the technology offers “the capacity to get information out of their brain to modulate movement systems that will basically enable them to interact with their environment again.”
Another objective is to allow doctors treating a patient with uncontrollable seizures to have a constant data stream of what’s happening in their brain in order to predict and address the issue.
The team is keen to get the human patient trials under way and is certainly optimistic about the potential.
“I think what we’re seeing is the start of a whole new field,” Dr Oxley says.
Dr Tom Oxley and Dr Nick Opie, the lead researchers on the project. Picture: David Caird. Source: News Limited

‘It's as simple as just talking’: VocaliD gives a voice to the voiceless


Imagine losing your voice. Not just for a minute, a day, or even a week. Imagine it’s not there anymore. Ever. How would you cope?
Until now, people only had access to limited technology that made everyone sound alike, with robotic tones much like Stephen Hawking’s. While it’s a huge and important step up from being mute, the speechless have sought something more human, more personal — more ‘them’.
Speech scientist Rupal Patel is helping break down communication barriers for the more than 10 million people without a voice through VocaliD, the groundbreaking technology company where she is chief executive officer. The company is pioneering the customization of digital voices and is working with Saatchi & Saatchi New York to help tell the story.
“We're entering in the market for assistive technology where people have to have this voice. They don't have other choices, and until now they've been given these generic sounding voices,” explains Patel, who gained wider recognition for the technology through an inspiring TED Talk.
Patel wanted to personalize the experience and she founded the company in 2014 to create custom vocal identities and celebrate the diversity of the human voice. The great thing is, anyone can contribute a voice. They just have to log on to the company’s website, turn on their computer’s microphone and record several sentences. The company then logs that voice into The Human Voicebank. This crowdsourcing of voices currently has logged over 11,000 speakers in 110 countries, and makes it affordable for anyone without a voice to get a match that fits their gender, age and personality. Essentially, it helps create a vocal DNA for the voiceless.
In addition, people who are losing their voice to a disease or condition, or who simply want to preserve their voice against the unexpected, can log their own voices. Patel sees The Human Voicebank as a better approach than having one voice actor record many statements over days in a studio.
“We want to have a technology match. [What] we're able to do with six hours of someone's voice isn't going to be the same as creating a Siri-like voice for millions of dollars and lots and lots of hours. We're really pushing the envelope on technology and even the pricing part of it to see how we can get this off the ground. We do see that more and more things are going to start to talk, and we're going to be relevant then too, to that broader market,” says Patel.
Goldivox
Generating interest in the technology is Saatchi & Saatchi New York, who have created an engaging, interactive animated video. Goldivox tells the story of a little girl, unable to speak, who searches to find her perfect voice match. She travels the globe until she finally finds a girl whose voice fits. The interactivity comes as viewers speak the words shown on screen; their recorded voices change the story as it unfolds. It not only gets the word out about VocaliD, it also encourages people to become a part of the voice bank.
“This was a very unique challenge that demanded a unique solution,” comments Jay Benjamin, Saatchi NY’s chief creative officer. “We hope this interactive storytelling experience will help people feel how powerful their own voice can be, and that they will be compelled to donate their voice through VocaliD or even spread the message to others who might donate theirs.”  
Patel is excited about the interest the Goldivox video can generate.
“As we understand Goldivox’s need to find a voice that fits, we discover that each one of us has the power to share voice. The interactive read-along invites you to empathize, act upon and cheer on Goldivox all in one. It’s so exciting to have the creative genius of the Saatchi team bring our vision to life through Goldivox’s voice.”
Benjamin said the interactivity came about somewhat by chance.
“The interactive component of it came as we worked through this together with VocaliD, and just sitting together, said, ‘wouldn't it be cool if this wasn't just an animated story, but if the viewer could actually use their voice to move the story along?’”
For Patel, developing the technology and taking it out of the lab was personal, in that she wanted to help those with disabilities who didn’t have a voice.
“They're people, and they have a voice. How do we make it so that they have their own voice because really we haven't really leveled the playing field until you give them a voice that makes them feel like a human being as opposed to just a robot, right? Seeing people with disabilities as fully human is definitely what's personally driving this. Everyday people don't even know that people with speech disabilities suffer or have to deal with this kind of technology. They just don't know. They don't meet them. They don't encounter them,” adds Patel.
“Communication disability is so isolating that people kind of get removed from society, and my hope is that they can re-enter society and be themselves. But I think it's also timely because everyone else, the people that aren't disabled, participate in that movement. I think that's the coolest thing. I can share my voice with someone who can't speak. The fact that I can do something so meaningful for them, I think people are dying to do meaningful things for people. The fact that I can record in my own home, off my own computer, that ubiquity of recording and people understanding how they can do those things pretty easily. It's all about timing, both in the social realm but also in the technological realm.”
The story of Goldivox and its interactivity is helping bring greater awareness to the cause, and Benjamin believes some of that comes from the way those who interact with their voices can steer the story.
“When we see people interact with the story it’s one of the things that actually draws them even closer to it, because their voice is the thing that's making the story happen. So you're living and breathing how your voice is going to bring someone else's story to life. And I think it's a very innovative and forward-thinking style of storytelling that I think can be used in other formats as well,” he says.
Benjamin found Chilean animator Tomas Vergara through Saatchi’s Cannes new directors showcase last year — and he essentially created the entire animation for the project, enhancing the interactivity and underscoring the commitment of those involved in telling the VocaliD story. Benjamin sees opportunities for interactive stories to develop for kids with this technology, having them be an integral part of the story, kind of like the old ‘Choose Your Own Adventure’ books, but digitally.
For her part, Patel has been approached about expanding the technology to help more people. Reading apps for people with low vision and customized voices for other interactive stories are possibilities. But for now, VocaliD is focusing on making lives better for the voiceless, and Patel can count numerous times she has received inspirational feedback from those who have used the technology, including adults and children who finally found a personality through their new voices.
“One man who we made a voice for recently had lost his voice to ALS (amyotrophic lateral sclerosis, also known as ‘Lou Gehrig’s disease’), but he didn't have any recordings of himself. It had been eight years since he and his wife had heard a voice that sounded like him. We created three different options for him. The first couple he just politely nodded and was just like, ‘Yeah, that's pretty cool. It's different than what I'm using right now.’ The third one, when he heard it, his entire body went into the shakes for a minute-and-a-half, and his wife's face changed color. Initially, I didn't know what that meant; I didn't know what they were signaling. I've gotten to know them for the last year and a half or so, and I was really nervous about playing the voice sample. What they told me, after the tears and the shaking stopped, was, ‘Oh my gosh, this is remarkably like his voice.’ There's something about voice that re-acquaints us with a person. That's really powerful in terms of finding your voice again in that scenario,” Patel shares.
Saatchi & Saatchi will continue to develop a campaign for VocaliD, including one that recruits more people to contribute their voices. They hope that it helps change people’s perception of those without a voice.
“Eventually I hope that that will trickle down to changing our attitudes about people with disabilities and what we can do about it. I think there is a complacency sometimes that we have about, ‘Well, what can I do? How can I help someone who can't speak? I don't have those skills.’ Well, you do. It's as simple as just talking. Having kids do this activity, adults, people of all ages, we need all those different voices to create the variety of voices we need. It is in so many ways the ultimate education campaign, right? A public education campaign,” Patel concludes.
----
Credits
Creative / Production: 
Chief Creative Officer: Jay Benjamin
Creative Director: James Tucker
Creative Director: Billy Leyhe
ACD: Brad Soulas      
Art Director: Ryan Gifford
Copywriter: Callum Spencer
Copywriter: Devin McGillivary
Business Design Director, New York and Worldwide: Blake Enting
Designer: Christopher Kelly
Head of Film: John Doris
Producer: Tegan Mahford
Animation: Peak Pictures/Tomas Vergara
Interactive Design and Development: Potato London
Head of Production: Lir Cowman
Business Development Director: Oliver Matthews
Lead Developer: Stu Cox
Developer: David Martin
Project Manager: Jemma Kamara
Project Manager: Adam Field

Music: Massive Music
Sound Design: Daniel Ferreira

Is it possible for a machine to read your mind?



Those suffering from a motor neuron disease such as Lou Gehrig's disease struggle to turn thoughts into words. A scientist from the University of California, Berkeley, aims to overcome this through advanced technology.
Professor Robert Knight's idea is to develop a machine that could communicate people's intended thoughts via an electronic speaker or writing device. This would be a direct aid to those across the spectrum of motor neuron conditions. A motor neuron disease is one of five neurological disorders that selectively affect motor neurons (the cells that control the voluntary muscles of the body): amyotrophic lateral sclerosis, primary lateral sclerosis, progressive muscular atrophy, progressive bulbar palsy and pseudobulbar palsy. Lou Gehrig's disease is an alternative name for amyotrophic lateral sclerosis (ALS).
The idea of a machine recording thoughts and playing them back through a speech device or other form of electronic communication has been the stuff of science fiction. However, this concept is no longer far-fetched. Already, neuroprosthetics allow people to control artificial arms with their thoughts.

While a fully working machine remains many years away, some recent successes have been reported. Professor Knight's team has managed to play back a word that someone was thinking by monitoring their brain activity and interpreting the brainwaves.
This impressive feat involved decoding electrical activity in the brain’s temporal lobe — the seat of the auditory system. Speaking with the Daily Mail, Professor Knight outlined the next steps: "Now, the challenge is to reproduce comprehensible speech from direct brain recordings done while a person imagines a word they would like to say."
Achieving this single-word recognition has taken years of research, analyzing brain waves through electrodes and attempting to discern the relationship between brainwaves, words and the interpretation of language. The ultimate aim is to develop a fully working brain implant.
Some of the work to date has been published in the journal PLoS Biology ("Reconstructing Speech from Human Auditory Cortex.")
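The general stimulus-reconstruction idea behind that work can be sketched as a regression from recorded neural features back to an audio spectrogram. The toy example below uses random stand-in data and a plain ridge regression purely to show the shape of the problem; it is not the published method or its parameters.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32

# Stand-in data: neural features (e.g. per-electrode power over time) and the
# spectrogram of the heard audio, linked here by a random linear map plus noise.
neural = rng.normal(size=(n_samples, n_electrodes))
true_map = rng.normal(size=(n_electrodes, n_freq_bins))
spectrogram = neural @ true_map + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Fit on the first part of the recording, then reconstruct the held-out remainder.
model = Ridge(alpha=1.0).fit(neural[:1500], spectrogram[:1500])
reconstruction = model.predict(neural[1500:])
print(reconstruction.shape)  # (500, 32): reconstructed spectrogram frames
```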


Read more: http://www.digitaljournal.com/science/new-device-promises-to-turn-brain-waves-into-speech/article/466626

Wednesday, May 11, 2016

Channing Tatum adds star power to Carly Fleischmann's non-verbal talk show

CTV National News: A non-verbal talk show?
Carly Fleischmann can't speak, but she's opening up a world of possibilities by communicating through typing. Avis Favaro reports. 

    Karolyn Coorsh, CTVNews.ca Staff
    Published Friday, May 6, 2016 10:15PM EDT 
    A smart and sassy 21-year-old who has autism has created a talk show in the hopes of inspiring others to find their voice.  
    Carly Fleischmann, of Ontario, can’t speak, but that hasn’t stopped her from launching what is believed to be the world’s first-ever non-verbal talk show.
    The online show, called “Speechless,” is going viral after she snagged one of Hollywood’s hottest stars for her debut interview – actor Channing Tatum.
    As host of the show, Fleischmann types her questions, which are then voiced by her computer and posed to the interviewee.
    Many people assumed that Fleischmann’s future was limited after she was diagnosed with autism and oral motor apraxia at age two. But after learning to type, Fleischmann revealed her razor-sharp mind. Since then, she has co-published a book, appeared on multiple TV shows, and is now aiming to become the world’s first non-verbal talk show host with autism. Her objective, she says, is to “prove that it doesn’t matter what comes out of your mouth, it’s the voice within that needs to shine.”
    And she doesn’t shy away from asking provocative questions.
    In her interview with Tatum, Fleischmann asks, “Would you date a 21-year-old person with autism?” Tatum quips: “Yes … but I have to get my wife’s permission first.”
    Fleischmann shoots back: “Alright, I’ve got my lawyers working on your divorce papers as we speak.”
    And she didn’t stop there, asking Tatum about his previous career as a male stripper. “How many girls at the end of your night would take you home?” she asks as Tatum laughs.
    Her mother, Tammy Starr, said she was “laughing my head off” when she saw the interview.
    “I couldn’t believe the question,” Starr said of her daughter’s bold style. “These are questions he’s never been asked … before.”
    Fleischmann is hoping a major network picks up her show.
    Laurie Mawlam, executive director of Autism Canada, called Fleischmann an inspiration.
    “Ultimately, we should all follow our dreams and that is what she is doing,” Mawlam told CTV News. “Autism is not an obstacle for her.”
    With a report by CTV News medical specialist Avis Favaro and producer Elizabeth St. Philip