Accessibility concerns regarding apps created in Corona

I was recently putting together a basic app for an organization and was surprised to notice that my app did not work at all with VoiceOver on iOS. VoiceOver is the screen reader built into iOS that allows blind and visually impaired users like myself to use our devices independently. Native apps can be used with VoiceOver. Is Corona doing something behind the scenes, such as not using native controls and instead drawing approximations of controls on screen?
This has me very concerned as both a developer and a consumer. If more and more apps start using Corona as a development platform, does this mean that we will be left out in the cold? Does anyone with a more technical background have any thoughts on this? I'm deeply concerned and wanted to get this out there so that hopefully someone would see it and could respond.


To start, if I misunderstood your questions and concerns, I apologize.

If you are looking for "text to speech" functionality, there is a plugin that accomplishes this:

https://forums.coronalabs.com/topic/59180-text-to-speech/

Now, this is just me talking of course, but if I were to create an app without specifically targeting the blind/visually impaired community, and I wanted to make sure that the app could be used by that community, I would most likely have to use Corona Enterprise (the paid tier, which allows for hybrid Corona/native development), as I would need to incorporate the Google Speech API (for Android) and/or SpeechKit (for iOS) to achieve speech-to-text functionality.

If I didn’t want to use Corona Enterprise for whatever reason, I would definitely re-model the app’s architecture/UI to make it easier for users that are blind/visually impaired to use it, rather than relying on native libraries to accomplish the accessibility portion.

I’m sure a Corona engineer will weigh in as well, but to be clear, Corona SDK is definitely not using native controls in most cases. As far as “drawing approximations”, I’m not sure what you mean there, but there is little incorporation of native APIs when developing with Corona SDK. 

What I'm referring to is this: if you create something such as a tab bar or a text box, it is not a native control, correct? For this reason VoiceOver isn't able to know there is a tab bar or a text box present. Text to speech doesn't factor in at all, though TTS is certainly useful for some games. I'm focusing on apps such as those created for businesses or organizations.

I’ve seen text to speech be pretty useful for those apps in my experience, but if you don’t find it useful, the point is moot.

Again, I'm sure a Corona engineer will have a more nuanced answer, but the root answer is: no, the tab bar isn't a native object. Text boxes are native objects, however, but the ability to utilize the accessibility APIs of iOS would be minimal, given the way that Corona SDK operates (OpenGL layers, active threads, and other technical terms that someone besides me is better at using).

I'd love to talk about a way to solve your issue. It sounds like you're looking for a way to convey the information that is inside of your app aurally, rather than visually. Is that accurate? I saw from another, much older post that you were looking for a way to reconcile touch/event listeners with things that aren't graphically represented on the device's screen. Is that still the case? I think the Corona community would be happy to brainstorm possible solutions for you, so if you could give a bit more detail on what your app's use case is, that would be great!

My question is more or less answered. I've found Corona has been very useful for the games I'm making; Audio Archery, for example, is a small audio game using positional audio. If developers create apps using Corona, such as, say, an app displaying sports scores or Twitter posts, it just means a segment of the iOS user base will not be able to take advantage of them. I understand why Corona doesn't use native APIs, but it seems like it will only hurt people with disabilities if app developers start using it to cut both costs and time.

An example use case for an app I wanted to create was one that would use the barcode library to scan products and then return information on said product. The information would be spoken and displayed on screen. The problem, of course, is that since Corona draws things on screen, it just wouldn't be possible to make something using native controls and have it meet accessibility guidelines.

We've had similar issues with apps that use Qt as a cross-platform GUI framework. It's been able to run on Windows, Mac, and Linux, but screen readers are unable to hook into the GUI to read back usable information.

I'm sorry, I must be misunderstanding something fundamental about your app example. You say (emphasis mine):

"the information would be spoken and displayed on screen"

then you say:

"since Corona draws things on screen"

which seem like the same behavior. You can use the QR Scanner plugin, capture the output, and then use the above-referenced text-to-speech plugin to "speak" the output so that the user can hear it.
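For what it's worth, a rough sketch of that flow in Corona's Lua might look like the following. The plugin module names and event fields here are assumptions based on the plugins mentioned in this thread, not verified signatures, so check each plugin's documentation for the real API.

```lua
-- Hypothetical sketch: scan a barcode, then display AND speak the result.
-- NOTE: the "plugin.qrscanner" / "plugin.texttospeech" module names and
-- the event fields below are assumptions, not verified plugin APIs.
local tts     = require( "plugin.texttospeech" )
local scanner = require( "plugin.qrscanner" )

local resultText = display.newText( "Scan a product",
    display.contentCenterX, display.contentCenterY, native.systemFont, 18 )

local function onScan( event )
    if event.data then
        resultText.text = event.data   -- show the product info visually...
        tts.speak( event.data )        -- ...and speak it aloud, so the user
                                       -- doesn't have to rely on VoiceOver
    end
end

scanner.show( onScan )
```

The idea is that the app itself does the speaking, sidestepping VoiceOver entirely for its own content.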

If you could let me know what about my above solution doesn’t meet your needs, it would help tailor any suggestions that the community might have.

Speaking to your other concern that "it will only hurt people with disabilities if app developers start using it to cut both costs and time", I don't really know how to address that. Speaking from my own personal experience, I don't think the apps I create are unusable by people with disabilities. To be fair, I guess this would depend on the disability in question: hearing-impaired individuals would have no problem playing Segreta, for example, but a visually impaired person might have a problem seeing the smaller sprites. Then again, having the app "speak" an item's info might not be all that useful. Maybe if the character was within a specific proximity to the item? Like I said above, I'd have to do some serious going back to the drawing board to incorporate those kinds of features (in this very hypothetical example, of course).

I think I’m completely missing the point here, which would not be the first time. I apologize if this is the case. Are you looking for “speech to text” or voice recognition within a Corona SDK-developed app? 

I'm really struggling with how to make my point, but let me see if I can.

First of all, I'm not referring to games using sprites and other visual aspects; it's fairly clear those can't be adapted, which is fine. What I'm referring to are apps that may be used for informational or productivity purposes, e.g. an app for a church or a bake sale. If an app is developed natively, controls like buttons and text boxes are rendered in such a way that VoiceOver reads them. This, I'm assuming, is due to them being actual controls on a view.

Corona has what it calls widgets, correct? Things like tab bars, text boxes, and buttons. What I'm trying to figure out is: are these objects being converted to native objects, or are they rendered in OpenGL and therefore actually on-screen graphics that VoiceOver can't get hold of?

About the only way to see this in action is to create a simple app, add some widgets like text boxes and buttons, and then test to see what, if anything, VoiceOver can actually read.
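A sketch of what that test app might contain, using what I believe are real Corona SDK calls (worth double-checking against the docs): `widget.newButton` is drawn by Corona itself in OpenGL, while `native.newTextField` wraps an actual platform text field.

```lua
-- Minimal test app: one Corona-drawn widget vs. one native object.
-- Run it on a device and probe both elements with VoiceOver.
local widget = require( "widget" )

-- Drawn by Corona in OpenGL: VoiceOver should see only pixels here,
-- not a "button" element it can announce.
local button = widget.newButton{
    label = "Donate",
    onRelease = function() print( "button tapped" ) end,
}
button.x, button.y = display.contentCenterX, 120

-- Wraps a real UITextField on iOS, so it is a native object under the
-- hood; this is the one VoiceOver has a chance of recognizing.
local field = native.newTextField( display.contentCenterX, 220, 280, 40 )
```

If VoiceOver announces the text field but swipes straight past the button, that would confirm the OpenGL-rendering theory discussed in this thread.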

The point I'm trying to make is that if these widgets are being drawn on screen as images, and are not being converted into native objects, this puts blind users at a massive disadvantage. Not where games are concerned, but where productivity apps are concerned. The ease of use of something like Corona makes it attractive to developers, especially if they want to quickly create an app and make it cross-platform.

I'm unsure how else I can put it into words, but possibly others may be able to expound on what I'm trying to say.

Nope, that makes perfect sense. I apologize for making you re-invent the wheel to get it through my thick head :wink:

You are correct: Corona SDK renders display objects as images only, not native objects. Therefore, iOS APIs can't grab them and apply VoiceOver to them. You are 100% correct there.

The only way around this, for Corona-developed apps at least, is to have this functionality in mind ahead of time; developers need to incorporate the voice output themselves during production.

Regarding your other point, I see what you're saying, but it's a weird "problem". I guess Corona can be considered to be doing a disservice to the disabled community by being too easy to use and allowing people to create apps that certain disabled communities won't be able to use, but I don't know how to fix that "problem".

The only way to solve that problem is to render objects as actual native objects. That will probably not be happening, but I wanted to check and be sure I was correct in my assumptions.

Here’s some backup information as to how Corona uses OpenGL:

https://forums.coronalabs.com/topic/58256-will-it-be-possible-to-create-games-for-the-playstation-4-and-xbox-one-with-corona-in-the-near-future/

https://forums.coronalabs.com/topic/59278-i-want-video-fill-to-background-rectangle/

Again, I don’t know if I can call this a problem, in that there are many other options for software development. Now, if one were to say, “but Corona is so easy to use! If they could stop using OpenGL then everyone would benefit.” the answer would most likely be, “if Corona stopped using OpenGL, then it would be exponentially more difficult to use, thereby eliminating the benefit.” 

disclosure: I’m not employed by Corona, nor do I derive financial or practical benefit from the company aside from using the SDK they produce for app development. My opinions and comments above are my own and do not reflect the feelings, views or opinions of anyone from Corona. My comments are not intended to offend, hurt, demean or insult any person, group or affiliation. If my comments have done so, I apologize.

I guess the difference is that while native iOS controls expose accessibility properties to VoiceOver (the screen reader built into iOS, which is not speech-to-text but rather OS-wide text-to-speech), it appears these Corona-specific controls do not. I have no idea if Apple allows for a whole lot of options in this matter; you would either have to compile down to native (or at least more native) code to get those accessibility properties back, or Corona would have to implement its own accessibility API, which I'm not sure is possible either. We'd need an engineer knowledgeable about iOS internals and the Corona toolchain to comment on this…
