
New technology will enhance privacy on body scanners, TSA says

Old Sep 3, 2011, 4:37 pm
  #91  
 
Join Date: Mar 2009
Posts: 1,972
Originally Posted by ScatterX
The more truthful answer is the routine screening of every passenger is going to be done by a person looking at the cartoon at the checkpoint. HOWEVER, the system is 100% reliant on accurately processing the original image and presenting the cartoon properly. The TSA will still have a room and a person that makes sure this is done properly.
Are you sure? I thought the whole point of this system is that the image analysis is done completely by computer.
RichardKenner is offline  
Old Sep 3, 2011, 6:33 pm
  #92  
 
Join Date: Jun 2010
Location: LAS
Posts: 1,279
Originally Posted by RichardKenner
Are you sure? I thought the whole point of this system is that the image analysis is done completely by computer.
The image analysis will be done automatically (to generate the cartoon) for normal operations.

I'm talking about what they won't tell us is also happening. Although I'm speculating (based on past system testing), I'm sure they will keep both images for system tuning (sensitivity) purposes, maybe QA and training as well. How else will they be able to tell if the auto analysis is not showing too much or too little detail? This calibration must be done for each machine.

IMO, this is WORSE than the previous NOS. They have now created a need/justification to store the original images. I'm sure they will say this is not possible or only done with test subjects, but the capability is likely going to be there (it is now). It's possible they would calibrate using test subjects, but this still requires that the original images be transmitted and displayed somewhere (unless they hook another monitor up to it and do it out in the open--which I doubt).

My point is to not take what they say at face value. There are always other things like operational issues, maintenance, training, testing, data collection, system optimization, upgrades, etc. that never get mentioned. Since this is a new system, you can bet they are going to optimize, customize, and try to improve on it. The system works by taking nude pictures and processing them. Does anyone want to bet that real nude pictures aren't stored for use in the R&D effort?

Something else to worry about. What happens when the cartoon shows feminine hygiene products or other private/medical devices? How many ladies want everyone to see a little yellow square displayed on their cartoon crotch (followed by a trip to the private lounge)?

The routine operation sounds questionable already. I'm sure it's not the whole story either.
ScatterX is offline  
Old Sep 3, 2011, 7:38 pm
  #93  
 
Join Date: Apr 2011
Location: Northern VA
Posts: 1,007
Originally Posted by swag
Here's an article about the updated scanners getting installed at DFW. It includes a video showing the images.

http://www.wfaa.com/news/national/TS...129113398.html

Now, I've been going out of my way for a year now at DFW, heading to alternate checkpoints and/or alternate terminals to ensure I only encountered WTMD. I think I may now stop doing that, or at least not the alternate terminals (which could mean needing to allow an extra 15 minutes, if departing from scanner-only terminals C or D).

I still have doubts about the scanner effectiveness, but that isn't a concern for me personally when I choose a checkpoint.

I still don't think Backscatter is safe, but DFW is all MMW (or WTMD),

But the images shown on the videos seem inoffensive. I like that the private TSA viewing rooms are eliminated, and I like that I get to see the same image of myself that the TSA does.
It doesn't matter what the images show. The MMWs inherently can't tell a bomb from a boarding pass, whether interpreted by a computer or by a human. The machines are a complete turd and should be replaced by the old WTMDs or a new technology that actually works.
Pesky Monkey is offline  
Old Sep 4, 2011, 1:05 am
  #94  
 
Join Date: Dec 2010
Posts: 2,425
Originally Posted by ScatterX
.... I'm sure they will keep both images for system tuning (sensitivity) purposes, maybe QA and training as well. How else will they be able to tell if the auto analysis is not showing too much or too little detail? This calibration must be done for each machine.
The TSA needs to make a public statement on this point, pledging that they do not save or transmit the raw or rendered data from pax scans from the machines running ATR. It would be easy for them to make such a statement, under oath.

Something else to worry about. What happens when the cartoon shows feminine hygiene products or other private/medical devices? How many ladies want everyone to see a little yellow square displayed on their cartoon crotch (followed by a trip to the private lounge)?
This is the $64K question. With 250 machines using ATR by the end of September, we'll find out pretty soon whether these types of fears materialize.
nachtnebel is offline  
Old Sep 4, 2011, 4:58 am
  #95  
 
Join Date: Feb 2008
Location: Nashville, TN
Programs: WN Nothing and spending the half million points from too many flights, Hilton Diamond
Posts: 8,043
Originally Posted by ScatterX
The image analysis will be done automatically (to generate the cartoon) for normal operations.

I'm talking about what they won't tell us is also happening. Although I'm speculating (based on past system testing), I'm sure they will keep both images for system tuning (sensitivity) purposes, maybe QA and training as well. How else will they be able to tell if the auto analysis is not showing too much or too little detail? This calibration must be done for each machine.

IMO, this is WORSE than the previous NOS. They have now created a need/justification to store the original images. I'm sure they will say this is not possible or only done with test subjects, but the capability is likely going to be there (it is now). It's possible they would calibrate using test subjects, but this still requires that the original images be transmitted and displayed somewhere (unless they hook another monitor up to it and do it out in the open--which I doubt).

My point is to not take what they say at face value. There are always other things like operational issues, maintenance, training, testing, data collection, system optimization, upgrades, etc. that never get mentioned. Since this is a new system, you can bet they are going to optimize, customize, and try to improve on it. The system works by taking nude pictures and processing them. Does anyone want to bet that real nude pictures aren't stored for use in the R&D effort?

Something else to worry about. What happens when the cartoon shows feminine hygiene products or other private/medical devices? How many ladies want everyone to see a little yellow square displayed on their cartoon crotch (followed by a trip to the private lounge)?

The routine operation sounds questionable already. I'm sure it's not the whole story either.
You are exactly right.

With machinery analysis systems, particularly the automated ones, if and when they generate an alarm, first the alarm is resolved, i.e. the machine is inspected to determine why the system alarmed. The machine is then either repaired or adjusted, or the anomaly is recorded as a non-alarm situation.

Whatever the inspection finds, it is absolutely necessary to go back to the original data to see why the alarm was generated and if there are changes to the calibration or alarm algorithms to provide better or more accurate alarms in the system.

If they are not using a findings/results/follow-up analysis to perfect the system, then they are even less competent than I thought. These types of systems can be made to be very, very accurate, whether it is machinery analysis or any other statistically repeatable data collection and analysis system. One cannot perfect the systems without analysis of why the raw data generated the alarm. One cannot analyze the raw data without the raw data.

They are either analyzing the raw data or they are not making an attempt to perfect the systems. Neither is good.
InkUnderNails is offline  
Old Sep 4, 2011, 6:25 am
  #96  
FlyerTalk Evangelist
 
Join Date: May 2001
Location: MSY; 2-time FT Fantasy Football Champ, now in recovery.
Programs: AA lifetime GLD; UA Silver; Marriott LTTE; IHG Plat,
Posts: 14,518
Originally Posted by Pesky Monkey
It doesn't matter what the images show. The MMWs inherently can't tell a bomb from a boarding pass, whether interpreted by a computer or by a human. The machines are a complete turd and should be replaced by the old WTMDs or a new technology that actually works.
Yeah, I wasn't commenting on whether the machines were effective.

What I was pondering is whether, from a personal standpoint, the updated machines are now safe enough (since they are MMW, not backscatter) and inoffensive enough in terms of privacy that it's no longer worth my time to avoid them via an extra 15-minute schlep through an alternate terminal.
swag is offline  
Old Sep 4, 2011, 6:38 am
  #97  
 
Join Date: Mar 2009
Posts: 1,972
Originally Posted by ScatterX
I'm talking about what they won't tell us is also happening. Although I'm speculating (based on past system testing), I'm sure they will keep both images for system tuning (sensitivity) purposes, maybe QA and training as well. How else will they be able to tell if the auto analysis is not showing too much or too little detail? This calibration must be done for each machine.
I disagree with the latter sentence. And why would the images be useful for training? If people aren't going to be interpreting the images anymore, why train them on doing so? Even if the above is happening, I don't consider it worse, at all. Right now, there are thousands of people seeing the images, with some control over whose image they see. Even if images are sent out with the new system, it would be to a relatively small group of people (probably no more than hundreds) who'd be working on the software development (and its QA). They'd be looking at tens or hundreds of thousands of images and have no control over whose image they saw. This is a very different situation.

Something else to worry about. What happens when the cartoon shows feminine hygiene products or other private/medical devices? How many ladies want everyone to see a little yellow square displayed on their cartoon crotch (followed by a trip to the private lounge)?
Indeed this remains the major issue with this technology in my opinion.
RichardKenner is offline  
Old Sep 4, 2011, 6:47 am
  #98  
 
Join Date: Jan 2011
Location: in the sky
Posts: 490
Do you have any anomalies? Are you able to assume the position? YMMV

The technology does nothing to help the person with permanent anomalies, unable to walk through or assume the position, who must now forever be resolved in a most intimate way in order to be cleared into the sterile areas.
loops is offline  
Old Sep 4, 2011, 8:11 am
  #99  
 
Join Date: Jun 2010
Location: LAS
Posts: 1,279
Originally Posted by nachtnebel
The TSA needs to make a public statement on this point, pledging that they do not save or transmit the raw or rendered data from pax scans from the machines running ATR. It would be easy for them to make such a statement, under oath.
^^^ Including independent verification.

Originally Posted by nachtnebel
This is the $64K question. With 250 machines using ATR by the end of September, we'll find out pretty soon whether these type of fears materialize.
Unfortunately, this is typically how most "Hmmm, I should have thought of that" questions, and their truthful answers, are discovered.

Originally Posted by InkUnderNails
They are either analyzing the raw data or they are not making an attempt to perfect the systems. Neither is good.
Yup. One is a major privacy issue and the other is a major incompetence issue. Is there any doubt the TSA will find a way to make it both?

Originally Posted by RichardKenner
I don't consider it worse, at all. Right now, there are thousands of people seeing the images, with some control over whose image they see. Even if images are sent out with the new system, it would be to a relatively small group of people (probably no more than hundreds) who'd be working on the software development (and its QA). They'd be looking at tens or hundreds of thousands of images and have no control over whose image they saw. This is a very different situation.
The storage, transmission, and viewing of nude images (which is ineffective, has no commensurate benefit, and comes at extreme cost [the basic definition of an unreasonable search]) is the problem, IMHO. Big group, small group, etc. doesn't change the primary issue for me. If Brad and Angelina went through the NOS, it wouldn't really matter who was in the loop. What matters (to me at least) is that there is a loop. My point, which was echoed by InkUnderNails, is they now have a reason to store the images or simply settle for being incompetent. Neither is good.

Originally Posted by RichardKenner
I disagree with the latter sentence. And why would the images be useful for training? If people aren't going to be interpreting the images anymore, why train them on doing so?
For the basic operators, you are very likely correct. You may also be correct for the bigger picture too. However, I believe they will have to have system operators and administrators that calibrate the machines (so that the auto-target-recognition is showing the right things, not showing the wrong things, the proper level of detail, etc.). These people will need to be trained. They will also need raw images to do this. The question is where the raw images come from.
ScatterX is offline  
Old Sep 4, 2011, 9:18 am
  #100  
 
Join Date: Jun 2010
Location: LAS
Posts: 1,279
Originally Posted by swag
What I was pondering is whether from a personal standpoint, whether the updated machines are now safe enough (since they are MMW, not backscatter) and are now inoffensive enough in terms of privacy, that it's no longer worth my time to avoid them via an extra 15 minute schlep through an alternate terminal.
In contrast to the backscatter X-ray, the MMW is extremely safe. The sole issue is privacy and the fact that the system is so flawed that many get the pat down anyway. Your issues with privacy will be based on your feelings on the subject. As they say, YMMV.

In my case, I really don't care if people see me naked and I have back issues that make standing in one place with my arms up (for the pat down) extremely painful. You would think I'd go through the NOS and take the easy way out. I don't for a very good reason. I am worried about our collective privacy, what the government is doing now, and what's next along the slippery slope of "anything" for security. The NOS is ineffective, expensive, and invades our privacy without commensurate benefit. The x-ray system is also unsafe. The current WBI approach is an unreasonable search IMO, IANAL, YMMV, LMNOP. It's normally unreasonable (think waste of money) and Constitutionally unreasonable (for the reasons above). I'm literally standing (in pain) for what I believe is the right thing.

(To be fair, I have no issue with walking through portal systems, as long as they are safe, effective at detecting WIE, and worth the cost. The current machines are NONE of these and have no hope of ever being so.)

Read the posts in this thread, get informed in other ways, and decide the privacy issue and your actions for yourself. My advice is not take the TSA line at face value. That fishy smell should be telling you something.
ScatterX is offline  
Old Sep 4, 2011, 12:11 pm
  #101  
 
Join Date: Mar 2009
Posts: 1,972
Originally Posted by ScatterX
For the basic operators, you are very likely correct. You may also be correct for the bigger picture too. However, I believe they will have to have system operators and administrators that calibrate the machines (so that the auto-target-recognition is showing the right things, not showing the wrong things, the proper level of detail, etc.). These people will need to be trained. They will also need raw images to do this. The question is where the raw images come from?
I don't see that. When you calibrate modern equipment, you don't do it manually; you use various test patterns and have the software interpret the results and make any needed adjustments. Putting a human in the calibration loop doesn't make any sense since it would be less accurate and take more time. And even if you did, that person would be seeing images of the test objects (of known densities), not passengers: you don't calibrate equipment on unknown objects!
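To illustrate the kind of closed-loop calibration I mean, here's a minimal Python sketch. Everything in it (the target values, `measure_response()`, the simple gain model) is invented for illustration, not anything from a real scanner; the point is just that the software compares readings against known test targets and adjusts itself, with no human viewing anything.

```python
# Hypothetical sketch: self-calibration against known test targets.
# measure_response() stands in for a real sensor read.

KNOWN_TARGETS = {          # test object -> expected reading (made-up units)
    "metal_plate": 100.0,
    "plastic_block": 40.0,
}

def measure_response(target, gain):
    """Stand-in for a hardware read: true value scaled by the current gain."""
    true_values = {"metal_plate": 100.0, "plastic_block": 40.0}
    return true_values[target] * gain

def auto_calibrate(gain=1.3, tolerance=0.01, max_rounds=50):
    """Adjust gain until readings match the known targets -- no human in the loop."""
    for _ in range(max_rounds):
        ratios = [measure_response(t, gain) / expected
                  for t, expected in KNOWN_TARGETS.items()]
        avg_ratio = sum(ratios) / len(ratios)
        if abs(avg_ratio - 1.0) < tolerance:
            break                  # readings match the known samples
        gain /= avg_ratio          # correct toward unity response
    return gain

print(auto_calibrate())  # converges to 1.0 from a miscalibrated 1.3
```

A drifted gain of 1.3 gets pulled back to 1.0 in one pass here; only test objects of known response are ever in the loop.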
RichardKenner is offline  
Old Sep 4, 2011, 12:51 pm
  #102  
 
Join Date: Feb 2008
Location: Nashville, TN
Programs: WN Nothing and spending the half million points from too many flights, Hilton Diamond
Posts: 8,043
Originally Posted by RichardKenner
I don't see that. When you calibrate modern equipment, you don't do it manually, but with various test patterns and have the software interpret the results and do any needed adjustments. Putting a human in the calibration loop doesn't make any sense since it would be less accurate and take more time. And even if you did, that person would be seeing images of the test objects (of known densities), not passengers: you don't calibrate equipment on unknown objects!
Richard:

You already have two humans in the "calibration" loop, the person seeking the anomaly and the person having the anomaly. And, we are not talking about calibration. Calibration of inspection systems compares an indicated result to a known sample. When the result can be made to match the sample, it is calibrated.

The discussion around expert systems is the learning ability of the software, not its calibration. As I only know about machinery inspection systems, I will discuss those.

The expert system will detect an anomaly and will seek a match from its database of known anomalies. If it finds one, it will report the problem and its likelihood. The inspector will then examine the machine (search the passenger) to verify that the anomaly is as detected (gum wrapper in passenger pocket). The system will then be told that the pattern of the anomaly detected matched the result, and the database is strengthened. This is all good.

However, if the machine detects a gum wrapper and it turns out the pocket is empty, someone will have to go to the raw data, highlight it for the software, and tell the machine this was really nothing. It may turn out it was a knife and not a gum wrapper. It is the same thing in that we have to "teach" the machine and build the database.

Repeat as needed and the machine will eventually build a database that is large enough to detect anomalies accurately and successfully. This includes ignoring what needs to be ignored. This is not calibration. This is the nature of automated inspection systems.

It is very difficult technology to build and perfect. The reasoning is simple. The inspection of a machine with no anomalies presents few difficulties. It is when an anomaly is detected that the system is tested. How severe is the problem? What is the exact nature of the problem? Will it shut the machine down? Will it cause poor quality production? The detection of false positives is also to be avoided, as we do not want to shut down production to search for and repair a nothing. This often requires years of trial and error and cross-referenced databases for simple machinery analysis systems. Most times, someone must look at the raw data and make these production-based decisions, overriding the system indication.

Furthermore, we are checking the same machine over and over, day in, day out. Anomalies are easy to see and define.

With the passenger scanning system, we have short/tall, trim and not so trim, men/women, old/young and on and on. We are trying to build an acceptability database with a highly variable inspected item. This is very, very difficult to perfect. It will require constant feedback of success and error based on the raw data.

And someone, somewhere, will need to look at the data to determine why it alarmed to do this correctly. It is unavoidable.
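The teach-the-machine loop I'm describing can be sketched in a few lines of Python. All the names and the pattern/label scheme here are made up purely to illustrate the shape of it; the thing to notice is that the feedback step, where an inspector's finding confirms or corrects the database, is exactly the step that requires going back to the raw detection data.

```python
# Toy sketch of an expert-system feedback loop (all names hypothetical).

class AnomalyClassifier:
    def __init__(self):
        # pattern -> (label, confirmation count); starts out nearly empty
        self.database = {}

    def classify(self, pattern):
        """Return (label, confidence) for a detected anomaly pattern."""
        if pattern in self.database:
            label, confirms = self.database[pattern]
            return label, min(1.0, confirms / 10.0)
        return "unknown", 0.0

    def feedback(self, pattern, ground_truth):
        """Inspector reports what the anomaly really was: strengthen a correct
        match, or relabel on a miss. This is the step that needs raw data."""
        label, confirms = self.database.get(pattern, (ground_truth, 0))
        if label == ground_truth:
            self.database[pattern] = (label, confirms + 1)
        else:
            self.database[pattern] = (ground_truth, 1)  # correct the database

clf = AnomalyClassifier()
clf.feedback("small_flat_object", "gum_wrapper")  # inspector resolves alarm
clf.feedback("small_flat_object", "gum_wrapper")  # pattern seen again
print(clf.classify("small_flat_object"))          # ('gum_wrapper', 0.2)
```

Each resolved alarm nudges the database; without the raw pattern that caused the alarm, there is nothing to feed back.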
InkUnderNails is offline  
Old Sep 4, 2011, 2:14 pm
  #103  
 
Join Date: Mar 2009
Posts: 1,972
Originally Posted by InkUnderNails
However, if the machine detects a gum wrapper and it turns out the pocket is empty, someone will have go to the raw data, highlight it for the software and tell the machine this was really nothing. It may turn out it was a knife and not a gum wrapper. It is the same thing in that we have to "teach" the machine and build the database.
I disagree. The machine is supposed to report anomalies. It isn't supposed to (and I don't think can) distinguish between a gum wrapper and a knife: both are supposed to be reported as an anomaly. And I also completely disagree with your contention that the production machines will have any role in the machine learning system: I don't think that's practical for quite a number of reasons.
RichardKenner is offline  
Old Sep 4, 2011, 3:02 pm
  #104  
 
Join Date: Sep 2009
Posts: 3,702
Originally Posted by ScatterX
This quote is BS...
However, the agency believes it will save money by eliminating the need for private screening rooms and extra personnel, who had to watch the monitors before the software upgrade.


So... the TSA will still have a nude image, still have a person looking at it...

I fail to see what has changed other than the smoke screen and cute diversion presented to appease the public.
I can 100% confirm you are incorrect. No one will be in the room, despite what you choose to believe.

And you fail to see what changed because you wish it had not changed, so you can make the arguments you make. Simple as that.
SATTSO is offline  
Old Sep 4, 2011, 3:05 pm
  #105  
 
Join Date: Sep 2009
Posts: 3,702
Originally Posted by RichardKenner
I disagree. The machine is supposed to report anomalies. It isn't supposed to (and I don't think can) distinguish between a gum wrapper and a knife: both are supposed to be reported as an anomaly. And I also completely disagree with your contention that the production machines will have any role in the machine learning system: I don't think that's practical for quite a number of reasons.
You are correct. However, the technology IS being developed to allow the AIT to tell the difference between a gum wrapper and a knife, or a gun, etc. BTW, such software is also in the works for x-rays. Now, when these technologies are ready for use is another question. Personally, I believe we will see this used for x-rays before AIT. Just my opinion.
SATTSO is offline  

