

Day 1 Keynote

Made possible by DornerWorks

“We Need It Before The End Of The Year: What’s Your Estimate?”

by: Tim Lister
No matter what your position on a project, you must be a good estimator of your work, and it is a cardinal sin to let good estimates by smart people be overwhelmed by the strong desires of powerful people. Accurate estimates are the foundation of all critical project decisions regarding staffing, functionality, delivery date, and budget. How do we estimate accurately in a world where tradition declares that the deadline is set before the requirements are even known? Tim will offer practical advice on dealing with this thorny issue. He will present strategies and tactics for project estimating and will describe his favorite estimating metric, the Estimating Quality Factor (EQF). By recognizing that goals are important, and that good estimates are too, you will be on the road to better quality and better projects.

Tim Lister is a principal of the Atlantic Systems Guild, Inc., based in the New York office. He divides his time between consulting, teaching, and writing. Currently he is working on tailoring software development processes using software risk management techniques. He has been an invited speaker at the Agile Development Conference three times. Tim was a guest lecturer on software risk management at the Stanford University School of Business, and gave the Dean’s Lecture at the Rochester Institute of Technology. He was a member of the Airlie Software Council, a group of industry consultants, advising the DoD on best practices for software development and acquisition, and is a member of the Cutter Business Technology Council.

Tim, along with the other five Principals at the Guild, is co-author of Adrenaline Junkies and Template Zombies: Understanding Patterns of Project Behavior (Dorset House, 2008). He is co-author with Tom DeMarco of Waltzing With Bears: Managing Software Project Risk (Dorset House, 2003), which won the Jolt Award for best general computing text in 2003-2004. Tim and Tom are also co-authors of Peopleware: Productive Projects and Teams, 2nd ed. (Dorset House, 1999). Peopleware has been translated into ten languages. Tim Lister and Tom DeMarco are also co-editors of Software State-of-the-Art: Selected Papers, a collection of 31 of the best papers on software published in the 1980s (Dorset House, 1990). The two partners have also produced a video entitled Productive Teams, also available through Dorset House.

Tim Lister has over 35 years of professional software development experience. Before the formation of the Atlantic Systems Guild, he worked at Yourdon Inc. from 1975 to 1983. At Yourdon he was an Executive Vice President and Fellow, in charge of all instructor/consultants, the technical content of all courses, and the quality of all consultations.

Tim Lister lives in Manhattan. He holds an A.B. from Brown University, and is a member of the I.E.E.E. and the A.C.M. He also serves as a panelist for the American Arbitration Association, arbitrating disputes involving software and software services, and has served as an expert witness in litigation proceedings involving software problems.

Day 2 Keynote

“Investment Modeling as an Exemplar of Exploratory Test Automation”

by: Cem Kaner
Most of the activity in modern stock markets is programmed. In algorithmic trading, which accounted for over 60% of equity transactions on American exchanges last year, software decides what to buy, at what price, in what quantity, and when to place the trades. Imagine testing one of these systems. You could focus on VERIFICATION: does the system correctly implement the model (does it make the trades the underlying model would make), and does it execute the trades correctly (placing the right orders, monitoring the results, and recognizing errors)? You could focus on OPTIMIZATION and PERFORMANCE: in a fiercely competitive marketplace, the software must quickly get data from the exchanges, interpret the data, and get orders to the exchanges, and it must do so under competition for resources (e.g., load) at the local system level, in the services the system relies on, and in the paths to the exchanges. You could focus on SECURITY: how vulnerable is the system to espionage or interference? And you could focus on VALIDATION: is this the right model? It doesn't help anyone (except your competitors) if you can reliably and quickly get the wrong trades to the exchange.
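The verification concern, whether the system makes the trades the underlying model would make, lends itself to an oracle-based check: run the reference model and the system under test on the same inputs and diff their decisions. A minimal sketch in Python, where `reference_model`, `system_under_test`, and the moving-average rule are all hypothetical toy stand-ins rather than anything from the talk:

```python
import random

def reference_model(price_history):
    """Toy reference model (a hypothetical stand-in for the real trading
    model): buy when the last price is below the 5-period average, sell
    when it is above, otherwise hold."""
    if len(price_history) < 5:
        return "hold"
    avg = sum(price_history[-5:]) / 5
    last = price_history[-1]
    if last < avg:
        return "buy"
    if last > avg:
        return "sell"
    return "hold"

def system_under_test(price_history):
    """Stand-in for the production trading system; here it simply
    delegates to the model, so this toy verification passes."""
    return reference_model(price_history)

def verify(trials=1000, seed=7):
    """Feed both the model and the system the same random price
    histories and collect any decisions that disagree."""
    rng = random.Random(seed)
    mismatches = []
    for _ in range(trials):
        history = [round(rng.uniform(90.0, 110.0), 2)
                   for _ in range(rng.randint(1, 30))]
        expected = reference_model(history)
        actual = system_under_test(history)
        if expected != actual:
            mismatches.append((history, expected, actual))
    return mismatches
```

In a real harness the system under test would be the production code path, the inputs would be recorded or simulated market data, and a nonempty mismatch list would be logged with enough context to reproduce each disagreement.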

You can probably do the basic verifications as regression tests, maybe even as manual regression tests, but the rest of these concerns require good tools and intense automation, and most of this testing should be exploratory.

Doug Hoffman and I started teaching techniques for automated ET in the late 1990s, calling them “high volume test automation.” These techniques go after bugs that are virtually impossible to expose or isolate in manual testing. One of the challenges in teaching automated ET is the extent to which sophisticated testing relies on deep knowledge of the application under test. As I’ve worked with investment models over the past two years, I’ve realized that this is a type of application that can probably capture the interest of most of the people in our community and thus serve as a good foundation for explaining where I think the next generation of testing should be headed.
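One common shape of high volume test automation is to pit an optimized implementation against a slow but obviously correct oracle over a large number of random inputs, so that rare boundary bugs surface that hand-picked cases would miss. A minimal sketch, assuming a hypothetical sliding-window computation as the subject (the function names and the example are illustrative, not from the talk):

```python
import random

def slow_oracle(xs):
    """Straightforward, obviously correct implementation: the maximum
    sum over any sliding window of width 3."""
    return max(sum(xs[i:i + 3]) for i in range(len(xs) - 2))

def fast_impl(xs):
    """The optimized rolling-sum implementation under test."""
    window = sum(xs[:3])
    best = window
    for i in range(3, len(xs)):
        window += xs[i] - xs[i - 3]
        if window > best:
            best = window
    return best

def high_volume_run(runs=100_000, seed=42):
    """Fire large numbers of random inputs at both implementations and
    diff the results; any mismatching input is kept for diagnosis."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(3, 50))]
        if fast_impl(xs) != slow_oracle(xs):
            failures.append(xs)
    return failures
```

The point is the volume: a human tester might try a dozen inputs, while this loop tries a hundred thousand, and the fixed seed makes any failure reproducible. Exploratory use means varying the generators and oracles as what you learn about the application suggests new risks.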

These are not new ideas. I have no desire to rebrand them and pretend that my clique of academics and consultants invented them. I first saw automated ET techniques in use in 1985; Hoffman was using some over 30 years ago. These ideas have old roots. My contribution is to make them a little more accessible, via better explanation and examples, with a little better cheerleading for ideas whose time is long overdue.

Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (practice focused on the law of software quality), a doctorate in Experimental Psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: Testing Computer Software (with Jack Falk and Hung Quoc Nguyen), Bad Software (with David Pels), and Lessons Learned in Software Testing (with James Bach and Bret Pettichord). At Florida Tech, his research is primarily focused on the question: how can we foster the next generation of leaders in software testing? See his course materials and his Proposal to the National Science Foundation for a summary of the course-related research.

