
After CAST Sessions

Live! AST Instructors’ Orientation Course Jumpstart Tutorial

by: Rebecca Fiedler and Cem Kaner
Thru May 15:
Non-Members & Associate Members: $100
Full Members & Student Members: $50
You’ve read about AST’s free software testing courses. Now find out how you can get involved in teaching them for AST, for your company, or independently. This workshop will use presentations, lectures, and hands-on exercises to address the challenges of teaching online. (Bring your laptop and wireless card if you can.) The presenters will merge instructional theory and assessment theory to show you how they developed the AST-BBST online instructional model. Over lunch, chat with AST members who are working on AST Instructor Certification. This workshop satisfies the Instructors’ Orientation Course requirement for prospective AST-certified instructors.

PREREQUISITE: Successful completion of BBST Foundations.

Your registration includes lunch.

Learn more about becoming an AST BBST instructor at http://www.satisfice.com/kaner/?p=41

This workshop is partially based on research that was supported by NSF Grants EIA-0113539 ITR/SY+PE: “Improving the Education of Software Testers” and CCLI-0717613 “Adaptation & Implementation of an Activity-Based Online or Hybrid Course in Software Testing.” Any opinions, findings, and conclusions or recommendations expressed in this workshop are those of the presenter(s) and do not necessarily reflect the views of the National Science Foundation.

For the past 25 years, Rebecca Fiedler has been teaching students of all ages, from kindergarten to university. She is interested in how people learn and how technology can make educational efforts more effective and more accessible to more people. Lately, she’s been trying to figure out how to make online teaching more effective. In the testing community, Dr. Fiedler works with Cem Kaner on the Black Box Software Testing (BBST) courses and AST’s Education SIG. She is also a regular attendee at the Workshop on Teaching Software Testing.
Cem Kaner has pursued a multidisciplinary career centered on the theme of the satisfaction and safety of software customers and software-related workers. With a law degree (his practice focused on the law of software quality), a doctorate in experimental psychology, and 17 years in the Silicon Valley software industry, Dr. Kaner joined Florida Institute of Technology as Professor of Software Engineering in 2000. Dr. Kaner is senior author of three books: Testing Computer Software (with Jack Falk and Hung Quoc Nguyen), Bad Software (with David Pels), and Lessons Learned in Software Testing (with James Bach and Bret Pettichord). At Florida Tech, his research is primarily focused on the question: how can we foster the next generation of leaders in software testing? See TestingEducation.org for some course materials and this Proposal to the National Science Foundation for a summary of the course-related research.

Context-Driven Performance Testing

by: Eric Proegler and Paul Holland

Non-Members & Associate Members: $635
Full Members: $525
Student Members: $425
Performance testing measures whether software meets non-functional requirements such as response time, concurrency, scalability, reliability, resilience, and other characteristics of a software system that are essential for user experience. This style of testing is usually accomplished with expensive automated tools that use scripted activities to simulate many people using a software system.

Like other kinds of testing, it can seem that “everyone knows” the “right way” to conduct performance testing. There is a widely duplicated model that concentrates performance testing at the end of the development cycle, shortly before deployment, attempting to replicate the expected production workload with an expensive, proprietary load tool. This model proposes that simulating the production workload reduces risk: the activity of many simulated users consumes computing resources at levels similar to production, so problems should surface before real users encounter them. A prescriptive, cookbook-style approach like this can be somewhat successful, but testing well and providing the most complete information requires thoughtful planning matched to the context in which the tests are conducted.

While simulation against deployment-ready code can reduce risk significantly by itself, there are additional approaches that extend the same expensive load tools, test artifacts, and testers to provide feedback and value throughout the development lifecycle. Depending on the people, tools, time, development model, and other variables, these other approaches can be even more valuable than the simulation approach.

In this tutorial, we will share some ideas about how performance testing can evolve past the “dress rehearsal” model and discuss performance testing beyond capture-replay and requirements definition. (A minimal sketch of the scripted-user simulation idea appears after the topic list below.) Additional topics include:

  • Other times, places, and types of performance tests to perform: test early and often to find issues early enough to fix them
  • Why the choice of tool is not really important
  • Scripting mechanics and what performance testing can learn from automated testing
  • Heuristics you may be able to use to plan, execute, and interpret performance tests rapidly
  • Common characteristics of effective performance testers
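To make the “scripted activities” idea above concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of the simulation model the tutorial critiques: a handful of concurrent virtual users repeatedly hitting a single endpoint while response times are recorded. The URL, user count, and request count are invented for illustration; this is not the presenters’ tooling, and real load tools are far more elaborate, but the underlying pattern is similar.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# All values here are invented for illustration.
URL = "http://localhost:8080/health"   # hypothetical system under test
VIRTUAL_USERS = 25                     # concurrent simulated users
REQUESTS_PER_USER = 40                 # scripted activity per user

def one_user(_user_id):
    """One scripted 'virtual user': issue requests, record each latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as response:
            response.read()
        latencies.append(time.perf_counter() - start)
    return latencies

def main():
    # Run the virtual users concurrently, as a load tool would.
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = list(pool.map(one_user, range(VIRTUAL_USERS)))
    samples = sorted(t for user in per_user for t in user)
    print(f"requests:  {len(samples)}")
    print(f"median:    {statistics.median(samples) * 1000:.1f} ms")
    print(f"95th pctl: {samples[int(len(samples) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    main()
```

Even a toy like this suggests why tool choice matters less than test design: the consequential decisions are what activity to script, how much concurrency to generate, which measurements to collect, and when in the lifecycle to run the test.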
Eric Proegler has been testing Windows applications for 15 years and performance testing them for 9. He is responsible for performance testing and analysis of new releases of WPF and browser-based IIS/.NET clients for a high-concurrency, high-transaction database application. Eric also writes, teaches, and consults in the areas of performance testing, SQL Server, hardware selection and sizing, WAN deployment, storage, and solution design for customers and partners of Hyland Software, the developers of the OnBase ECM Solution.
Paul Holland has been performance testing telecommunications equipment since 1995. He is currently the manager of a verification group at Alcatel-Lucent in Ottawa, Canada. He is responsible for testing new releases of DSL switches used by telephone companies to sell triple play solutions (Video, Voice, and Data). Over the past 15 years Paul has supervised the creation and world-wide deployment of an automation environment; he also created and supervised automation for Asynchronous Transfer Mode (ATM) data switches. Prior to 1995 Paul flew Sea King helicopters for the Canadian Military, and managed to performance test the helicopters in innovative ways.