
Companies care deeply about the quality of their products. This explains why software testers are in demand worldwide: it is these people who safeguard that quality.

Good testers never forget about negative testing, although not all programmers are happy with it. Such checks are needed to close vulnerabilities and protect against hackers, bots, and DoS/DDoS attacks.

What is the vocation of testing professionals? To find problems that are not visible to others. Do not put off negative testing, or you will endanger the system.

Positive and negative testing

Let's start from the very beginning. When testing with test cases, there are two kinds of checks: positive and negative. The latter usually outnumber the former.

Positive testing is the process of verifying correct behavior against the technical requirements and documentation. It is done to ensure that the system does exactly what is expected of it.

Negative testing is the process of checking behavior on incorrect input. It shows whether the system copes with unforeseen situations.

Positive-negative testing

To perform software testing, you need to have intuition or a hunting instinct. A tester is a versatile person who can perform both business analysis and testing.

Testers check whether the process is running correctly: whether there is compliance with technical requirements and test scenarios. Performing positive and negative testing separately will take longer than doing both at the same time. This is because there are two test iterations.

After all, the closer the deadline, the faster time flies and the sooner you need to complete tasks, fix defects, apply business requirements (which can change), and get plenty more done. Deadline season is the hottest time!

Separating negative and positive testing simply goes against the nature of a tester! A tester's task is to check the system against all possible actions of the end user.

People are mostly illogical and can cause problems in software. Negative testing can help avoid problems.

We (it's no great secret) care deeply about the quality of our products and watch with trepidation when a system collapses. This justifies the existence of testers in the world. It makes us feel like heroes: the great Tester came and saved the users from terrible critical bugs!

And our testers never forget about negative testing, although not all programmers are happy about it. But such checks are not a whim of "evil testers": they are driven by the need to close vulnerabilities and protect the system from penetration by hackers and bots, and from DoS/DDoS attacks.

Of course, what is the vocation of test specialists, after all? We have to find problems: problems that nobody has time to think about, that nobody wants to see or deal with. And when not only the correct operation of the system is checked but also its abnormal behavior, tension in the team grows.

You see, programmers write software aiming at the result, at the planned release, flying on the wings of inspiration! Then comes the stage of checks, with numerous corrections and edits to the "ideal" code. That's it, run for cover: the system is under test.

To avoid annoying anyone, some specialists may postpone negative testing until later or even skip it (the horror!) in order to save time and budget. Well, why check edge cases if the program doesn't even do what it should yet, right? Nope.

Positive and negative testing

But first things first. When testing software with test cases, there are two sets of checks: positive and negative. Moreover, there are usually more of the second than of the first.

Positive testing is checking that the system operates according to its normal (standard, expected) behavior, as defined by the terms of reference and documentation. That is, we look at whether the software does what is expected of it, whether the implementation meets the stated requirements, whether user-interface guidelines are followed, and so on.

Negative testing is testing the system's behavior in abnormal situations. We look at whether the software is resistant to incorrect input, how exceptions are handled, what information error messages show, whether it is possible to disrupt the product's operation and/or degrade the solution's performance, and so on.
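To make the distinction concrete, here is a minimal sketch in Python. Everything in it (the safe_sqrt function and its error messages) is hypothetical and only illustrates the idea: a positive test feeds valid input and checks the expected result, while negative tests feed invalid input and check that the failure is controlled.

```python
import math

def safe_sqrt(value):
    """Return the square root of value, rejecting bad input explicitly."""
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError("value must be a number")
    if value < 0:
        raise ValueError("cannot take the root of a negative number")
    return math.sqrt(value)

# Positive test: expected behavior on valid input.
assert safe_sqrt(4) == 2.0

# Negative tests: the system must fail in a controlled way, not crash.
try:
    safe_sqrt(-1)
except ValueError as err:
    assert "negative" in str(err)

try:
    safe_sqrt("four")
except TypeError as err:
    assert "number" in str(err)
```

If safe_sqrt silently returned garbage or crashed the process on bad input, the positive tests alone would never reveal it.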

We have already said that some specialists leave negative testing for later or even forget about it, which is almost the same thing. You know yourself that what is postponed until later almost always remains undone.

Therefore, in our opinion,

Negative and positive testing need not be separated and spread out over time.

After all, can we say the system works as it should if we test its response only to correct input?

Positive-negative testing

When testing, intuition, a sixth sense, a hunting instinct (call it what you like) are oh so important. So here sits our engineer, checking, say, a registration form.

He checks everything against the technical assignment and the test scenarios, looks at how the data the user is supposed to enter into the fields is processed (no guarantee the user actually will enter it, by the way), and then it comes: an insight! It seems to him that if you enter something like "%adynadyn />" into the login field instead of ordinary text, something will definitely happen. Something dark and gloomy.

So what? Is he supposed to tell himself: "No. Right now I must do positive testing and nothing else. Negative testing is scheduled for next week; then the time will come for %adynadyn />. Probably"?

We find this approach to negative testing ineffective, and here's why:

  1. If positive and negative testing are done separately, it will take longer, if only because there will be two iterations of testing.
  2. Testers and coders live by deadlines. If time is strictly limited, postponing negative testing until later increases the risk that it will be forgotten altogether. After all, the closer the deadline, the faster time flies and the sooner you need to finish tasks, fix defects, apply the final business requirements (which may change), and complete a lot more besides. Deadlines are hot times!
  3. Separating negative and positive testing is, in our opinion, simply contrary to the nature of a tester! After all, a tester's main task is to check the system against all possible actions of the end user. And people, for the most part, are illogical and can do all sorts of unexpected things to software ;)

We, as testers, worry a great deal when the system contains errors in the negative-category checks, especially when the consequences of such errors are critical for the entire system. But we are not afraid to report them. Especially with such a trump card up our sleeve: we have women testers on our team. Who can keep stubbornly defending the "ideality" of the code when they, in gentle voices, smash the project's performance to smithereens? Exactly.

So what conclusions can we draw?

Don't forget about negative testing, combine it with positive testing, gather experienced specialists on the team, and try shifting the reporting onto their shoulders! We recommend everything 100% except that last point; your project manager can deal with it.

And, of course, be sure to check your product. Don't assume programmers will write clean, beautiful code right away: you won't get by without bugs! Not to mention the numerous vulnerabilities, as confirmed by the personal and confidential data that regularly leaks onto the network.

In my training courses for novice testers, I invite them to write positive and negative tests for:

  1. The function of calculating the root in the calculator.
  2. Working with the cart (add/delete/edit) in an online store.

And here's what I noticed: positive testing goes fine, and people come up with various types of tests (the task is to name a few, not to list absolutely everything, so even working in a team you need not repeat one another). But many have trouble with negative testing and ask for clarification, because "nothing comes to mind except typing letters into the item-quantity field in the cart and taking the root of a negative number."

So I decided to write an explanatory article.

Positive testing

A tester is the person who provides the team with information about the product. So, say we decided to build that very online store: we thought out the concept and wrote the code, and now testing's task is to tell us whether everything works as we need.

And of course, positive tests matter enormously. If a user visits our website and cannot put a product in the cart, they won't care in the least that we show beautiful error messages for special characters or SQL injections.

Therefore, when we are given something to test, we may be itching to rush in and break things in new ways, but we need to check the correct scenarios first. First we satisfy the loyal, well-behaved users; then we get to the rest.

Thus, positive testing aims to make sure the core functionality works: all the scenarios for using our system are feasible and lead to the expected result rather than to errors.

Let's see an example:

The main test case is to check that the root of a valid number is actually calculated.

It can be broken down into the following equivalence classes:

  • After calculating the root, an integer remains (√4 = 2)
  • After calculating the root, a fractional number remains (√3)

Hmm, what if the number is fractional not only after calculating the root but also before? Can we take the root of 2.2? Is that a positive test? Positive!

You can also split the numbers by size: small ones, say up to 100; then an interval from 100 up to the largest int; and a third class even bigger, as large as our calculator can hold. Three equivalence classes, and we check one value from each interval.

Let's not forget the boundary values; let's check 0. Is that a positive test? Of course! The root of 0 is 0, not an error!
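The equivalence classes and the boundary value above can be written down as one small table of positive checks; a sketch using Python's math.sqrt (the class labels are my own):

```python
import math

# One representative value per equivalence class, plus the boundary value 0.
positive_cases = [
    ("integer result", 4, 2.0),
    ("fractional result", 3, math.sqrt(3)),
    ("fractional input", 2.2, math.sqrt(2.2)),
    ("boundary value zero", 0, 0.0),
]

for label, value, expected in positive_cases:
    assert math.sqrt(value) == expected, label
```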

That covers the main cases, I think.

Oh, here is where imagination comes in!

The user can execute so many different scenarios! But first let's take the main, shortest ones, because if those don't work, long chains (added, edited, deleted, added again, and so on) are definitely not worth checking yet. So:

Think it will work if each step works separately? No, no, folks, you are testers! Never take a program at its word! Came up with a scenario? Check it!

Moreover, such a scenario may well fail: we already removed this product from the cart, right? So the system may well refuse to let us add it again, as if to say, "You already declined it, hey, I remember everything!" Is that behavior correct? No!

Is the scenario itself positive? Yes! Though with a twisted streak already, I must admit.

Negative testing


Remember that negative testing is just as important as positive testing, because our users are humans, not robots, and humans tend to make mistakes. You should always keep this human factor in mind.

If I visit a site, place an order, and everything goes fine, I will come back. But if I come, place an order, and accidentally slip up somewhere, for example, paste a message copied from ICQ instead of a number, then I want to see a tactful notice, not a crash of the entire system.

These days there is usually a wide choice of sites for solving a user's problem (for example, buying something). Having looked at a few and realized the needed functionality is available everywhere, the user will pick the most attractive and convenient site.

But no matter how convenient such a site is, if it cannot withstand the human factor, the user will leave sooner or later. "A step to the left, a step to the right, and you're shot": who would like that? I want to be able to make mistakes and correct them, not be handed terrible error messages across the whole screen.

Therefore we conduct negative testing. What is negative testing? It is entering deliberately incorrect data. We enter it and watch how the program behaves, whether it gives clear error messages...

But how do you write these tests? Let's see some examples:

1. Function of calculating the root in the calculator.

The first thing that comes to mind is what happens if you calculate the root of a negative number?

But what else can you think of here?

  • The root of nothing: remember the boundary values; we cannot enter a string of negative length, but we can hit the boundary value (a zero-length string)!
  • The root of symbols: check what the system says if you type or copy-paste something non-numeric there. Moreover, split the characters into Cyrillic, Latin, and special characters!
  • The root of the word "four": characters can also be divided into gibberish and "number-like" text. Speaking of such "number-like" input...
  • Let's enter a string that represents a number and take the root of that.
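The list above translates directly into negative tests against the input-handling layer. The calc_root function below is a hypothetical stand-in for the calculator's parsing code, sketched only to show the shape of such tests:

```python
def calc_root(text):
    """Parse user input and return its square root, or raise a clear error."""
    if text.strip() == "":
        raise ValueError("empty input")              # the root of nothing
    try:
        number = float(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")  # gibberish, words, symbols
    if number < 0:
        raise ValueError("negative number")
    return number ** 0.5

# Every bad input must yield a clear error, never an unhandled crash.
for bad in ["", "four", "четыре", "%$#@", "-9"]:
    try:
        calc_root(bad)
        assert False, f"{bad!r} should have been rejected"
    except ValueError:
        pass

# A string that represents a valid number is fine once parsed.
assert calc_root("16") == 4.0
```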

See? There are actually quite a few tests! Separately, a word about "enter the largest possible number": you can try, why not? But that hits the squaring scenario harder than root calculation.

For the root, instead of the largest possible number, enter one whose root (a fractional value) comes out very long and doesn't fit on the screen. What happens then: is it truncated, or does the layout break?

2. Working with the cart in the online store.

Here, again, you can find a numeric field and play with it just as we did with the calculator; the "quantity of goods" field is perfect for that. But then again, isn't it boring: such different applications, yet the same tests?

Remember just two words: different tabs!

Feel it coming? Let me explain. A negative test for removing an item from the cart is trying to remove an item that has already been deleted. And here the options begin for how to do that:

  • Open the cart in two browser tabs. Press "delete" in one, then in the other: an attempt to delete something you yourself have already removed from your own cart.
  • An attempt to delete a product removed by the admin: in one tab, as the admin, delete the product from the store entirely; in the other, as the user, try to remove it from the cart.

And by the way, you can also try to add a product deleted by the admin, or edit its quantity. Or the admin may not delete the product but move it to another category. Nothing should break here! If on deletion we should see a proper error message, then on a move the user should simply be able to keep working.

And what happens if the admin didn't move the product within the store hierarchy (say, to another category because it was filed wrongly at first) but simply corrected and edited its description? Again, nothing should break!
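The "two tabs" deletion scenario can be modeled with a toy cart. This Python sketch (the class and its messages are invented for illustration) shows the behavior we want: a repeated delete gets a calm message instead of an exception:

```python
class Cart:
    """Toy cart model for the 'delete in two tabs' negative scenario."""

    def __init__(self):
        self.items = {}

    def add(self, product_id, qty=1):
        self.items[product_id] = self.items.get(product_id, 0) + qty

    def remove(self, product_id):
        # The item may already be gone: deleted in another tab,
        # or removed from the store by the admin.
        if product_id not in self.items:
            return "This item is no longer in your cart"
        del self.items[product_id]
        return "Removed"

cart = Cart()
cart.add("book")
assert cart.remove("book") == "Removed"

# The second tab repeats the delete: a graceful message, no crash.
assert cart.remove("book") == "This item is no longer in your cart"
```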

And even if we don't have a store, but something else, always think about how you can try to apply the technique of different tabs.

For example, we can create a card for a person, a building, that same book, whatever. Try editing it in two windows: in one, change one field and save; then in the second, change another field and save as well. Or delete something in one window while adding or changing something in the second.

Always try to play with these things; it often ends badly for the program. And even if the team decides not to fix such defects yet because they are not critical, it doesn't matter. The main thing is that we know about them! We provide the information; what to do with it is decided later...

I'd like to give one more example from real practice. Again a web interface where you can click "create" and add a new card. The user adds cards, and every second time the form crashes. Why?

They started investigating, and they figured it out. The user had to create a lot of cards at once (a migration), so he clicked "create" several times while holding Ctrl (open in a new tab), and then went through the tabs filling in and saving each card.

It would seem: where is the negative testing here? The user isn't making conflicting changes to one and the same card; he is creating new ones, entering information into different cards. But here's the thing: the system treated the open "new card" window as a single entity and loudly objected to the user's impudent attempts to cram first one set of data into it, then another.

So, go for it! Open different tabs and find out exactly how your program behaves under conflicting actions.

P.S. This is an excerpt from my book for beginner testers, written to help the students of my school for testers.

Come drop by and see us! ツ

The article has been revised taking into account the criticism and recommendations received in the forum.

With this article I would like to describe my understanding of software testing: a process that is not trivial, as it had always seemed to me, and, as I could never have imagined, very interesting.

I had always wondered what software testing is for. Why hire someone to test a software product if the developer himself can spend a couple of hours on such a trivial task? And why test at all? After all, programmers are smart guys; they write correct code. But

not everything is as simple as it seemed to me.

Having moved from development to testing without much theory under my belt, I spent a long time trying to "break" the software product by feeding it deliberately incorrect input. And, oddly enough, it broke. An error message appeared, and another day was considered well spent.

Later I began to run into the fact that no matter how many tests you run, errors still pop up. Without any idea of what should be fed "as input" to the application under test, and how, the testing process seemed endless. The result: a vicious circle of slipping test deadlines, an angry PM, and developers tired of the "nonsense".

And only much later I identified for myself a clear sequence of actions that must be performed to test software:

  1. Examining the specification. This stage is the most important; it is also called design and/or requirements analysis. Sometimes the name "specification testing" is used; below we will see why "testing". Here you need to read the documentation (specification) for the application carefully.
  2. Smoke testing. At this stage you check whether the system works at all: whether it runs correctly and whether it "complains" correctly when something goes wrong. This is done to understand whether the application is fit for further testing or simply doesn't work.
  3. "Positive" testing. At this third stage you check the application's result when it receives "correct" input.
  4. "Negative" testing. This is the fourth and final stage of initial testing. Here you see how the application behaves when it receives "incorrect" input. If such a case is described in the specification (and it should be), compare the expected result with the actual one.
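Stages 2 through 4 can be sketched as a tiny script. The app function below is a made-up stand-in for the application under test; stage 1, reading the specification, is represented only by the comments stating what "correct" means:

```python
def app(x):
    """Stand-in for the application under test: doubles a non-negative int."""
    if isinstance(x, bool) or not isinstance(x, int):
        raise TypeError("an int is required")
    if x < 0:
        raise ValueError("must be non-negative")
    return x * 2

# Stage 2, smoke: does it work at all?
assert app(1) == 2

# Stage 3, positive: "correct" input gives the specified result.
assert app(21) == 42

# Stage 4, negative: "incorrect" input fails the way the spec prescribes.
for bad_input, expected_error in [("1", TypeError), (-1, ValueError)]:
    try:
        app(bad_input)
        assert False, "should have raised"
    except expected_error:
        pass
```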

So let's take a look at everything in order.

Specification, requirements, SRS.

How to determine when and how the application itself should work, when and how it should "break" (that is, how the system or its module should react to invalid data or incorrect user behavior)? What should be the result of correct working out, under what conditions and input data should the correct working out take place? What should be the result of incorrect processing of the application under test, under what conditions should it take place?

All these questions are answered by the documentation for the application under test. In any case, the answers must be there; otherwise the documentation is incomplete, which is itself an error in the documentation. Note that the first defects can appear already at this stage: a defect in the specification (in the requirements) is just as important for the system as a code defect, and sometimes higher priority. It should also be noted that requirements testing is a full-fledged type of testing that undeservedly receives little attention. The main indicators of successful requirements testing are meeting the criteria of completeness (testability) and consistency of the requirements.

The documentation lets you work out the main steps for checking the application: where and how the application should work, and where it should "break". And, importantly, how it should break: what it should "say" on successful processing, and which error messages can or should appear along the way.

Having grasped all the "wisdom" of the application's requirements and the specifics of how the developer implemented them, you can start testing the final result.

Testing process

This process can be described in the following steps:

  1. Check how the application works when it receives “correct” data as input (to find out “what is good and what is bad” read the documentation);
  2. If everything works and works correctly (i.e., exactly as described in the specification), the next step is to check the boundary values (i.e., where the "correct" data begins and where it ends);
  3. Check how the application behaves when fed data outside the range of acceptable values (again, see the specification).
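The three steps can be illustrated with a hypothetical validator. Assume the specification says an order quantity from 1 to 100 is acceptable (both the function and the range are my assumptions):

```python
def validate_quantity(qty, low=1, high=100):
    """Accept an order quantity within [low, high], per the assumed spec."""
    if isinstance(qty, bool) or not isinstance(qty, int):
        raise TypeError("quantity must be an integer")
    if qty < low or qty > high:
        raise ValueError("quantity out of range")
    return qty

# Step 1: "correct" data well inside the range.
assert validate_quantity(50) == 50

# Step 2: boundary values, where the "correct" data begins and ends.
assert validate_quantity(1) == 1
assert validate_quantity(100) == 100

# Step 3: values outside the acceptable range must be rejected cleanly.
for bad in (0, 101):
    try:
        validate_quantity(bad)
        assert False, "should have raised"
    except ValueError:
        pass
```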

The first and second paragraphs describe a process called "positive" testing. "Positive" testing is testing on data or scenarios that correspond to the normal (standard, expected) behavior of the system under test.

The third point describes the opposite of the "positive" process: "negative" testing. This is testing on data or scenarios that correspond to abnormal behavior of the system under test: various error messages, exceptional situations, abnormal states, and so on.

However, before "positive" and "negative" testing comes "smoke" testing.

The information dictionary gives a fairly clear definition of the term "smoke testing":

  • a rudimentary form of testing a software product after a change to its configuration or to the product itself. During smoke testing the tester checks the software for "smoke", i.e., looks for any critical program errors;
  • the first launch of the program after its critical change or "build".

Testing Priorities

Why is "positive" testing an order of magnitude more important than "negative"?

Let's assume the system is not very tolerant of "bad" input. Is that scary? Often not very. Sooner or later users learn to sidestep the pitfalls and stop performing "dangerous" or "unauthorized" actions, and technical support soon learns which problems users usually hit and hands out advice like "whatever you do, don't leave this field empty, otherwise...".

But none of that will matter if the system does not fulfill its main purpose: if users (customers) cannot solve their business problems, if they do everything correctly and enter good data yet get no result. There is nothing you can advise them in that situation. And so they leave...

That is why "positive" testing is much, much more important than "negative".

However, this does not mean that "negative" tests can be neglected, since the priorities do not stay the same at every stage of the software lifecycle.

Summary

Now, having taken the first successful steps in testing the application and gotten a positive result, you can think of more sophisticated ways to test it; as they say, "the further, the more." It all depends on the required depth of testing and on the desire and ability to test the application. Naturally, the four stages described above do not cover the full testing cycle of an application, but they are mandatory for initial testing.

Aaand... this is the last entry in the series! It is the shortest and simplest, and it consists almost entirely of real stories. Plain funny ones, where possible. There is even a video, shot specifically for this post right at the time of writing. Fresh stuff. Unfortunately, I didn't think to take a screenshot of the message about the YouTube client crashing; it would have fit in well. It crashed right while I was uploading the video embedded in this article. Okay, have my lock screen instead.

At the start of testing, whether it is a new project or one that should have been buried long ago, it is generally clear where to begin (unless, of course, none of the links in the chain worked by the time testing started). Usually testers read the requirements and other documents with non-Russian names, such as "BRD", "SRS", and "User Story", and figure out how to write test cases that check all these documents are satisfied. That part is clear, it lies on the surface, and there is no point dwelling on it.

But there is also the behavior of Android itself, which sometimes not only analysts but even architects and some developers do not know about. Quite a few such features surface only on custom builds. And I am not talking about stress scenarios, when memory runs out or the battery is suddenly pulled (I once met someone indignant at the GNU/Linux terminal for not showing the password while typing: his keyboard was flaky and he could not tell whether he was entering the password or the keyboard had died again), but about default Android customization behavior and even behavior built into AOSP. That is, regular system behaviors that can adversely affect the product under test. The so-called negative scenarios.


I will briefly describe some negative scenarios and try to give specific examples.

  • Communication problems. The simplest example is airplane mode (Fly Mode). The Google Keep note-taking app, for example, was either not tested in flight mode, or the bugs found did not block the release. The problem is very easy to reproduce:
    • Turn on flight mode
    • Tap on the line Take a note ...
    • On the screen that appears, perform the Delete action
    • Enjoy frame-by-frame animation of the movement of previously saved notes


Besides airplane mode, there is the unstable connection with packet loss, the very slow connection, closed ports that your application works through, and the Wi-Fi connection with no Internet access.
  • No access to the app store. For example, to test in-app purchases, the build must be uploaded to the store into a special section. If it isn't there, or a different version is there (meaning the version code, the internal version), you won't be able to test your purchases. And if a user flies off on vacation to China, where connectivity to Google Play is very sad, the license they paid money for must not drop off.
  • App behavior with restricted permissions, both when the target API level is below 23 (that is, below Android 6) and when it is 23 or higher. In the first case the application is legacy, but permissions can still be revoked from it. In the second case it will, moreover, start receiving new exceptions it never saw before.
  • Battery saving mode. This means Doze and App Standby, as well as the alternative implementations of alternatively gifted manufacturers such as Samsung (and Sony's STAMINA in its first version), where everything is implemented terribly wrong but you have to live with it. It is acceptable for an application to miss a scheduled check, skip sending statistics, or delay a data update. It is not acceptable to crash, freeze, or never run scheduled tasks at all.
  • Changing the date, time, or time zone. People fly on holidays and business trips to countries in other time zones. If the plane crosses the 180th meridian, the user may well land "in yesterday" from the application's point of view.

    A real failure story. Parental Controls appeared in KIS for Windows in version 7.0, in 2006. The product also had a built-in news agent at the time, not at all like the current one; it was supposed to deliver various threat news, "what's new" items, and the like. The release version already installed by users had a bug: if you set the Windows time back to before the start of the license, protection was disabled. Strictly speaking, non-administrators cannot change the time, but 10 years ago companies paid little attention to user rights and every accountant was a local administrator. One of our clients set up parental controls in his small office so that users could not surf the Internet except for permitted sites. He configured it draconically and password-protected the settings. Everything worked fine until the built-in news agent delivered the news that it was time to upgrade to the new version 7.0.1, where, among other things, an error had been fixed that disabled protection when the time was set back before the start of the license. A user read the news, rejoiced, and disabled the protection by the suggested method. A few days later his story landed on the then-popular bash.org.ru. Since then, news of that kind is no longer sent to users.

    And don't think that others don't make such mistakes. Remember the story with iOS that happened this year, even though only 3 months of the year had passed (note: yes, this is a rather old lecture; I have long wanted to post it). Phones were bricked if the time was moved close to the beginning of Unix time. And how did Apple fix this bug? They forbade moving the time earlier than the critical date, which was NOT a fix for the underlying problem. Attackers began setting up Wi-Fi hotspots with names typical of places like McDonald's and transmitting fake time through them: devices connected to such hotspots automatically, discovered NTP servers, and requested the time from them. Apple simply hadn't taken care that iOS ignore fake NTP servers. And so the phones were bricked all over again.
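A common defensive pattern against date, time, and time-zone changes is to store timestamps in UTC and convert only for display, so a device-zone change cannot reorder or invalidate events. A minimal Python sketch (the timestamps and zones are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Store the moment an event happened in UTC.
created_utc = datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc)

# The user flies to another zone; only the *display* zone changes.
tokyo = timezone(timedelta(hours=9))
honolulu = timezone(timedelta(hours=-10))

# Rendered locally, the wall clock differs...
assert created_utc.astimezone(tokyo).hour == 21
assert created_utc.astimezone(honolulu).hour == 2

# ...but the underlying instant is the same, so ordering logic stays safe.
assert created_utc.astimezone(tokyo) == created_utc.astimezone(honolulu)
```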

  • Changing the system locale or interface language. The user has the right to change the system language a hundred times a day, and no one can forbid it. The tester's task is to make sure the product, first, reacts to this correctly (switches to the desired language automatically) and, second, does not crash at all. Besides the locale, the user may change typefaces and font sizes, choosing ones comfortable to read. The application's layout should not fall apart when the user makes reasonable changes.
  • Tapjacking. I mentioned this in the very first lecture. As a reminder, it is the interception of taps: an activity of application A receives the taps while the user was trying to reach application B; application A's activity is simply made transparent. It looks like an insecure design decision on Google's part, but this is exactly how brightness and color-temperature control apps work on devices. Users find such applications convenient, and since Android lets them work without root, this must be taken into account. For example, if your application uses a code or, say, a picture for authorization, you must protect against tapjacking, for example by setting filterTouchesWhenObscured to true.
  • Calling an Activity directly. I have spoken about this before, but let's repeat it. An activity is one of the entry points into an application. It is perfectly acceptable to have several activities that external applications can call (you never know what for); these are exported activities. But it may be that calling a particular activity requires passing parameters to it, and a third-party application won't pass them. At best the user sees some broken screen; at worst your application crashes. So don't expose more than you have to, so to speak. The exported flag can end up true by default, so if you are sure that external applications should not call an activity, set it to false explicitly. And the tester must check how the application behaves when its activities are called from other applications.
  • The system killer... Officially it is called the OOM Killer: the Out Of Memory Killer. The system starts killing processes when the application the user is interacting with at that very moment does not have enough memory to run. Of course, the killer is not stupid; it follows certain algorithms when choosing targets (for example, the system will easily kill a background service but will spare a foreground service until the last moment; a foreground service is usually one that draws its icon in the notification area, such as a music player), but the essence is as described. As a rule, on modern devices the OOM Killer is not very fierce: memory now starts at a gigabyte and goes up. But this does not apply to games. Games are so heavy and consume so much memory that no matter how much you install, it will still be too little. And in general, the more RAM is put into devices, the fatter applications become, with games the fattest of all. They will, however, remain just as dull and unnecessary.

    The bottom line is that your product is guaranteed to be killed by the OOM Killer sooner or later. Your job is to make sure that this breaks nothing and that the product comes back up as soon as the system kills the memory hog (if the product is required to come back, of course). And the system will do that quickly: it will not let such a glutton live in the background.
    Another takeaway is that your application shouldn't be that memory hog either. Any leaks must be caught by the developer before release. Your performance tests should definitely include runs where monkey generates a ton of events. If the code is written well, the garbage collector will free memory and the system will not kill the application's process. If everything is bad and the application leaks from every crack, the system will shoot it. Of course, it will start up again afterwards and the leaked memory will be gone, since killing the process frees everything, but if monkey showed the application leaking within 15 minutes of testing, the same leaks will show up for the user too, just later.
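    One classic source of the unbounded growth a monkey run exposes is a cache that never evicts anything. A minimal plain-Java sketch of the fix, a bounded LRU cache built on `LinkedHashMap`; the capacity of 2 is purely illustrative:

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // A cache that evicts its least-recently-used entry instead of
    // growing until the OOM Killer takes notice.
    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public BoundedCache(int maxEntries) {
            super(16, 0.75f, true); // access-order = true gives LRU behavior
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // evict instead of leaking
        }

        public static void main(String[] args) {
            BoundedCache<Integer, String> cache = new BoundedCache<>(2);
            cache.put(1, "a");
            cache.put(2, "b");
            cache.put(3, "c");               // pushes entry 1 out
            System.out.println(cache.size());         // 2
            System.out.println(cache.containsKey(1)); // false
        }
    }
    ```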

  • Big data... If your application works with user data, be prepared for the user to feed it something very large without a second thought. For example, as a user I fully expect the Youtube client to upload my video no matter how heavy it is. I expect the archiver to handle an archive of any nesting depth that weighs five times more than all the available RAM of the device. This is normal. If someone tells you that "no one will ever feed it such large files," then most likely the speaker is simply not a very good developer.
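    The standard defense is to process input as a stream of fixed-size chunks rather than loading the whole file into memory. A minimal sketch, with an illustrative 8 KiB buffer; a `ByteArrayInputStream` stands in for the user's huge file:

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Counts the bytes of an arbitrarily large input while keeping
    // memory use constant: only one buffer is ever allocated.
    public class ChunkedReader {
        public static long countBytes(InputStream in) throws IOException {
            byte[] buffer = new byte[8192];
            long total = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                total += read; // memory use does not depend on input size
            }
            return total;
        }

        public static void main(String[] args) throws IOException {
            byte[] data = new byte[100_000]; // stands in for a huge user file
            System.out.println(countBytes(new ByteArrayInputStream(data))); // 100000
        }
    }
    ```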
  • The most stupid, and therefore most ridiculous, situation that causes an application to misbehave, up to and including a crash, is simple screen rotation... How many such crashes were caught at the testing stage! Especially when a popup appears: on popups an experienced tester immediately starts flipping the phone! It also happened that a whole team tested a product only on phones, where screen rotation was locked for the application. Then, when tablets were brought in, it turned out that on tablets the application crashed on almost every screen. All because of fragments: the tablet and the phone had different interfaces, and the misuse of fragments led to a sad outcome.
  • Double and triple taps... For some reason some people believe that no one taps interface elements multiple times. But no! I do! And not because I'm testing, but because I may be holding an old Android 4.0 phone that can barely crawl along, and its screen is not very responsive either. It may be unclear whether a tap registered or not, and you end up with double taps. Not double in the gesture sense (made within a fraction of a second on purpose), but two or more taps landing while the application was "thinking", for example while building a long list.
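    The usual cure is to debounce: accept only the first tap in a window and ignore repeats that arrive while the app is still busy. A plain-Java sketch with timestamps injected so the logic is testable without Android; the 500 ms window is an assumption, not a value from the text:

    ```java
    // On Android, a click listener would call accept(SystemClock.uptimeMillis())
    // before running its handler; here times are passed in explicitly.
    public class TapDebouncer {
        private final long windowMillis;
        private long lastAcceptedAt;
        private boolean seenAny = false;

        public TapDebouncer(long windowMillis) {
            this.windowMillis = windowMillis;
        }

        // Returns true only for the first tap in each window.
        public boolean accept(long nowMillis) {
            if (seenAny && nowMillis - lastAcceptedAt < windowMillis) {
                return false; // a double/triple tap while the app was busy
            }
            seenAny = true;
            lastAcceptedAt = nowMillis;
            return true;
        }

        public static void main(String[] args) {
            TapDebouncer d = new TapDebouncer(500);
            System.out.println(d.accept(1000)); // true
            System.out.println(d.accept(1200)); // false: too soon
            System.out.println(d.accept(1600)); // true: window elapsed
        }
    }
    ```

    The negative test mirrors the bullet above: hammer the same button on a laggy device and check that the action fires once, not twice.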
  • One of the handy features of Android 6, when insufficiently tested, leads to terrible results, to the point that some applications explicitly opt out of it, which Google, for now, allows. That feature is backup and restore from backup... It is not new, by the way: backup appeared back in Android 2.2, but I don't know of a single application that used that version of the feature.
    By itself, creating a backup and restoring from it is not scary. Problems start if the product binds itself to a device ID or an installation ID. Even within a single device this can cause trouble, and restoring from backup is allowed by Android itself onto any device running Android 6: the system backs up applications from device A, the user buys device B and restores them all onto it. Now these applications run simultaneously on two devices whose identifiers differ. If this is a client-server application where all communication is built on tokens, a lot of problems arise here.

    A real-world example is the excellent Talon for Twitter app. I haven't re-set up my device in a very long time, so I don't know whether the author has fixed this bug. When I told him about it, he explained why the error occurred (although I already knew why!) but did not say whether he would correct the behavior. This application has a kind of setup wizard that walks through the capabilities of this Twitter client, requesting the necessary permissions along the way. Everything by the Google guidelines, straight by the book. When the setup wizard finished and the necessary permissions were granted, a flag was set so that the setup would not repeat on every launch. The application was backed up together with this flag, and restored together with it too. But by default, for all new-style applications (i.e. targetApi level >= 23), permissions are denied. You launch the application and it cannot work normally: there is no runtime check for permissions, because all the checks lived in the initial setup wizard, which did not run since the flag said "the wizard has already been completed." On top of that, after launch the client did not load tweets, showing an error from Twitter itself: the restored token was not valid for the new installation, and a new one had to be requested, and that request was also made in the setup wizard, at the very first step!
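    A robust client can guard against exactly this pitfall by re-validating restored state on every launch instead of trusting a "wizard completed" flag. A plain-Java sketch of the idea; all names (`storedDeviceId`, `token`) are hypothetical, not Talon's real internals:

    ```java
    // On startup, compare the device id saved in the (possibly restored)
    // settings with the id of the device we are actually running on.
    // A mismatch means the state came from a backup of another device,
    // so the session must be invalidated and setup re-run.
    public class RestoreGuard {
        public static boolean sessionValid(String storedDeviceId,
                                           String currentDeviceId,
                                           String token) {
            if (token == null || storedDeviceId == null) {
                return false; // fresh install: run the setup wizard
            }
            return storedDeviceId.equals(currentDeviceId);
        }

        public static void main(String[] args) {
            System.out.println(sessionValid("device-A", "device-A", "tok")); // true
            System.out.println(sessionValid("device-A", "device-B", "tok")); // false
        }
    }
    ```

    The same check covers the permissions problem: if the session is invalid, the wizard runs again and re-requests permissions instead of assuming the flag is still meaningful.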

  • In Android, starting (if memory serves) with version 2.2.1, it became possible to move part of an application's data to the memory card... This capability was gradually cut back, until in Android 6 Google gave it a second life, improving it significantly. If the device manufacturer has not broken the AOSP behavior in its custom firmware, then as soon as Android detects a memory card it asks the user to choose whether the card will sometimes be removed or not. If the user says they do not plan to remove it, Android formats the card into its own file system and attaches it as part of the main storage, allowing applications to be installed there. And here are some pitfalls:
    • If the application uses hardcoded paths, then all is lost. But this is such bad form that I hope no one does it.
    • If the application asked the system for the paths at first start and cached them forever, the result will be exactly the same as with hardcoded ones.
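    The safe pattern is to resolve the path through the system every time it is needed. On Android that would mean calling something like `context.getFilesDir()` on each use; in this plain-Java sketch a `Supplier` stands in for the system call, and the paths are made up for illustration:

    ```java
    import java.util.function.Supplier;

    // Resolves the data directory on every access instead of caching it,
    // so the app survives the storage location changing underneath it
    // (e.g. when the card is adopted as internal storage).
    public class PathResolver {
        private final Supplier<String> systemPath;

        public PathResolver(Supplier<String> systemPath) {
            this.systemPath = systemPath;
        }

        public String dataFile(String name) {
            return systemPath.get() + "/" + name; // no cached copy
        }

        public static void main(String[] args) {
            String[] mount = {"/data/user/0/app"}; // mutable, like real storage
            PathResolver r = new PathResolver(() -> mount[0]);
            System.out.println(r.dataFile("settings.db"));
            mount[0] = "/mnt/expand/uuid/user/0/app"; // card adopted by the system
            System.out.println(r.dataFile("settings.db")); // picks up the new path
        }
    }
    ```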
  • As applications are updated, users receive new versions from the application store and install them on top of the existing one, so checking the update to a new version is a mandatory scenario. Normally everything should be fine, but when you have to support many specific devices with their specific behavior, the settings format can change. This almost never leads to crashes: if the code is written even half decently, it handles the exceptions. But simply losing some of the settings is bad enough. For example, we had a situation where users spent months building an anti-spam list, blocking the numbers of taxi services, banks, and debt collectors, and then, after updating to a new version, all the lists were lost. Precisely because the settings format had changed, and at this exact point the old settings were not read by the new version of the product.
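    What was missing in that story is an explicit migration step: the new version should detect the old format and convert it, not silently ignore it. A minimal sketch; the keys, the CSV format, and the version numbers are all illustrative assumptions:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Converts settings from the old format to the new one, preserving
    // data the user spent months building (the anti-spam list from the
    // example above).
    public class SettingsMigrator {
        public static Map<String, String> migrate(Map<String, String> old) {
            Map<String, String> fresh = new HashMap<>(old);
            // Old versions stored the block list under "blocklist_csv";
            // the new format expects it under "blocklist.v2".
            String legacy = fresh.remove("blocklist_csv");
            if (legacy != null && !fresh.containsKey("blocklist.v2")) {
                fresh.put("blocklist.v2", legacy);
            }
            fresh.put("settings_version", "2");
            return fresh;
        }

        public static void main(String[] args) {
            Map<String, String> old = new HashMap<>();
            old.put("blocklist_csv", "+100,+200");
            System.out.println(migrate(old).get("blocklist.v2")); // +100,+200
        }
    }
    ```

    The matching test scenario is the one the bullet calls mandatory: install the old version, populate settings, update on top, and assert nothing was lost.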
  • Besides updating the product to a new version, there is a rarer but much more hardcore option: updating the firmware itself to a new version with the product already installed. I'll give two examples, one of which I have already told.
    • The routine Security Update for Android 5.1 that went and disabled long-standing OS features the application relied on.
    • After updating from Android 4.4 to Android 5.0, the paths of installed applications changed. Previously, installed applications lived at one familiar path (/data/app/com.package.name.apk). One of our products, for internal security purposes, checks which path the protected application is reachable at and whether it has changed. The update to 5.0 arrived and the absolute paths of already installed applications changed (/data/app/com.package.name/base.apk). The product raised the alarm that the app had been compromised. We fixed it, of course.
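    The fix for that false alarm can be sketched as a check that accepts every install-path layout the OS is known to use, rather than one hardcoded string. The package name and the exact patterns are illustrative; real layouts vary further by OS version and may include numeric or hashed suffixes:

    ```java
    import java.util.regex.Pattern;

    // Accepts both the pre-5.0 single-APK path and the 5.0+ per-app
    // directory layout for the same package, with an optional numeric
    // suffix that some OS versions append.
    public class ApkPathCheck {
        private static final Pattern KNOWN_LAYOUTS = Pattern.compile(
            "/data/app/com\\.package\\.name(-\\d+)?(\\.apk|/base\\.apk)");

        public static boolean looksLegit(String path) {
            return KNOWN_LAYOUTS.matcher(path).matches();
        }

        public static void main(String[] args) {
            System.out.println(looksLegit("/data/app/com.package.name.apk"));      // pre-5.0
            System.out.println(looksLegit("/data/app/com.package.name/base.apk")); // 5.0+
            System.out.println(looksLegit("/data/app/evil.apk"));                  // false
        }
    }
    ```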
Well, that's all for now. I am now writing a post about problems specific to particular Android versions, particular firmware, and particular devices. So stay tuned! Some of them, though, you already know: they are described right in this series of posts.
Bye Bye!