Comments by "MrAbrazildo" (@MrAbrazildo) on the "Continuous Delivery" channel.

  7. 4:20, I can't even remember if I ever had a serious bug using pointers. Here go my tips for everyone who tends to have problems with them, to get rid of this issue once and for all:

     - If you have to allocate memory, don't do it directly: use containers from the standard library (STL). They hide their size from you and manage it automatically.

     - When traversing those containers, use their own iterators (OO pointers). The member f()s 'begin' and 'end' provide them for you. Just keep a model like this:

           for_each (container.begin() + offset, container.end() - premature_end, some_algorithm);

       with offset + premature_end <= container.size(), and both >= 0. If you just want to run all the way (the default is a copy, but you can take a reference with &):

           for (auto &a_var_not_ptr : container)
               some_algorithm (a_var_not_ptr);

       In none of these cases do you deal with pointers directly.

     - Reallocations may invalidate previous iterators:

           std::vector <Type> container;
           auto it = container.cbegin() + K;
           container.push_back (M); //May invalidate 'it'.
           auto x = *it;            //May crash.

       There are 2 main solutions for this:

       a) Just "refresh" the iterator after the push_back:

           container.push_back (M);
           it = container.cbegin() + K; // ≃ F5 in a webpage.
           auto x = *it;                //Guaranteed to work.

       b) Recommended: reserve memory right after the container's creation:

           //Chandler Carruth: "We allocate a page each time. This is astonishingly fast!"
           container.reserve (1024);
           auto it = container.cbegin() + K;
           container.push_back (M); //Just adds M, no reallocation.
           auto x = *it;            //Ok.

       There won't be a new reallocation as long as container.size() <= container.capacity(). This is much faster and generates much shorter machine code.

     - If you do need to allocate directly, use smart pointers instead, to do that for you behind the scenes - as well as free the memory.

     - If, for any reason, you need a C-like (raw) pointer, wrap it inside a class, together with a variable for its size, both hidden (no Java setter methods!), and a destructor, to automatically free the memory of a specific object once that object ceases to exist. In this case, you will have to write a copy constructor for it, to avoid "memory stealing": 1 object copies the content of another, and then the other's destructor frees it. If you already wrote that class and are in a hurry to use it before writing this constructor, you can still be safe by temporarily deleting its copy operations (each_of_their_declarations = delete): if any copy is made, it will raise a compile-time error.

     - I read in the GCC (compiler) documentation that the OS may not provide a pointer aiming before the container. So, if you intend to use a reverse iterator, keep that in mind.

     - Just like an index, a pointer keeps its step as large as the size of the type it points to. I once had code running on Windows and Linux, both 32 bits, using 24-bit (+ 8-bit alpha) BMP images. They were read through pointers of type 'long', the max/platform size, to stay portable for the future, automatically growing with the pixel size and the OS. When I migrated to Linux 64 bits, it started to get unstable on Linux only. It took me a couple of minutes to figure it out: the pointer step had become twice the intended size. Easy.

     - Let's say a f() received a pointer to an object of class 'animal', and it also accesses the 'dog' class supposedly in it, a class that inherits from animal. But this may be just a pointer to 'animal', not dog + animal. To make this downcast, use dynamic_cast: it makes a runtime check, to see if there's "ground for the pointer to land on".

     - Above all things, watch out for undefined behaviour. 1 of the many tricks C++ uses to get faster is to not bother the compiler about the order in which it must execute things. There's operator precedence, but as long as that is respected, any order is accepted. So, don't do messy things like this:

           pointer[index++] = ++index*5;

       Sure, it will execute [], the multiplication and =, those 3 in this order. But which 'index' will it use 1st? You must get used to taking a special look at too-compacted commands. Instead, unroll the command in the order you are thinking (read-only instructions are 100% safe, even for multithread):

           ++index;
           pointer[index] = index*5;
           index++;

     This is about all the universe of a pointer, all the tricks it might play on you. If you keep yourself lucid about these topics each time you deal with pointers, they will never be more-than-minutes-to-solve bugs for you.

     PS: a pointer is much faster than an index, because it memorizes "where it is", while an index goes all the way from the beginning each time it is used.
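     A minimal runnable sketch of the iterator-invalidation tips above (variable names are my own, for illustration): taking the iterator after reserve() keeps it valid across a push_back, because the buffer is guaranteed not to move while size() <= capacity().

     ```cpp
     #include <cassert>
     #include <vector>

     int main() {
         std::vector<int> v;
         v.reserve(1024);             // pre-allocate: no reallocation until size() > capacity()
         v.push_back(10);

         auto it = v.cbegin();        // iterator taken after reserve
         const int* before = v.data();

         v.push_back(20);             // capacity is large enough: no reallocation
         assert(v.data() == before);  // the buffer did not move...
         assert(*it == 10);           // ...so the old iterator is still valid

         // Without reserve, a push_back may invalidate iterators; the safe
         // pattern is to "refresh" the iterator after the push_back:
         std::vector<int> w{1, 2, 3};
         w.push_back(4);              // may reallocate
         auto it2 = w.cbegin() + 1;   // refreshed afterwards
         assert(*it2 == 2);
         return 0;
     }
     ```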
  35. I guess by now everybody understood TDD is good. You should now become more technical. Let's say I have a f() that takes several steps to reach a goal. It has several steps because they are private to it; they would not make sense outside of it - it could be prone to errors if they were called from outside. And that f() also updates variables in other classes, because otherwise this would have to be scheduled for later, before certain other actions take place - which would be prone to errors too, if this schedule failed to complete or, in worse cases, failed to follow a certain order. From what I understood so far, TDD would demand that a f() written like that be split into several small f()s. So, to avoid a disaster, I see 2 possible solutions:

      a) Each f() like this becomes a class, with all those private steps becoming private f()s of it. Tests would have plenty of access to those f()s, most of them not tagged as 'const'. Fortunately, tests are small and prone to be implemented at compile time. So tests are unlikely to cause bugs - and easy to fix if that happens. Pros: each test would be executed only 1 time. Cons: kind of a "risky" design.

      b) Any f() like this stays the same, but its steps become lambdas, in order to be tested, and this testing would be done inside that "big f()", each time anyone calls it. To avoid repeated test execution (hence consuming performance), they would only communicate on failure. Since they are compile-time (according to your examples), the optimizer could solve them, realize that they work - meaning they would do nothing - and then decide to eliminate them at compile time, "because they are useless" (unreachable code). For the tests that flag an error, the programmer should fix them right away, so that they too become potentially "useless" - and thus get eliminated too. Pros: the design stays as safe as before, despite the tests' intrusion. Cons: tests would be processed several times, unless the optimizer decides to wipe them out.
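      Option (b) might look like this minimal sketch (the function, its steps, and all names are hypothetical, just to illustrate the idea). Each step is a lambda checked in place with assert; the checks only "communicate" on failure, and in a release build (NDEBUG) they compile away entirely, much like the optimizer elimination described above:

      ```cpp
      #include <cassert>

      // Hypothetical "big f()": computes a discounted price in several steps.
      int discounted_price(int price, int percent) {
          // Private steps, kept inside the f() as lambdas so they can be tested in place.
          auto clamp_percent = [](int p) { return p < 0 ? 0 : (p > 100 ? 100 : p); };
          auto apply = [](int base, int p) { return base - base * p / 100; };

          // In-place tests: silent when they pass, abort on failure.
          // With NDEBUG defined, these asserts vanish from the build entirely.
          assert(clamp_percent(-5) == 0);
          assert(clamp_percent(150) == 100);
          assert(apply(200, 50) == 100);

          return apply(price, clamp_percent(percent));
      }

      int main() {
          assert(discounted_price(200, 50) == 100);
          assert(discounted_price(100, 150) == 0); // percent clamped to 100
          return 0;
      }
      ```

      The trade-off matches the comment: the checks run on every call unless the compiler removes them, but the f()'s design stays untouched.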